1K colours on CGA: How it's done (reenigne.org)
227 points by fcambus on April 10, 2015 | 42 comments



The reason it has taken the demoscene so long to pull this off is the same reason that the PC beat its competitors in the early 1980's.

From the beginning the PC wasn't defined by a single hardware implementation, but by a set of capabilities and interfaces. There were always "compatible" PC knockoffs and "compatible" extension cards that weren't really 100% bug-for-bug compatible. That meant that software writers had to stick pretty much to the approved interfaces. There was no sense trying to push the hardware beyond the intended capabilities because even if you succeeded it probably wouldn't work on the bulk of the machines out there with different variations of the hardware.

It took a long time even to see things like scrolling games on EGA cards, even though the scroll registers were designed right into the chip and documented as such. Why? Because there were no BIOS calls to implement this, and most developers took the limits of BIOS as the limits of what they could count on working. It took risky shareware developers (Carmack and Romero) to prove to everyone that really the EGA chip-level interface was a usable target.

So because of the prevalence of the mostly-compatible compatibles, the PC software market had another layer of abstraction built in. Software should work on any machine providing the documented capabilities and interfaces, not just on one exact hardware setup. This allowed for a cycle of compatible upgrades where software could be used across hardware generations, and hardware could upgrade with sufficient backward compatibility.

That upgrade cycle was missing for machines like the Commodore 64, which were defined by a single hardware implementation. For the Commodore, developers always felt free to push beyond the intended limitations of the hardware, with confidence that if it would work it would work everywhere. That fed a vibrant demoscene, but closed off any possibility of a hardware/software upgrade cycle and eventually led to the death of the computer line.


Hence the quote: "there is MS-DOS compatible, PC DOS compatible, and Flight Simulator compatible, in increasing order of difficulty."

Later, 'Jet' served as a compatibility test as well. I was always amazed because CP/M machines were all different, more or less by design; it was the sheer quantity of original 5150 PCs that made it worth specializing for that target. I later owned an Epson QX-10 with an "MS-DOS" card, and being merely MS-DOS compatible, it ran hardly any software at all!


This is why, despite growing up a Commodore kid, I've never really sat and wondered about the "could have been" the way some people have... even if Commodore had survived, they would have been in ever-growing trouble with their architecture, and there's no reason to believe that whatever they put out in, say, 1995 would have been particularly better or worse than what we actually got, because it would have been pretty much a full rewrite anyhow.


Might they have fared exactly like Apple in the 90s, trying to push an 80s architecture and system in 1999?


And part of my point is precisely that in the subsequent rewrite, the OS-level specialness of Apple is now essentially gone. (Apple itself of course has a distinct identity.) While I do not necessarily believe where we are today is completely inevitable, I do also tend to think that it's an awfully strong local optimum. (People often come up with radical ideas for alternate approaches but IMHO have a hard time proving alternate approaches are generally better, as opposed to better for a very particular use case.)


Why didn't this work out for high-end Android handsets? They are in a similar situation.


By 'this' do you mean why Android didn't completely beat the iPhone? For a similar reason to why the Mac is making a comeback: the hardware market has changed.

In the 80s and 90s, computers were very expensive and it was common to buy only exactly what you needed. PCs excelled at this. Everything was an option: video card (MDA, CGA, EGA, VGA), RAM, hard drive, hard drive controller, serial port controller, sound card. If you later needed more capabilities, you upgraded whatever was necessary.

As computers became cheaper, upgrading began to make less financial sense. Upgrading a component and paying for it to be installed cost almost as much as a new computer. There was also less to upgrade: onboard sound, networking, etc. were good enough. The only things left to upgrade were the CPU (which usually required a new motherboard) and the video card. Upgradability, originally a huge selling point, became a niche feature.

As integration continued, size, weight, and power consumption all became more important than upgradability and customization. Hardware-wise, all Android phones and the iPhone are pretty much the same.


I wonder how long throwing away 'computers' after a year or two of use will remain common for us in the first world.

Does technology get the credit for the difference between the 80s and now, or is it some mix of the current balance of geopolitics, interconnected global trade, wage arbitrage that still allows slavery in the supply chain, and near-zero interest rates (in Europe's case, negative)?

Maybe the end game is biodegradable 'computanium' that is killed and reborn daily, right before it becomes obsolete?

It still feels very strange and disturbing to throw away something less than a decade old, much less two years old.


I tend to retire my machines to less demanding tasks they're still suited to. A Mac Pro 2006 (with a Cinema Display from 2000) is still going strong as a studio music computer running Logic Pro X (essentially a Hackintosh on Mac hardware, as Apple stopped supporting them). A MacBook from 2005 drives a TV with streaming video (although it needs retiring now).

It is harder with smartphones, but I have older phones serving as guest map and GPS devices.


It is only common in countries where people go for contracts, like the US.

In many countries people pay full price for handsets and use pre-paid cards. You don't see them replacing their mobiles every two years.


> Maybe the end game is biodegradable 'computanium' that is killed and reborn daily, right before it becomes obsolete?

On the server side, from the client's perspective, that's effectively what "the cloud" really is.


Android has too many other stumbling blocks preventing this sort of social innovation from happening - in Android's case, I believe, there are far, far too many clones with variance in their implementation of the standard, which makes it impractical to target the broader platform for such tricks.

On the one hand. On the other hand, getting newer Android releases running on, for example, the G1DEV phone is an example of how this ethos has persisted, even on Android.


thanks for this, fascinating


It's impressive they got this many colors out of it! I still remember the classic CGA Pink+Cyan color scheme[1]; those always seemed like a particularly garish choice for "default" colors.

[1] Example screenshot: http://s.uvlist.net/l/y2008/04/49443.jpg


I just got a tattoo done in CGA magenta/cyan/green (a composite frame from Sierra's Black Cauldron). This turns my whole world, or at least arm, upside down!


Very cool! Got a link to a pic? I also have some oldschool "pixelated" tattoos, though in greyscale only.


This had as much to do with the quality of the displays as it did with the adapter. White, black, cyan and magenta all have significant contrast between them, and many machines with CGA were still hooked up to B&W and greenscreen monitors. If you'd designed your graphics around Mode 0 (the yellow, green, red, black one), it would look really terrible on a greenscreen. Even full colour displays (often TVs) would look miserable in the other colour modes, as they'd bleed much more than displays do now. The white/cyan/magenta/black was also really easy to build as a secondary palette for 16-colour EGA games. I remember a lot of the Sierra adventure games at the time offered a CGA mode, and while it never looked nearly as good as the EGA mode, it was at least playable.


Microsoft deployed a black, white, cyan, and magenta color scheme for marketing its Surface back in 2012-2013. I boggled at the logic behind this -- were they somehow trying to evoke nostalgia for pukey CGA displays? Because we don't really remember those fondly...


"Neo 80s" has been a popular aesthetic for the past few years, so I wouldn't fault Microsoft too much for using bright neon colors.


Feels like old comics printed in cheap magazines. Black, white and two tones. I kinda like it.


I am utterly amazed at the amount of hard work and programming mindshare expended on, let's face it, a very unimportant technology.

Bravo to you all for showing us that old-school hacking is alive and well!


What boggles my mind is the fact that it took enthusiasts until now to uncover the potential of the platform, as compared to, say, the early 1980s, when people were paid to produce entertainment software for the IBM PC.

Makes one wonder what kinds of fantastic things we might achieve with today's technology, but probably never will, as we move on to building tomorrow's technology instead.


IIRC there was a post by Steve Wozniak here a few months ago where he stated he still occasionally gets ideas for how he could have improved the code for the original Apple computers.

Edit: Found one article that quotes Woz on that: http://www.cultofmac.com/302087/38-years-later-woz-still-thi...


Pushing those old systems to their limit (and beyond) was a lot of fun and very instructive for those of us just getting into programming. On my Atari 800, I recall using 6502 assembly to hook the horizontal and vertical blank interrupts to change the color lookup tables and character definition maps on the fly, giving me the ability to greatly increase the number of colors displayed and do crude character-set animation.


Reminds me of this demo that uses a single microcontroller to generate a PAL composite signal, with a few tricks to increase the colours available:

http://www.linusakesson.net/scene/phasor/


Strange question; why didn't many games in this era exploit this? I'd guess that the knowledge just wasn't as easily shared? Seems these systems were capable of pretty amazing things, but those things were frequently overlooked.


Early games did take advantage of CGA composite color hacks to achieve 16 color graphics. E.g.:

http://en.wikipedia.org/wiki/File:Microsoft_Decathlon_RGBvsC...

http://en.wikipedia.org/wiki/File:KQ_CompVsRGB.png

However, the era of CGA composite color graphics was very short-lived. By the time the IBM-PC started to become a popular home computing platform, 16-color Tandy/EGA graphics and RGB monitors were the norm. Any composite color hacks would be rendered obsolete and unusable on these later machines.


If I'm reading correctly between the lines, they are pouring a ton of CPU into this effect. I wouldn't be surprised if you told me what we saw in the demo is literally almost all this effect can do, and there's not a lot of power left over for actually running a game.

Same thing for all the things you see Commodore 64 demos do... by the time you're creating the awesome graphical effect there's often not a lot left over for the game itself. (Though there are some interesting exceptions... there appear to be some surprisingly high-quality side-scrolling platformers now based on "bad lines", which are both explained and then the platformers shown at https://www.youtube.com/watch?feature=player_detailpage&v=fe... . Though the entire presentation is fascinating and shows a few other demoscene effects in real programs.)


The CPU usage isn't that bad compared to some of the other things we did in the demo - with some help from interrupts it could be done with maybe 20% of CPU. There is also a much easier ~500 colour variant which doesn't take any CPU time at all once set up.

I think the real reason it wasn't discovered earlier is that most CGA PCs were not connected to composite monitors or TVs (people who could afford the big expensive IBM machine could generally afford a dedicated digital monitor as well). A few games used 160x200x16 composite but even those generally had modes for RGBI monitors as well (which wouldn't work so well with the 500/1K colour modes, though I guess there are the dithering characters 0xb0 and 0xb1 which might have worked). These +HRES modes also suffer from CGA snow, which might have been a deal-breaker.


Cool, thanks for the details!


Well they were able to use the leftover CPU slop to transform, shade, and rasterize a crude 3D model in real time, so it couldn't have been all that bad...


The 3D part used a more conventional 16-colour mode. Though it's a testament to how well Scali's code and VileR's palettes work, if they can be mistaken for the 1K colour mode!


I think if you asked most game makers of that time, they would simply say, "You're crazy. You're planning to exploit the exact CGA chips and the exact mux and the exact nature of the monitor's response to the horizontal porch. The failure mode is that the whole screen goes wavy and maybe lets out a puff of smoke and dies. You're crazy!"

But also, the reality is that the demoscene didn't even figure any of this out until 2013-ish.


Some did use the 160x100x16 graphics mode (using some of the same techniques as in the OP), but one drawback is that they didn't always look 100% correct on clones, and some eventually broke when VGA cards came out.

http://nerdlypleasures.blogspot.com/2014/09/cga-16-color-rgb...
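
For anyone curious how that tweak works: it reprograms the CGA's 6845 CRTC from the 80x25 text mode so each character row is only two scanlines tall, fills the screen with the right-half-block glyph, and then uses the two attribute nibbles of each cell as a pair of 16-colour "pixels". Below is a rough, untested sketch in Turbo C-style DOS C; the CRTC values are the commonly documented ones for this tweak (as noted above, clones don't always reproduce it faithfully), and the set_160x100/plot helper names are just mine for illustration.

    /* Sketch of the classic 160x100x16 CGA text-mode tweak (Turbo C style). */
    #include <dos.h>

    #define CRTC_INDEX 0x3D4
    #define CRTC_DATA  0x3D5

    static unsigned char far *vram;

    static void crtc(unsigned char reg, unsigned char val)
    {
        outportb(CRTC_INDEX, reg);
        outportb(CRTC_DATA, val);
    }

    void set_160x100(void)
    {
        union REGS r;
        int i;

        r.x.ax = 0x0003;           /* BIOS: 80x25 colour text mode */
        int86(0x10, &r, &r);

        /* Commonly documented CRTC tweak: 2-scanline rows, 100 rows shown */
        crtc(0x04, 0x7F);          /* vertical total */
        crtc(0x06, 0x64);          /* vertical displayed: 100 rows */
        crtc(0x07, 0x70);          /* vertical sync position */
        crtc(0x09, 0x01);          /* 2 scanlines per character row */

        outportb(0x3D8, 0x09);     /* mode control: 80-col text, video on, blink off */

        vram = (unsigned char far *) MK_FP(0xB800, 0);

        /* Fill with the right-half-block glyph; each attribute byte then holds
           two independent 4-bit pixels (background = left, foreground = right). */
        for (i = 0; i < 80 * 100; i++) {
            vram[i * 2]     = 0xDE;  /* right half block */
            vram[i * 2 + 1] = 0x00;  /* both halves black */
        }
    }

    /* Plot one of the 160x100 "pixels" in colour 0..15. */
    void plot(int x, int y, unsigned char colour)
    {
        unsigned char far *attr = vram + (y * 80 + x / 2) * 2 + 1;
        if (x & 1)
            *attr = (*attr & 0xF0) | colour;         /* right half = foreground nibble */
        else
            *attr = (*attr & 0x0F) | (colour << 4);  /* left half = background nibble */
    }
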


My Columbia MPC clone didn't have the right character shape in the text-mode character ROM, but I used the method for preview images of ray tracing code I experimented with in the '80s.


Interesting, but it would be nice to see a picture of the resulting palette.



There's now a (less technical but prettier and easier to understand) writeup at http://8088mph.blogspot.co.uk/2015/04/cga-in-1024-colors-new... .


The writer of this piece is a genius. Amazing hackery.


Reminded me of the Doom/Keen days


two pictures of waveforms would be much better than text here ...


I wanted to make some pictures to go with the article, but I didn't have time (I wanted to get the article out quickly as there was so much curiosity and speculation about the methods we used). I just tried adding a quick hack to cga2ntsc to output the waveforms and viewed them in an audio package, but the result wasn't terribly elucidating. Perhaps I will redo the article with more pictures at a later date.



