Something that makes me sad is how DSLR manufacturers insisted on making "digital film" instead of taking the next step and creating a truly revolutionary computational imaging tool.
For example, you could set up the camera in a studio to rapidly cycle through a set of flash guns, and then import the stack into Lightroom (or whatever) for this kind of additive light editing. This could then be programmed back into the flashes to create the final lighting composition.
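A rough sketch of what that additive editing step could look like once the stack is on disk (the file names, weights, and use of numpy/Pillow here are just illustrative assumptions):

    import numpy as np
    from PIL import Image

    # One exposure per flash, each shot with only that flash firing.
    # File names are hypothetical placeholders.
    exposures = [np.asarray(Image.open(f"flash_{i}.png"), dtype=np.float32)
                 for i in range(4)]

    # Additive light editing: the final image is just a weighted sum of the
    # per-flash exposures. Weights can be tweaked interactively, and even
    # made negative to subtract a light (the "negative shadow" trick).
    weights = [1.0, 0.6, 0.3, -0.2]

    composite = sum(w * img for w, img in zip(weights, exposures))
    composite = np.clip(composite, 0, 255).astype(np.uint8)
    Image.fromarray(composite).save("relit.png")

In principle the same weights could then be mapped back to power settings on each flash for the final physical shot.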
This kind of process is regularly used in the film industry, where the same scene will be recorded with different lights turned on. Even camera motion is possible with robotic controllers. As a random example, I believe the new Blade Runner movie had the city flyover shots made this way.
A long time back... I thought about trying to capture motion in colors. It's not particularly novel, and my first attempt with color print film (using a triple exposure and red, green, and blue filters) really confused the lab, because the rest of the roll looked right but this one print looked trippy.
My "ok, I could do this with black and white" idea had me thinking about Agfa Scala slide film (a black and white film), shooting through red, green, and blue filter packs, and printing to color slide paper (Cibachrome). I never found a lab that would entertain the idea (and the paper was $$), so I was never able to get that print done.
The synthetic lighting method was actually used in some of the Half-Life 2 episodes as a kind of "normal maps ++" technique (basically normal maps with built-in self-shadowing). I wonder if Valve still uses it, as they mentioned that actually creating the source texture was fairly expensive and involved final-gather-style rendering of a 3D scene with three colored lights :) (as opposed to just measuring normal vectors per pixel).
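For anyone curious, the runtime side of that idea is roughly a weighted blend of three pre-baked lightmaps, one per tangent-space basis direction. A rough sketch of that blend (the basis vectors and the normalization here are my own illustrative assumptions, not Valve's exact shader):

    import numpy as np

    # Three tangent-space basis directions, roughly evenly spread around
    # the surface normal (illustrative values).
    basis = np.array([
        [ np.sqrt(2/3),           0.0, 1/np.sqrt(3)],
        [-1/np.sqrt(6),  1/np.sqrt(2), 1/np.sqrt(3)],
        [-1/np.sqrt(6), -1/np.sqrt(2), 1/np.sqrt(3)],
    ])

    def shade(normal, baked):
        """Blend three pre-baked lightmap samples using the per-pixel normal.

        normal: unit normal in tangent space, shape (3,)
        baked:  RGB light baked for each basis direction, shape (3, 3)
        """
        w = np.clip(basis @ normal, 0.0, None)  # how much each basis faces the normal
        w /= w.sum() + 1e-8                     # normalize the weights
        return w @ baked                        # weighted sum of the baked colors

    # Example: a normal tilted toward the first basis direction picks up
    # more of the light that was baked for that direction.
    n = np.array([0.3, 0.0, 0.95])
    n /= np.linalg.norm(n)
    print(shade(n, np.array([[1.0, 0.9, 0.8],     # baked color for basis 0
                             [0.2, 0.2, 0.3],     # basis 1
                             [0.1, 0.1, 0.2]])))  # basis 2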
I stumbled across this site via [1], and went on to read [2]. I was about to submit one of those to HN, when I realized the entire site was full of gems.
Yeah, blast from the past. Early days of the web. The image interpolation/extrapolation blew me away, particularly extrapolating away from a blurred image as a way to sharpen.
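For reference, the sharpening trick is just linear interpolation pushed past its usual range: blend between a blurred copy and the original, with a blend factor greater than 1 so the result moves away from the blur. A minimal sketch (sigma, alpha, and the file name are arbitrary):

    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    img = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float32)
    blurred = gaussian_filter(img, sigma=2.0)

    # out = lerp(blurred, original, alpha); alpha > 1 extrapolates away from
    # the blurred image, which sharpens (alpha = 1 returns the original,
    # alpha between 0 and 1 softens).
    alpha = 1.8
    sharpened = blurred + alpha * (img - blurred)

    Image.fromarray(np.clip(sharpened, 0, 255).astype(np.uint8)).save("sharp.png")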
Honestly, when I read "Computer Graphics Hacks", I was expecting something different (like, for example, various obscure graphics programming techniques for games/demos/whatever). But this is cool too...
Side note on http://graficaobscura.com/ccode/index.html - interesting to see that SGI was still using "K&R style" parameters for C functions in 1994, when ANSI C had already been around for 5 years. Must have had something to do with the 80-character line length...
For those who are not quite getting what this site was about: it was Paul's personal, creative outlet for various ideas that were not quite serious enough for a SIGGRAPH paper. As far as I can see, it hasn't changed from the 90s when it was first published. Quite the time capsule to a day before Google, when the web was new, 3D graphics lived mainly on workstations, and digital cameras were rare items. Not to mention, back when very few used the web as a personal, creative outlet.
The idea of modifying a digital image so that there is a negative shadow was rather mind-blowing to me at the time.
There's also the photographic stitching software, c. 1994, which was cutting edge for its day.
Another related piece of it - http://www.employees.org/~drich/SGI/SiliconSurf/grafica/inde... which has a "future" (which I don't think ever came to be).
Another piece can be found at http://sgifiles.irixnet.org/sgi/graphics/grafica/