Whenever I read about new renderers I think I'm completely intellectually inept, even with what I think is a reasonably well-rounded computer science education. It is like reading about string theory in a way: sure, I can grasp the concepts, but the details and minutiae are fascinating and admittedly a little beyond my immediate comprehension. I remember when POV-Ray was amazing; it would run all night on a 486 to produce just one image with some reflection.
I suggest taking a look at Physically Based Rendering [1]. It goes through an advanced rendering system in a literate coding style. That is, the physical explanations are interspersed with the code of an open source renderer [2]. I'm not sure how it compares with this Disney renderer, but it should go a long way towards demystifying how these things work.
I wouldn't worry about it too much; computer science is past the point where you can have intricate knowledge of all of it. Knowledge of the existence of most things is good enough, imo.
Also, it goes both ways: I doubt the people writing these renderers have intricate knowledge of modern networking or web stacks, etc.
Brent and the team at Disney Feature Animation have always pushed complexity like crazy. Most people keep telling them to just think procedurally or reuse elements, but Brent prefers to let the artists do whatever they want and the system should handle it. Since cores are cheaper than humans this is a good long-term tradeoff, but YMMV since very few studios would bother doing so much explicit detail (as opposed to procedurally tweaked variations).
I don't want to be a sourpuss, but there is a limit beyond which MPI stops scaling: when you look at the program runtime, 60% of the time is spent in MPI_Wait. Past 80k nodes you can't go much faster no matter how much money you've got. I believe this is precisely the problem they address by intelligently grouping similar rays.
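To see why the wait time caps things, treat the blocked fraction as effectively serial and apply Amdahl's law. A minimal sketch (the 0.6 fraction here is an illustrative assumption, not a measured MPI profile):

```python
# Illustrative Amdahl's-law sketch; the 0.6 "serial" fraction is a
# simplifying assumption standing in for time blocked in MPI_Wait.
def speedup(serial_fraction, nodes):
    """Maximum speedup when serial_fraction of the work doesn't parallelize."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

for n in (1_000, 10_000, 80_000):
    print(n, speedup(0.6, n))

# With 60% of runtime effectively serial, speedup can never exceed
# 1 / 0.6 (about 1.67x), no matter how many more nodes you add.
```

This is why techniques that shrink the waiting itself (like batching coherent rays so nodes exchange fewer, larger messages) matter more at that scale than adding hardware.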
You forget that rendering a movie is almost infinitely scalable. Even using only single-threaded code, you could scale to 130,000 nodes, since that's roughly how many frames there are in a 90-minute movie at 24 fps. This isn't entirely true, of course: for example, you might want to do lighting or particle-system calculations that persist across frames. But the general rendering bit is extremely scalable.
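The 130,000 figure is simple arithmetic:

```python
# Frames in a 90-minute feature at 24 frames per second.
minutes = 90
fps = 24
frames = minutes * 60 * fps
print(frames)  # 129600, i.e. roughly 130,000 frames to farm out one per node
```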
Echoing what z-e-r-o said, nobody uses MPI to render individual frames. You do one frame per box and lots and lots of frames (all attacking the beefiest network file system you can find, but luckily it's all reads and thus easier to cache).
I've never dealt with the new OpenEXR 2.0 deep stuff, but looking at the spec (http://www.openexr.com/openexrfilelayout.pdf page 13), the Deep Tiled stuff seems totally ready for this. I would have guessed that there might be some global compression option that wouldn't be workable, but the design clearly does all of this per tile instead.
Clicked thinking it was a Disney adaptation of the novel of the same name. In retrospect, it would be an unlikely adaptation for Disney to undertake ;)
Walt Disney Studios was once located at 2719 Hyperion Avenue in Los Angeles. It's surely named after that. The Walt Disney Company also used the name for other things, like the publishing company Hyperion Books, and the Hyperion Theater at California Adventure.
Hyperion wouldn't be that much out of character for them, even if personally "Hyperion, from the producers behind Pearl Harbor and The Avengers" isn't a tag line I'd particularly look forward to.
Hyperion happens to be the name of a street in Los Angeles where one of the original Disney animation studios resided (in Silver Lake, at the intersection with Griffith Park Blvd, where the Gelson's Market now stands).
Quote: Disney pioneered the use of special camera rigs that would render the foreground with one ‘stereo camera’ or solution, while rendering the background with another.
I saw that, and my brain hated it. I even tweeted about what I thought it was at the time. This is an absolutely terrible effect, and I don't understand why they do it.
"Since a hero character close to camera felt attractive with a stereo convergence, that would mean the background in the same shot would be too extreme and unpleasing on the eye. Instead of dialing down the stereo effect overall, Disney pioneered the use of special camera rigs that would render the foreground with one ‘stereo camera’ or solution, while rendering the background with another. Thus the foreground character would appear round and with more fullness, but their background would appear more relaxed, less stereo pronounced."
Yep. It's also worth remembering that different people respond physiologically to 3D movies very differently [1]. Disney thought the effect was useful enough to patent and use for more films, so it's unlikely everyone responds that negatively.
How time and Moore's law flies...