Disney’s New Production Renderer ‘Hyperion’ (fxguide.com)
159 points by mineral_or_veg on Oct 14, 2014 | 26 comments



Whenever I read about new renderers I think I'm completely intellectually inept, even with what I think is a reasonably well-rounded computer science education. It is like reading about string theory in a way: sure, I can grasp the concepts, but the details and minutiae are fascinating and admittedly a little beyond my immediate comprehension. I remember when POV-Ray was amazing, running all night on a 486 to produce just one image with some reflections.

How time and Moore's law flies...


I suggest taking a look at Physically Based Rendering [1]. It goes through an advanced rendering system in a literate programming style; that is, the physical explanations are interspersed with the code of an open source renderer [2]. I'm not sure how it compares with this Disney renderer, but it should go a long way towards demystifying how these things work.

1. http://www.amazon.com/Physically-Based-Rendering-Second-Edit...

2. http://www.pbrt.org/
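
To make the flavor concrete, here's a minimal sketch (mine, not the book's) of the kind of Monte Carlo estimator PBRT builds up to: estimating outgoing radiance at a Lambertian surface point under a constant-radiance sky, where the estimate should converge to albedo * skyRadiance.

  // Minimal sketch of physically based rendering's core idea: Monte Carlo
  // estimation of the rendering equation. With uniform hemisphere sampling
  // (pdf = 1/2pi), a Lambertian surface under a constant sky converges to
  // albedo * skyRadiance.
  #include <cstdio>
  #include <random>

  int main() {
      const double PI = 3.14159265358979323846;
      const double albedo = 0.5, skyRadiance = 1.0;
      const double brdf = albedo / PI;          // Lambertian BRDF
      const double pdf  = 1.0 / (2.0 * PI);     // uniform hemisphere pdf

      std::mt19937 rng(42);
      std::uniform_real_distribution<double> u(0.0, 1.0);

      double sum = 0.0;
      const int N = 100000;
      for (int i = 0; i < N; ++i) {
          double cosTheta = u(rng);             // z of a uniform hemisphere direction
          sum += brdf * skyRadiance * cosTheta / pdf;   // f * Li * cos / pdf
      }
      std::printf("estimated Lo = %f (expected %f)\n", sum / N, albedo * skyRadiance);
      return 0;
  }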


PBRT's an excellent book if you want to play with / build / understand raytracing renderers.

Some of the knowledge is not-quite state of the art any more, but everything's still relevant. And there's a third edition coming soon.


I wouldn't worry about it too much; Computer Science is past the point where you can have intricate knowledge of all of it. Knowing that most things exist is good enough imo.

Also, it goes both ways: I doubt the people writing these renderers have intricate knowledge of modern networking or web stacks etc.


Brent and the team at Disney Feature Animation have always pushed complexity like crazy. Most people keep telling them to just think procedurally or reuse elements, but Brent prefers to let the artists do whatever they want and have the system handle it. Since cores are cheaper than humans this is a good long-term tradeoff, but YMMV, since very few studios would bother doing so much explicit detail (as opposed to procedurally tweaked variations).


An interesting metric: the San Fransokyo city shots in Big Hero 6 contain more geometry on screen than all of the shots in Frozen combined.

In the past we would have used a matte painting for distant background stuff. It's pretty sweet.


Which also means that in 10 years they can go back and reshoot old movies. "Back on location" is just a GUID in some PB-scale object database.


>> Since cores are cheaper than humans

I don't want to be a sourpuss, but there is a limit beyond which MPI stops scaling: look at the program runtime and 60% of the time is spent in MPI_Wait. Past 80k nodes you can't go much faster no matter how much money you've got. I believe this is precisely the problem they address by intelligently grouping similar rays.
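
For illustration, grouping similar rays can be as simple as binning by dominant direction so each batch traverses a coherent part of the scene. A toy sketch (not Disney's actual scheme):

  // Toy sketch of ray batching: bin rays into six buckets by dominant
  // direction so each bucket can be traced together against a coherent
  // region of the scene, instead of scattering memory accesses.
  #include <array>
  #include <cmath>
  #include <vector>

  struct Ray { float ox, oy, oz, dx, dy, dz; };

  // Buckets 0..5 correspond to +x, -x, +y, -y, +z, -z.
  int directionBucket(const Ray& r) {
      float ax = std::abs(r.dx), ay = std::abs(r.dy), az = std::abs(r.dz);
      if (ax >= ay && ax >= az) return r.dx >= 0 ? 0 : 1;
      if (ay >= az)             return r.dy >= 0 ? 2 : 3;
      return r.dz >= 0 ? 4 : 5;
  }

  std::array<std::vector<Ray>, 6> batchRays(const std::vector<Ray>& rays) {
      std::array<std::vector<Ray>, 6> batches;
      for (const Ray& r : rays)
          batches[directionBucket(r)].push_back(r);
      return batches;  // trace each batch as one coherent unit
  }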


You forget that rendering a movie is almost infinitely scalable. Even using only single-threaded code, you could scale to about 130,000 nodes, since there are roughly that many frames in a 90-minute movie at 24 fps (90 × 60 × 24 = 129,600). This is not totally true of course; for example, you might want to do lighting or particle-system calculations that persist across frames, but the general rendering bit is extremely scalable.
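
A minimal sketch of that frame-level parallelism (node count and rank are made up):

  // 90 min * 60 s * 24 fps = 129,600 independent frames; each node can
  // take every Nth frame with zero communication between nodes.
  #include <cstdio>

  int main() {
      const int totalFrames = 90 * 60 * 24;  // 129,600
      const int numNodes = 1000;             // hypothetical farm size
      const int nodeId = 7;                  // hypothetical node rank
      int count = 0;
      for (int f = nodeId; f < totalFrames; f += numNodes) {
          // renderFrame(f);  // each frame is independent of the others
          ++count;
      }
      std::printf("%d total frames, %d rendered by this node\n", totalFrames, count);
      return 0;
  }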


Bandwidth allowing, of course - that's often the bottleneck when you're rendering production scenes with 300+ GB of textures :)


Echoing what z-e-r-o said, nobody uses MPI to render individual frames. You do one frame per box and lots and lots of frames (all attacking the beefiest network file system you can find, but luckily it's all reads and thus easier to cache).


You wouldn't need MPI, just have multiple concurrent writes to a tiled EXR.

We do it occasionally for stupidly big (>64K) frames.

Doesn't work for deep though :)
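
For the curious, a single-writer sketch of the non-deep tiled case using OpenEXR's C++ API (file and channel names are illustrative; coordinating multiple concurrent writers on one file is up to the pipeline, not the library):

  // Sketch of tile-by-tile EXR output. Tiles are independent units in the
  // file, so they can be written in any order -- which is what makes
  // schemes with multiple cooperating writers feasible at all.
  #include <ImfChannelList.h>
  #include <ImfFrameBuffer.h>
  #include <ImfHeader.h>
  #include <ImfTiledOutputFile.h>
  #include <vector>

  int main() {
      using namespace Imf;
      const int width = 4096, height = 4096;  // scaled down from >64K for the sketch

      Header header(width, height);
      header.setTileDescription(TileDescription(64, 64, ONE_LEVEL));
      header.channels().insert("R", Channel(FLOAT));

      TiledOutputFile out("huge.exr", header);

      std::vector<float> pixels((size_t)width * height, 0.5f);
      FrameBuffer fb;
      fb.insert("R", Slice(FLOAT, (char*)pixels.data(),
                           sizeof(float), sizeof(float) * width));
      out.setFrameBuffer(fb);

      // Write every tile; a writeTile(dx, dy) variant exists for writing
      // individual tiles.
      out.writeTiles(0, out.numXTiles() - 1, 0, out.numYTiles() - 1);
      return 0;
  }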


Offhand, I can't think of a reason why it wouldn't work for deep too, other than perhaps disk space and I/O. Why do you say it wouldn't?


Hi Andrew ;).

I've never dealt with the new OpenEXR 2.0 deep stuff, but looking at the spec (http://www.openexr.com/openexrfilelayout.pdf page 13) the Deep Tiled stuff seems to totally be ready for this. I would have guessed that there might have been some global compression option that wouldn't be workable, but the design clearly does this all per tile instead.


It would in theory work - our solution doesn't at the moment.



Clicked thinking it was a Disney adaptation of the novel of the same name. In retrospect, it would be an unlikely adaptation for Disney to undertake ;)


Walt Disney Studios was once located at 2719 Hyperion Avenue in Los Angeles. It's surely named after that. The Walt Disney Company also used the name for other things, like the publishing company Hyperion Books, and the Hyperion Theater at California Adventure.

http://blog.wdwinfo.com/2013/07/28/the-walt-disney-hyperion-...


It's worth remembering that in addition to all their famous kids' films, Disney has also produced all of these films: http://en.wikipedia.org/wiki/List_of_Touchstone_Pictures_fil....

Hyperion wouldn't be that much out of character for them, even if personally "Hyperion, from the producers behind Pearl Harbor and The Avengers" isn't a tag line I'd particularly look forward to.


Hyperion happens to be the name of a street in Los Angeles where one of the original Disney animation studios resided (in Silver Lake, at the intersection with Griffith Park Blvd, where the Gelson's Market now stands).


Quote: Disney pioneered the use of special camera rigs that would render the foreground with one ‘stereo camera’ or solution, while rendering the background with another.

I saw that, and my brain hated it. I even tweeted about what I thought it was at the time. This is an absolutely terrible effect, and I don't understand why they do it.


The article explains why they do it.

"Since a hero character close to camera felt attractive with a stereo convergence, that would mean the background in the same shot would be too extreme and unpleasing on the eye. Instead of dialing down the stereo effect overall, Disney pioneered the use of special camera rigs that would render the foreground with one ‘stereo camera’ or solution, while rendering the background with another. Thus the foreground character would appear round and with more fullness, but their background would appear more relaxed, less stereo pronounced."


Yep. It's also worth remembering that different people respond physiologically to 3D movies very differently [1]. Disney thought the effect was useful enough to patent and use for more films, so it's unlikely everyone responds that negatively.

1. http://www.plosone.org/article/info%253Adoi%252F10.1371%252F...


You tweeted?! Good God, man!


I'm not quite sure what it's referring to. Is there an example somewhere?


Innovation: it takes someone to try something and others to build off of it to get anywhere. They had the guts to put their foot forward.



