Intro to RenderMan for Blender (pixar.com)
138 points by mariuz on July 17, 2015 | 71 comments



I know everyone is excited about this, but I would like to see the benefits of using it over Cycles.

So far I must say Cycles is pretty great.


Cycles is hard to use for animation because it is very noisy: it's fast to get an initial image (great for tweaking) but slow to get a noise-free one. Also, this release includes Pixar's "denoising as a post process", which is really amazing (and might even be applicable to Cycles renders).

Cycles' shading graph is quite limited (though well chosen) compared to the flexibility of the RenderMan Shading Language.

Renderman's subpixel shading is just amazing, so getting crisp, antialiased renders even with fine detail is automatic, whereas cycles might require tweaking the settings or really cranking up the sample rate.

Cycles struggles and has memory limitations on large (even not so large) scenes.

Renderman produces very good motion blur without much performance hit, which is critical for integrating with real-world footage.

Renderman has a lot of features (similar to plugins) to integrate with external tech (e.g. you could plug in a custom hair generator step), which is critical for production.

Renderman has a lot of "scalability" features like caching indirect lighting into brickmaps, or loading large sets of tiled textures, that cycles (or even blender) just don't support.

Also, renderman has been used in production for decades, so a lot of places have built up large libraries of tricks, shaders or tools that work with renderman that they can use.

Renderman is well integrated with many render-farm technologies, so if you're working on a large animation there are some great options. Of course there are ways to do this with Cycles, too, but they are generally less mature and lack some production features.

With all of that said, cycles is still an amazingly powerful renderer, and completely free (not just free to use). You don't have to generate and install a license file to use Cycles, and if you want to tinker with the source code, you can do so! That's not going to happen with renderman anytime soon...


For a fair comparison we should compare PRMan's RIS with Cycles though, not REYES. RIS is what Pixar are focusing on and rendering their movies with now.

Yes, REYES is great at subpixel shading, rendering fast motion blur, using brickmaps with indirect light, and using very little memory for detailed displacement. But with RIS all those things are gone. PRMan's implementation might still be more efficient, I don't know, but with the switch to path tracing they are definitely giving up various advantages that REYES had.


Exactly, with the move to RIS, they're in pure path-tracing now, and they're still behind Arnold generally from a performance perspective.

Displacement and subd performance is still very impressive (compared to Arnold), but motion blur (especially deformation for curves and triangles) has a noticeable overhead now.


Cycles is pretty great. RenderMan's value is entirely in its extremely fine level of control. It's typically used in production settings by RenderMan power users who can write custom shaders, build complex integrations with non-turnkey systems like custom simulators, and develop other production workflow tools.

It has been hammered on in many ways by many people for several decades and is focused on stable, efficient production. That said, it can take quite a lot of work to get great images out of it; the configurability and control come at a cost.

I think of light transport simulators as a completely different thing from production renderers. A simulator is like a camera that captures reality, whereas production renderers are not bound by reality, so they are more like fine art tools (brushes, rulers, etc.).


I don't know the current situation, but last time I checked Cycles was slower relative to the competition, and Renderman is unarguably far more battle-tested.


My experience with Cycles is that it generally looks almost great, except for a bunch of terrible looking fireflies (lone, overly bright pixels) for which there is no straightforward cure.
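
The usual mitigation is clamping: cap each sample's brightness before it is averaged into the pixel, trading a little bias (lost energy) for far fewer fireflies. A toy Python sketch of the idea; the function name and threshold are made up for illustration:

```python
def clamp_fireflies(samples, max_value=10.0):
    """Average per-sample radiance after clamping outliers,
    trading a little energy loss (bias) for fewer fireflies."""
    clamped = [min(s, max_value) for s in samples]
    return sum(clamped) / len(clamped)

# One firefly sample dominates the naive average:
samples = [0.5, 0.6, 0.4, 500.0]
naive = sum(samples) / len(samples)    # 125.375
clamped = clamp_fireflies(samples)     # 2.875
```

Cycles exposes a similar control through its sample clamping settings, though the real implementation is more involved than this sketch.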

(I am very grateful for Blender and Cycles in general though).


Rendering is a job in itself. There are people who do nothing but develop looks and write shaders all day, every day, and others who create and manage distributed rendering pipelines.

Cycles is great, but just like Mental Ray in other applications, it's part of a larger package. Sure, you can run them headless, but why, when there is something purpose-built for the job that is so well documented and has great support solely for that purpose?

Rendering features aside, it's a great business and architecture choice.


You realize you didn't name a single benefit over Cycles?


It isn't even up for debate among anyone with in depth rendering knowledge. Cycles is free. That's the extent of its advantage. Renderman is faster, far more flexible, has true layered shaders, handles volumetrics, lots of geometry types, displacement, ptex textures, has better integrators, AOVs, a shading language, a detailed API, light path expressions... There are probably twice as many things I can't think of at the moment.


Two benefits were named:

1. Going with Renderman allows you to tap a large talent pool that is already familiar with it, one much larger than for Cycles.

2. Renderman works with a number of other 3D tools besides Blender. Cycles could work with other software, but I suspect this is a relatively niche group doing so. This means that other pieces of software can be used in conjunction with Blender.


Another way of putting it is that you can get a job working with Renderman.

What's really huge here is being able to interface open-source software with the commercial rendering pipeline that is used to make most of the movies we watch.

I had a big fight with my 3D modeling instructor, who made us use 3ds Max because he wrote the Adobe Press book on it, though he not very subtly suggested that we would have to pirate it in order to do our homework. I pushed really hard to be allowed to do work in Blender. It wasn't a win at the time, but I think this is clearly the direction things will go: the days of paying thousands of dollars for content-creation software that is primarily used by a handful of companies are numbered. Companies like Pixar and ILM should be contributing to projects like Blender instead of developing proprietary internal tools and marrying themselves to packages like Maya that aren't terribly better than Blender, if they even are better at all anymore.

Blender has some quirks, especially in modeling from what I remember, but it's solid, and more importantly, there is room for lots of tools, just not lots of tools that break the bank. For everyone it's not a profession for, this is a hobby, and even the professionals are often contractors who struggle to maintain a toolkit.

Anyway, <3 Blender. :)


> Companies like Pixar and ILM should be contributing to projects like Blender instead of developing proprietary internal tools and marrying themselves to packages like Maya that aren't terribly better than Blender, if they even are better at all anymore.

For anyone who might think the same thing:

1. This is not going to happen anytime soon.

2. The tools they use, proprietary or not, are better than Blender. I know everyone loves to rally around open-source software, and Blender has benefited lots of people for a variety of reasons, but high-end visual effects is not its place right now.

3. The large 3D programs are actually used by an enormous number of people these days, not just the big VFX companies.


This is already happening:

- Sony Pictures Imageworks open sourced Open Shading Language, Field3D, Alembic and OpenColorIO.

- Pixar open sourced OpenSubdiv

- Disney open sourced Ptex, BRDF Explorer, SeExpr and Partio

- Dreamworks open sourced OpenVDB

I wouldn't be too surprised if we saw an open-source renderer within the next 10 years, especially if SPI ends up replacing their in-house version of the Arnold renderer, as open-source software has been extremely successful for SPI.


The open sourcing of C and C++ libraries in CG is awesome. But that is skewing the original point. Studios aren't going to replace their interfaces with Blender, and they aren't going to open source their own creation software. Most studios use a combination of Maya, Katana, 3ds Max, Houdini and V-Ray.

What you listed are mostly formats that are beneficial for everyone to share (and don't forget OpenEXR). OpenSubdiv lets others match Renderman's subdivision. Also, Imageworks' open source push can largely be traced back to Rob Bredow, I believe.


Autodesk recently made their software (including 3ds Max and Maya) free for educational/non-commercial use for 3 years. No watermarks or other restrictions.

http://autodesk.blogs.com/between_the_lines/2014/10/students...


I used 3ds Max to do part of my thesis project. I just installed the 30-day trial on different machines in the computer lab by turns. :)


I'm not sure why you think circumventing the license poorly is any better than using cracked software. I won't call it theft (I don't think illegal copying should be considered theft) -- but I also don't see any meaningful distinction between how you break the license and use software you haven't paid for.


It's faster, there is a massive wealth of knowledge out there, plus you can do wonderfully fancy things like interactive rendering.


That's almost a silly question.

If you Google it for a couple minutes, you'll see why a lot of people would prefer Renderman.


Can someone please advise me on how to get started with this stuff? I have been developing enterprise applications, which is boring to all kids, including my nieces and nephews. I wish I could do something with Blender etc. to impress those kids. I have found the documentation to be too intimidating for an absolute newbie. I am looking for a dummies' guide to rendering. I am not looking to change careers; I just want to pursue this to get an understanding and hook those kids into programming.


If you're interested in writing a renderer yourself (which is obviously way cooler than using some tools), I'd recommend first picking up either "Fundamentals of Computer Graphics" by Shirley and Marschner or "Computer Graphics: Principles and Practice" by Foley and van Dam. Once you understand the basics of computer graphics, and especially of shading and ray tracing, I'd recommend picking up "Physically Based Rendering: From Theory to Implementation" by Pharr and Humphreys, an excellent book that explains the state of the art of photorealistic rendering. Once you understand how physically based rendering, path tracing, BSDFs and all that good stuff work, you're ready to write your own renderer (or to implement a new SIGGRAPH paper in the framework provided by the book). It's a lot of fun writing your own renderer, and you'll produce some beautiful pictures.
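
To give a sense of how approachable that path is, here is a minimal Python sketch of the very first building blocks those books cover, ray/sphere intersection and Lambertian (diffuse) shading; the names here are my own, not from any of the books:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Solve the quadratic |o + t*d - c|^2 = r^2 and return the
    nearest positive hit distance t, or None for a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def lambert(normal, light_dir):
    """Diffuse shading: brightness falls off with cos(theta)."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# A ray looking down -z hits a unit sphere centered at z = -5:
t = hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)  # t == 4.0
```

Everything else in a path tracer (cameras, bounces, sampling) is layered on top of intersect-and-shade loops like this.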


As someone who has dabbled, I have to recommend the "Business Card sized Raytracer"[0]. Learning how it worked, modifying it, trying to make it faster or sharper, display different shapes, etc. may serve to whet one's appetite on the subject.

First thing I recommend is trying to get it to show your name[1] instead of the original author's initials.

[0] - http://fabiensanglard.net/rayTracing_back_of_business_card/

[1] - http://lelandbatey.com/anim07.gif


You can follow the official Blender getting-started guide: https://www.blender.org/support/tutorials/

The CG Cookie tutorial especially: https://cgcookie.com/course/blender-basics/


I second the CG Cookie tutorials. Knowing nothing about Blender and little about 3D animation, I started watching tutorials on a Monday afternoon and finished rendering early Thursday morning, though I did little else besides work, attend class and sleep those days.

It should give you an idea of the sort of animation you can expect to achieve in your first week. The only thing I didn't make myself was the wood texture.

https://youtu.be/YrvfTPx4JlM


I started working with RenderMan in 1988/89. If you're a programmer, then I think the RIB interface is even easier than using GUI tools like Blender. It can be very fun to play with directly. Feed the following text into `prman`.

  Display "hacker news" "framebuffer" "rgba"
  Projection "perspective" "fov" [45]
  Translate 0 0 10
  WorldBegin
    Translate 0 -1 0
    Rotate 120 -1 0 0
    Geometry "teapot"
  WorldEnd

Then check out the Steve Upstill book "The RenderMan Companion", "Advanced RenderMan" by Tony Apodaca and Larry Gritz, and the spec[1] for reference.

You can learn a lot of creative skills, geometry, animation, math, and technology from this simple interface. Note how rotations are in degrees and commands are easily human-readable. The interface was designed to be usable by people as easily as by software ("WorldBegin", cute and intuitive). Key things to remember when playing around with the RIB interface:

It's a 'right handed' coordinate system, so use your right hand to model the axes (perpendicular index finger, thumb, and middle finger form the axes). Put your thumb along the positive axis to visualize things like rotation. Above I rotate 120 degrees about the negative x axis (the [-1 0 0] vector), so I can visualize the direction of rotation by pointing my right thumb along the negative x axis, and the direction my fingers curl is the direction of rotation.
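
The right-hand convention is easy to check numerically: RIB's `Rotate angle x y z` is an axis-angle rotation, which the Rodrigues formula implements directly. A small Python sketch (the function name is mine, not part of any RenderMan API):

```python
import math

def rotate(point, angle_deg, axis):
    """Rotate a point about an axis by angle_deg, right-handed,
    via the Rodrigues rotation formula."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    n = math.sqrt(sum(c * c for c in axis))
    k = [c / n for c in axis]                     # unit axis
    dot = sum(k[i] * point[i] for i in range(3))
    cross = [k[1] * point[2] - k[2] * point[1],   # k x point
             k[2] * point[0] - k[0] * point[2],
             k[0] * point[1] - k[1] * point[0]]
    return [point[i] * cos_a + cross[i] * sin_a + k[i] * dot * (1 - cos_a)
            for i in range(3)]

# Right-hand rule: rotating +x by 90 degrees about +z lands on +y.
p = rotate([1, 0, 0], 90, [0, 0, 1])
```

Flipping the sign of the axis flips the direction of rotation, which is exactly the trick used with `Rotate 120 -1 0 0` above.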

The transformation hierarchy makes it easy to control and reason about transformations without resorting to matrix math. Use that to create easy-to-manipulate groups. The relevant commands are:

  # parent object
  TransformBegin
    Translate x y z
    Rotate angle x_axis y_axis z_axis
    Scale x y z
    Skew angle dx1 dy1 dz1 dx2 dy2 dz2
    # child object
    TransformBegin
      # Translate x y z...
      # ... invoke geometry
    TransformEnd
  TransformEnd
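
Under the hood, the Begin/End pairs behave like a stack: TransformBegin pushes a copy of the current transform and TransformEnd pops it, restoring the parent's state. A rough Python sketch of that bookkeeping; to keep it short it tracks only translation (a real renderer stacks full 4x4 matrices), and the class is purely illustrative:

```python
class TransformStack:
    """Mimics RIB's TransformBegin/TransformEnd nesting."""
    def __init__(self):
        self.stack = [[0.0, 0.0, 0.0]]

    def begin(self):                 # TransformBegin
        self.stack.append(list(self.stack[-1]))

    def end(self):                   # TransformEnd
        self.stack.pop()

    def translate(self, x, y, z):    # Translate x y z
        cur = self.stack[-1]
        cur[0] += x; cur[1] += y; cur[2] += z

    def current(self):
        return tuple(self.stack[-1])

ts = TransformStack()
ts.begin()                  # parent
ts.translate(0, -1, 0)
ts.begin()                  # child inherits the parent's transform
ts.translate(2, 0, 0)
child = ts.current()        # (2.0, -1.0, 0.0)
ts.end()
ts.end()
restored = ts.current()     # back to (0.0, 0.0, 0.0)
```

This is why you can reason about groups locally: whatever a child does between its Begin/End pair cannot leak out into its siblings.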

1: http://renderman.pixar.com/products/rispec/rispec_pdf/RISpec...


There are a gazillion YouTube tutorials for Blender showing you how to make some very cool stuff. Some of the newer tutorials using Blender's Cycles renderer create photorealistic images without much effort.

Just make sure that you get recent tutorials - there is probably stuff from pre-2.5 Blender floating around that would be confusing to anyone using newer versions.


This series of blog posts from Mike Farnsworth (a rendering fanatic and Blender contributor) goes over how to create a "modern" ray tracer from scratch. Highly recommended:

http://renderspud.blogspot.com/2012/04/basic-ray-tracer-stag...


I used to use this site: http://www.blenderguru.com/


I recommend checking to see if there's a class available where you live. I used to teach a kids 3D modeling class using Art of Illusion (great beginner 3D app) up in Portland, and I know that there were adult classes for e.g. Maya available as well.

Since you're learning the fundamentals of modeling, lighting, rendering, and animation, it really doesn't matter what software you start with (really, it doesn't). In-person classes are amazing for learning about anti-patterns as you work on your projects. For example, just learning to navigate the various modeling views can be very tricky for beginners--many of my students would start arranging objects in a perspective view, and it was easy to point out the flaws in that approach early and get them back on a better path.


I have also been working my way through learning Blender. Two years back, when I was writing a 3D game, I learnt Blender in the context of importing models, setting textures and materials, and exporting to the Quake .md5 format.

Blender has a steep learning curve. Along the way I learnt some very basic modeling to build objects joining default shapes like Lego!!! That was Round 1 of learning and that gave me an insight into what I needed. I enjoyed it enough to want to learn further, with a 'different' full time day job.

Now, I am working my way through the cg-masters video tutorials (warning: these are $$, not free), but they are worth the money. The order in which I work through these is:

1. Master It, Vol. 1 & 2 (fundamentals)

2. Character Creation, Vol. 1, 2, 3

3. Environment Modeling & Texturing OR Environment & Animation

All of the above at: http://www.cgmasters.net/training-dvds/

For character animation, much recommended (again $$): Animation Fundamentals from https://cgcookie.com/course/blender-animation-fundamentals/. I have the DVD of this; now I think it is only available online.

Books I have referred to:

1. The first book I worked my way through: Blender Foundations (Roland Hess): http://www.amazon.com/Blender-Foundations-Essential-Guide-Le...

2. Browsed through: Beginning Blender and Tradigital Blender.

Now I would recommend: Blender Master Class from No Starch Press. I would recommend purchasing a paper copy of this book.

I have also been making notes of what I am learning through the cg-masters videos: https://bitbucket.org/dmsurti/learn-blender, snail like progress here, and no progress since Mar this year.

This was about books and videos. Don't forget the hardware. PLEASE buy a keyboard with a numpad; I have the Logitech Solar, which has one, as well as a great 3-button mouse, the Logitech Anywhere MX. I have worked without this setup, and in Blender it is a pain to work without these.

This has been the way it has worked out for me. I hope it helps you figure your way out. It is a hard grind, but totally worth it, especially to keep the right brain happy and not starving, as we mostly do. And just like you I am doing this only for fun and not to change careers. Just trying to do the only right brain activity well.


On the topic of hardware, I've been rocking the tenkeyless keyboards for quite a while now and they work better than you might think with Blender. With a simple pie menu for changing views bound to the q key the numpad is pretty extraneous. If you want a good set of pie menus that's premade and ready to roll, Wazou's are pretty good. https://github.com/pitiwazou/Scripts-Blender

I do agree that a 3-button mouse is super necessary, though a Wacom or equivalent tablet is a better bet in the long run (much better for sculpting, concepts and texturing) and less likely to wreck your fingers.


Come up with an idea to work towards. Like making a model of a favourite toy from when you were a kid. Then trawl documentation and tutorials towards doing that.

Then repeat the exercise with something more complicated.

Keep doing this relentlessly and you will get good surprisingly quickly.


RenderMan is now free for all non-commercial purposes, including evaluations, education, research, and personal projects. The non-commercial version of RenderMan is fully functional without watermark or limitation. For further details please refer to Pixar's Non-Commercial RenderMan FAQ. http://renderman.pixar.com/view/DP25849

So this is non-free.


You can also post renderings / videos you make to sites like YouTube and legally make money off of advertising.


Indeed, that is explicitly permitted. Weird how they consider it indirect. Reading through 11) I thought indirect meant other things.


It's indirect to their business model, and will just mean more talent ready to go for those buying it for movie/TV production.


Free as in beer, at least. I see no problem with that. Good on Pixar for doing some good. Just because they fall short of some sort of arbitrary ideal, it doesn't mean they're bad people or that this licensing method is a useless gesture.


Any links to a good comparison of RenderMan vs. Cycles? Performance would be nice to know.


Blender is winning.


It gives me no reasons to use it over the standard Cycles renderer.


Not even for all of the available shaders, books, tutorials and other resources made available for it over the years? The analogy I make is a programming language that is so-so, but has a hundred relevant, useful libraries vs. the technically better language with hardly any libraries. Cycles has the basic building blocks to create a library of pre-defined shaders, but Renderman has over 25 years of history, and tons of RSL examples.


Show me just one really cool thing that can be done with Renderman and can't be done with Cycles. Show off that 25 years of history.


I didn't say anything could or could not be done with Cycles. I said there were 'tons of RSL examples'. Do you understand the significance of a history, and body of work available to start with, and not having to start from scratch? I was using BMRT in the 90s, and reading books about procedurally-generated textures. People wrote so many shaders in RSL, most available for you to copy, modify and re-use. I am assuming there are not nearly as many OSL examples as RSL examples or samples of code. I may be wrong. For the record, I'll use both, and I am happy to see Renderman as an option. I am particularly interested in OSL as a corollary to RSL. Sony used it successfully to counter Pixar's mention here. I am also eager to test my RSL chops again too!


Most of those shaders written back then are pretty much worthless now that Renderman has introduced RIS mode (instead of REYES mode), which arrived last year. Pixar recommends writing shaders in C++ now instead of in RSL, as C++ shaders are just way more performant right now. OSL is pretty mature; I know that both Sony Pictures Imageworks and Double Negative use it in production. Now that Renderman also supports OSL, I'm pretty sure we're going to see OSL examples posted online. OSL shaders are awesome compared to RSL shaders, as OSL gives all control to the renderer, which allows it to do a lot of optimization (such as using the renderer's own importance sampling strategies).


Thanks, I'll have to look even more into OSL (vs. GLSL or RSL). I would hardly use the word 'worthless' just because RIS mode was introduced last year. The basics of what the procedurally generated textures book taught me back then still apply: geometry, math, building up patterns in software, etc. Not to mention, the basis of such shaders just needs to be translated, just as GLSL can be relatively simple to translate into OSL, as the videos by Thomas Dinges show: https://www.youtube.com/watch?v=4LQXjIDWtz0


Any movie by Pixar, Industrial Light and Magic, Double Negative, Framestore, Weta, etc.

Alternatively, a brief and by no means comprehensive list of things I've found to be better in Renderman versus Cycles:

* Bidirectional pathtracing and VCM for fast, accurate caustics and SDS (specular-diffuse-specular) illumination without fireflies. [0][1]

* Extremely efficient subdivision surfaces and displacement. In Cycles, there is a performance penalty for subdivs and displacement. In Renderman, there is almost zero overhead. [2][3]

* Significantly faster subsurface scattering [4][5]

* Support for OpenVDB volumes, and much faster volume rendering/scattering/etc. [6][7]

* Importance sampling for emissive effects, such as explosions and flames. Think candles with actual fire that don't firefly! [8]

* Importance sampling of HDR maps (basically, prevents fireflies when using HDR maps) [9]

* Much better memory efficiency. Renderman is designed to handle literally hundreds of gigs of stuff in memory going in and out of core all the time. Don't have a link for this one, since this comes mostly from experience with both renderers and having used Renderman in a large production studio.

* Disney "principled" BSDF [10]

* Arbitrary geometry lights, with importance sampling [11]

* Better/faster hair rendering and hair shaders [12][13][14]

* The denoiser in PRMan 20 is legitimately magic. [15][16]

Generally, there isn't necessarily any particular feature that Renderman has that Cycles is outright missing, but almost everything in Renderman is a lot faster and more optimized (as seen in the link list below, a lot of the research on making stuff like subsurface scattering and hair and whatnot extremely fast comes from Pixar and the Renderman devs/researchers in the first place). On the flip side, Cycles has a usable GPU mode, whereas I doubt Renderman will get a GPU mode anytime soon. Cycles is really pretty great, but having a volunteer team versus a dedicated, paid team that has to support not just Pixar's production studio but also a ton of large VFX houses produces a much faster development pace and a much higher incentive to optimize the crap out of every inch of Renderman.
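
To make the importance-sampling items above concrete: for HDR maps, the renderer builds a discrete CDF over the pixel luminances and draws light samples by inverting it, so bright pixels (the sun) are sampled often with proportionally small weights rather than rarely with huge weights, which is exactly where fireflies come from. A toy 1D Python sketch with made-up luminance values:

```python
import bisect
import random

def build_cdf(luminances):
    """Cumulative distribution over pixel luminances."""
    total = float(sum(luminances))
    cdf, acc = [], 0.0
    for lum in luminances:
        acc += lum
        cdf.append(acc / total)
    return cdf

def sample_pixel(cdf, u):
    """Invert the CDF: map uniform u in [0, 1) to a pixel index."""
    return bisect.bisect_right(cdf, u)

# A 'sun' pixel 100x brighter than the sky pixels gets picked
# roughly 100x more often, with a correspondingly smaller weight.
lums = [1.0, 1.0, 100.0, 1.0]
cdf = build_cdf(lums)
random.seed(0)
hits = sum(sample_pixel(cdf, random.random()) == 2 for _ in range(10000))
# hits lands near 10000 * 100/103, i.e. about 9700
```

The same CDF-inversion idea generalizes to emissive volumes and arbitrary geometry lights; the hard part in a production renderer is building and updating those distributions cheaply.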

Disclaimer: I now work for a different production renderer team at a different studio (some consider us to be a rival to Renderman, we don't see it quite that way), but I've worked at Pixar before.

[0] http://renderman.pixar.com/resources/current/RenderMan/PxrVC...

[1] http://cgg.mff.cuni.cz/~jaroslav/papers/2012-vcm/2012-vcm-pa...

[2] http://renderman.pixar.com/view/displacements

[3] http://graphics.pixar.com/opensubdiv/docs/intro.html

[4] http://graphics.pixar.com/library/ApproxBSSRDF/paper.pdf

[5] http://graphics.pixar.com/library/PhotonBeamDiffusion/paper....

[6] http://www.openvdb.org/

[7] http://renderman.pixar.com/resources/current/RenderMan/rfmOp...

[8] http://graphics.pixar.com/library/MISEmissive/paper.pdf

[9] http://renderman.pixar.com/resources/current/RenderMan/PxrSt...

[10] http://renderman.pixar.com/resources/current/RenderMan/PxrDi...

[11] http://renderman.pixar.com/resources/current/RenderMan/risLi...

[12] http://renderman.pixar.com/resources/current/RenderMan/PxrMa...

[13] http://graphics.pixar.com/library/DataDrivenHairScattering/p...

[14] http://graphics.pixar.com/library/ImportanceSamplingHair/pap...

[15] http://renderman.pixar.com/resources/current/RenderMan/risDe...

[16] https://renderman.pixar.com/view/denoiser


DNeg are still using Mantra (and Clarisse) for a fair amount of stuff (Ant-man is their first production using just RIS in PRMan 19/20), Framestore are solidly Arnold now, and Weta are using their own PRMan clone (but a path-tracer) called Manuka.

PRMan's volume support in RIS is still pretty poor - even in 20 - they're still using generic Woodcock tracking, which just doesn't work well: the max extinction coefficient is global to the whole volume, so it's very inefficient as you can't localise the step distance efficiently, or importance sample the density integration. Similarly, in RIS, the importance sampling for emissive volumes is non-existent (unless you write your own interior integrator).

Hair rendering in RIS is only fast-ish because they tessellate to triangles (based on the shading rate), which means stupid memory usage.

Pixar's (Disney's) denoiser doesn't really work with hair.


AFAIK Industrial Light and Magic used Arnold for a lot of movies, Framestore used Arnold for Gravity (I'm not sure what they use now), Double Negative uses a whole range of renderers (with a layer between their OSL shaders and the renderers) and Weta uses their own in-house renderer Manuka now. Most studios switched away from Renderman as its support for ray tracing was extremely lacking until a year ago. On your list of points:

* Bidirectional path tracing and VCM surely are nice, but you're absolutely fine with just path tracing. Artists have learned how to avoid SDS paths and how to optimize their scenes for path tracing. It would require a lot of time to train artists to optimize their scenes for VCM, and with the amount of diffuse noise (which is unavoidable) I'm not even sure if it's worth it.

* I doubt the subsurface scattering in Cycles is slower than Renderman as it only supports bicubic and gaussian profiles which are even easier to evaluate and sample than the approximate BSSRDF ;). Their profiles are extremely lacking (and I doubt it's enough these days), but SPI used bicubics and gaussians a few years ago [1].

* I absolutely can't imagine that Cycles doesn't have Environment Map sampling, which is something pretty basic and explained in PBRT. Sure, Cycles probably doesn't have an implementation of "Portal-Masked Environment Map Sampling" by Bitterli, Novak and Jarosz yet, so I'm sure Renderman is better at it, but Cycles is decent at it.

* Same for arbitrary geometry lights, but I'm sure Renderman does a better job at it (although Pixar is probably limited by Solid Angle's patents on importance sampling quads and other stuff, Cycles doesn't have to deal with patents).

* AFAIK Cycles has a state of the art hair shader, but it seems to lack the importance sampling scheme by Eugene d'Eon.

I absolutely agree with you that Renderman is a more complete and faster renderer, but Cycles is absolutely amazing if you keep in mind that it was completely developed by a small team of hobbyists. Cycles isn't meant for production rendering, but it's amazing for hobbyists.

[1] http://dl.acm.org/citation.cfm?id=2504520


> * Bidirectional path tracing and VCM surely are nice, but you're absolutely fine with just path tracing. Artists have learned how to avoid SDS paths and how to optimize their scenes for path tracing. It would require a lot of time to train artists to optimize their scenes for VCM, and with the amount of diffuse noise (which is unavoidable) I'm not even sure if it's worth it.

This is 100% not true. Forward path tracing is not adequate for anything except direct illumination. Any enclosed space that gets most of its illumination from some sort of bounce is not feasible without some sort of illumination caching. Bidirectional path tracing and VCM do not carry any additional complexity from an artist's point of view. Also, avoiding SDS paths is not always something that artists can simply do, and any illumination that is missing is room for improvement.

Do you have links to Solid Angle's patents? Renderman actually samples emissive geometry exceptionally well.


There are very few scenes that need bidirectional or VCM (in VFX, anyway) - in theory it converges faster for indirect illumination, but because both methods need to be used in incremental mode (1 sample per pixel over the entire image), you significantly lose cache coherency for texture access (even for camera rays) as you're constantly thrashing the texture cache, meaning renders are a lot slower. There are much better ways of reducing this indirect noise in production (portals, blockers).

On top of this, it's also very difficult to get the ray differentials correct when merging paths, so you end up point-sampling the textures, meaning huge amounts of texture IO.


So there are two different things there, bidirectional tracing without and with VCM. VCM takes longer to trace but takes care of outlier samples that can't be oversampled away in practice.

When it comes to any sort of bounce, forward raytracing is painful, anything that helps is good.

Most renderers don't take into account much cache coherency of textures at all, which makes me think you work for Disney?


Per iteration, bi-directional is extra work too. Obviously these integration methods are much better at finding certain hard-to-find light paths/sources, but my point is that in VFX, it's generally good enough to fake stuff by just turning shadow/transmission (depending on renderer) visibility off for certain objects to allow light through.

It's rare that we actually have glass/metal objects with lights in/around them such that bi-directional / VCM actually makes sense - even for stuff like eye caustics we've found that uni-directional does a pretty good job. And other situations like car headlights behind glass with metal reflectors behind the light, just turn transmission visibility off for the glass part, and yes, it's not fully-accurate (in terms of refraction and light leak lines), but we're generally using IES profiles for stuff like this so we get accurate light spread patterns anyway.

Well, they do in the sense that camera rays (and light rays in bi-directional/VCM) generally end up using the higher mipmap levels of textures, so you're reading a lot more data for those samples and pushing stuff out of cache much more. We've seen this with PRMan 19/20 in RIS: using incremental can be a 3x slowdown in some cases compared to non-incremental, because in non-incremental the camera rays are much more coherent per-bucket, so the highest-level mipmap tiles stay in cache much longer. With incremental, you're only sending a bucket's worth of samples and the equivalent texture reads for the camera rays, with secondary bounces generally using much smaller/lower mipmap tiles per texture request (and you can get away with box filtering those in 95% of cases), then moving on to the next bucket, which will probably need completely different high-level mipmap tiles for its camera rays. With texture IO often being the bottleneck in VFX rendering, this is a huge issue.
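The coherency effect described above can be illustrated with a toy LRU tile cache (this is just a sketch - it is not how PRMan's or OpenImageIO's texture cache actually works). Bucketed rendering replays the same few tiles many times before moving on; an incremental sweep touches every tile once per pass, so a bounded cache keeps evicting tiles right before they're needed again:

```python
from collections import OrderedDict

class TileCache:
    """Tiny LRU cache standing in for a texture tile cache."""
    def __init__(self, capacity):
        self.capacity, self.lru, self.misses = capacity, OrderedDict(), 0

    def fetch(self, tile):
        if tile in self.lru:
            self.lru.move_to_end(tile)          # cache hit: refresh LRU order
        else:
            self.misses += 1                    # cache miss: load the tile
            self.lru[tile] = True
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)    # evict least-recently-used

def miss_rate(access_order, capacity):
    cache = TileCache(capacity)
    for tile in access_order:
        cache.fetch(tile)
    return cache.misses / len(access_order)

# Hypothetical image: 64 buckets, each bucket's camera rays touching 4
# distinct high-res tiles, 16 samples per pixel.
buckets, tiles_per_bucket, spp = 64, 4, 16

# Non-incremental: finish all samples in a bucket before moving on.
bucketed = [b * tiles_per_bucket + t
            for b in range(buckets)
            for _ in range(spp)
            for t in range(tiles_per_bucket)]

# Incremental: 1 sample per pixel over the whole image, repeated.
incremental = [b * tiles_per_bucket + t
               for _ in range(spp)
               for b in range(buckets)
               for t in range(tiles_per_bucket)]
```

With a cache of 32 tiles (far fewer than the 256 in play), the bucketed order misses only on each bucket's first visit, while the incremental sweep misses on every single access.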

Nope, still in London for a bit, then off to NZ...


Hmmmmm.

So you're next in line to try to pull the sword from the Manuka stone?


Forward path tracing is what is used to render more than 95% of graphics you see in movies today. It's not great for closed spaces and VCM is better, but it's definitely possible to render everything with forward path tracing. People have been doing it for years, bidirectional methods have only started to get used in production very recently.
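The core of forward path tracing is a very small loop. As a sketch (not any production renderer's code), here it is applied to a closed "furnace" where every surface emits `emission` and reflects a fraction `albedo`, so the exact answer is the geometric series emission / (1 - albedo) and the Monte Carlo estimate should converge to it:

```python
import random

def trace_path(emission, albedo, rng, max_depth=256):
    # Follow one forward path, accumulating emission at each bounce.
    # Russian roulette terminates the path with probability (1 - albedo);
    # the 1/albedo survival weight cancels the bounce's albedo, so the
    # path throughput stays at 1.
    radiance, throughput = 0.0, 1.0
    for _ in range(max_depth):
        radiance += throughput * emission
        if rng.random() >= albedo:
            break
    return radiance

def render(samples, emission=0.5, albedo=0.5, seed=42):
    # Average many independent paths; exact answer here is
    # emission / (1 - albedo) = 1.0.
    rng = random.Random(seed)
    return sum(trace_path(emission, albedo, rng) for _ in range(samples)) / samples
```

The estimate is unbiased, but the path-length randomness is exactly the noise the thread is arguing about: halving the error costs four times the samples.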


Forward path tracing with direct lighting only has been done for 'years' (about 4-5 years). Pure forward path tracing without caching of secondary illumination is being done now, but at great expense of render time and artist time to deal with the inevitable noise.

It is something that people are getting to work in the end but it is a consistently painful disaster.

So basically forward path tracing is not adequate. That doesn't mean it can't be forced to work through huge render farms, hacks, compositor painting and who knows what else.


I agree it's possible to do better than forward path tracing, but a big part of why nearly everyone switched to it was that it significantly reduces artist time, is a lot less painful and requires fewer hacks than the methods used before it.

The state of the art keeps improving but I've never heard any call the switch to forward path tracing a painful disaster, quite the opposite.


Fewer hacks, yes; less painful I have to disagree with. It is the present and of course the future, but what I have seen is that the hacks shift to doing anything possible to reduce noise, and the artists' time shifts to doing whatever they can to deal with noise. It never ends up being 'fire and forget', because renders end up either overkill on CPU time or with noise somewhere in the image.

Forward raytracing as it currently stands is awful to use on a large scale for final motion-blurred images that can be sold to a client, but because it is simpler it is still better than the alternatives. That's because getting the same results out of (REYES) RenderMan was something left to a handful of gurus and fanatics.

Now we have a state where renders take 8 hours per frame per character per pass but people still like it better. So be it, but that is still very painful.


All fair points!

From your username, would you happen to be Marcos or someone else from Solid Angle? If so, big fan of Arnold. :)


Nope, sadly not. This username was one of the few computer-graphics-related usernames left on reddit, and solid angle sampling is so much cooler than area sampling. You're not the only person who thought this, so it's probably about time that I switch to a different username. I'm also a big fan of their work, though; I would love to work for that company once I'm done with university.

Btw, I also really enjoy reading your blog, keep up the good work. Looking forward to the day that you release the source of your renderer.


I'm not sure, but can RenderMan use the GPU? I read that if you use OSL in Cycles, rendering is restricted to the CPU only. The node editor can utilize the GPU, but not OSL.


I'm pretty sure every movie company you named could have rendered the same thing in Cycles just fine. You still can't name a single important thing that can't be done with Cycles.

Speed is good, but you cherry-picked some niche thing that's probably faster than Cycles (which you haven't provided evidence for). That's a silly argument.

EDIT:

Thanks for the updated post, much more substance.


The burden of proof here is on Cycles. There's 30 years of history showing off Renderman, and everybody in the CG industry knows that it's awesome. Not many people have even heard of Cycles. What movies have used it? Why is it better than the existing rendering software? Is it compatible with the Renderman spec? How easy is it to set up a render farm with it?


This. In theory Cycles can be used to make the same kind of movies Renderman can. Please name a few examples of cases where this was done.

On the flipside, when people say Renderman can be used to make production level stuff, they can point to literally any film with CG made in the past 25 years as evidence.


Post updated with detailed list and links for each list item.

At studios I've been at, we did tests with various renderers. Sure, we could have rendered a film in Cycles, but it would have taken much longer.


I'm pretty sure you can't.

Let's say you want to render realistic skin using Cycles. You can't: the only BSSRDF profiles that Cycles offers are simple bicubic and gaussian profiles, and these wouldn't work for rendering realistic skin. Cycles doesn't even offer a simple dipole. Renderman on the other hand offers a dipole, it offers the state-of-the-art Photon Beam Diffusion, and it offers Pixar's recent approximate BSSRDF, which is almost as accurate as PBD and is as easy to compute as a gaussian or a bicubic profile. So you'd have to implement your own BSSRDF shader, which is only doable for large studios such as Weta which have their own R&D department.

So let's say you want to render hair. You can do this with Cycles, but their importance sampling code is not state of the art (last time I checked), which means that you can probably expect double the amount of noise in Cycles. You could still use this for rendering, but computing time is quite limited (especially if artists want to compute previews). You'd either have to implement your own hair shader, or you'd have to purchase a ton of extra hardware.
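The "double the amount of noise" point comes down to importance sampling quality. As a toy sketch (a single diffuse surface under a uniform sky of radiance 1 - not Cycles' or anyone's hair shader), both estimators below converge to the same exact answer (`albedo`), but the one whose sample distribution matches the integrand has far less variance:

```python
import math
import random

def shade(samples, albedo=0.8, importance=True, seed=7):
    # Estimate outgoing radiance of a Lambertian surface under a uniform
    # sky of radiance 1. Exact answer: albedo.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        if importance:
            # Cosine-weighted sampling: pdf = cos(theta)/pi cancels the
            # integrand exactly, so every sample contributes `albedo`.
            total += albedo
        else:
            # Uniform hemisphere sampling: pdf = 1/(2*pi). For a uniform
            # direction on the hemisphere, cos(theta) is uniform in [0, 1).
            cos_theta = rng.random()
            total += (albedo / math.pi) * cos_theta * 2.0 * math.pi
    return total / samples
```

Both are unbiased, but the uniform-sampling estimate wobbles around the answer while the importance-sampled one nails it per sample; with a hair BCSDF's sharp specular lobes the gap between a good and a mediocre sampling scheme is much larger than in this toy.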

Let's say you want to render a scene with tons of triangles and textures; you probably won't be able to do that in Cycles. They don't use special tricks for quantizing those triangles in memory. Their code for caching textures (sometimes multiple terabytes for a single scene) is also not as good as Renderman's, especially when using the GPU (a GPU isn't very good at constantly streaming terabytes of data), which means that your texture data either needs to be limited, or you'd need to throw a lot of rendering machines at it.

Cycles can do most stuff that Renderman (or Arnold) can, but that's not the point. If you paid me to work on a renderer for you full-time for a year, I could produce a feature-complete renderer, but it wouldn't be optimized. The code wouldn't be optimized, but probably more importantly the ray-intersection code and the importance-sampling code wouldn't be optimized, which means that renders would be slow and noisy. Pixar has a whole team working on optimizing their code and a whole team of researchers working on improving the importance sampling. Cycles is made by hobbyists who are doing an excellent job (Cycles is an amazing renderer for amateur users), but it's just not in the same ballpark as Renderman or Arnold.

Cycles is an absolutely amazing renderer, but it just lacks a lot of stuff that you'd need during production of feature films. If you want to use Cycles for your own work I can absolutely recommend it: it will have everything you need and it's open source!


Just one nitpick, Arnold supports the same BSSRDF profiles and has been used for realistic skin rendering in movies. The gaussian profile is actually suitable for rendering realistic skin, by using a combination of multiple gaussians you can very accurately match measured human skin profiles.
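The sum-of-gaussians idea above is simple to sketch. The radial profile is just a weighted sum of normalized 2D Gaussians; the (variance, weight) pairs below are hypothetical placeholders in the spirit of the d'Eon & Luebke multi-Gaussian skin fit, not the published per-channel values:

```python
import math

def gaussian_2d(variance, r):
    # Normalized 2D radial Gaussian, used as one diffusion lobe.
    return math.exp(-r * r / (2.0 * variance)) / (2.0 * math.pi * variance)

# Hypothetical (variance, weight) lobes for one color channel.
RED_LOBES = [(0.0064, 0.233), (0.0484, 0.100), (0.187, 0.118),
             (0.567, 0.113), (1.99, 0.358), (7.41, 0.078)]

def diffusion_profile(r, lobes=RED_LOBES):
    # R(r) = sum_i w_i * G(v_i, r): narrow lobes keep pore-level detail
    # sharp, broad lobes give the soft subsurface bleed.
    return sum(w * gaussian_2d(v, r) for v, w in lobes)
```

Fitting the weights per channel against a measured or dipole-derived profile is what makes (or breaks) the waxy look being debated here; the gaussians themselves are cheap to importance-sample, which is why the approach is popular.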


Yeah, such profiles can certainly be used for skin rendering. I know that SPI used the method you mentioned on a few movies. There will be visible flaws though: the skin, and especially the lips, will look too waxy when using gaussians or bicubics. Weta Digital used Quantized Diffusion on Prometheus for this reason [1]. Pixar has an upcoming talk on a simple yet highly accurate BSSRDF profile this SIGGRAPH, so let's hope the folks at Blender will implement that in Cycles.

[1] http://www.fxguide.com/featured/prometheus-rebuilding-hallow...


> it will have everything you need and it's open source!

Not everything for animation, yet. BTW, for those who don't know, Solid Angle is the company behind the Arnold renderer, and the creator of Cycles, Brecht Van Lommel, left the Blender Foundation not so long ago to work for them. The number of people contributing to Cycles is pretty small compared to other areas of Blender.


> I'm pretty sure every movie company you named could have rendered the same thing in Cycles just fine.

Movies aren't made by software, they're made by massive teams of people who share overlapping expertise.

Do you want to write Lua for Kerbal Space Program, or Perl for NASA?


http://renderman.pixar.com/view/movies-and-awards

Although it should be pointed out that PRMan wasn't the only renderer used for quite a few of these movies (definitely the later ones) - multiple studios work on movies these days, and until a year ago other renderers like Arnold and V-Ray were very close to knocking Pixar completely out of the market (even ILM switched to using Arnold for a couple of years).



