It's best to remember that ray tracing isn't the be-all and end-all of graphics, since it's far harder for artists to control the results than with the other, hacked-up ways. One consequence of this shows in their example comparison, where I think it's entirely subjective to say the ray-traced output is better. Neither is particularly great.
The big win is around indirect lighting, but the development of that in standard rasterisers in the last decade has exceeded even my wildly optimistic expectations.
New options = good, but there's no such thing as a graphics silver bullet.
I'm sorry man, but that sounds crazy. Raytraced scenes are much easier for artists to control, because things actually behave the way you would intuitively expect (if the ray count is high enough).
If raytraced scenes were hard to control, why would every single animated movie be raytraced?
Raytracing really is the be-all and end-all of graphics. The more rays you can render, the more realistic your scene will be, up to 100% realism.
Of course, we can get pretty far with the hacks as well, and it's hard to say whether hardware raytracing can come close to the quality that hardware shader hacks can achieve on traditional GPUs in real time.
I think you are missing a shift that has happened in the industry over the last few years. Yes, now that computers are fast enough, Pixar raytraces everything in RenderMan. Most other major studios use that or Arnold, a pure raytracer.
That's Pixar - they're behind the curve, and PRMan 17 and 18 (the commercial renderer they sell) have been pretty poor at full raytracing (Monte Carlo integration) due to poor acceleration structures and the overhead of their RSL shading language.
PRMan 19 looks like it's going to fix these issues to some degree.
Other renderers like Arnold and VRay (full raytracers) have been in use at other studios for the last five years.
You are absolutely right. People should stop upvoting my post because I'm totally wrong. Not only was Cars the first movie Pixar did using ray tracing, they found it glitchy, and Monsters University (MU), released seven years later, was actually their first fully raytraced film.
That said, the technique they did use was scanline rendering (Pixar's REYES), which is still rather different from z-buffering, the way GPUs generally render.
Of course now I'll change my argument that Pixar is after a non-realistic cartoony feel, so that gives them more freedom.
If you look at live-action movies, the CG parts in there are either painted or ray traced; they need absolute control to be able to blend them.
That's Pixar - they've been late to the path tracing party by several years. Also MU didn't use raytracing for hair shadows - they still used deep shadow maps for that.
Other studios like ILM, SPI and Bluesky have been using full path tracing for years.
Raytracing is just one tool in a bag of necessary algorithms. For instance, there's no obvious way to extend raytracing to offer indirect lighting; all existing algorithms involve some form of tradeoff. Furthermore, many effects are flat-out a pain to render with raytracing: displacement mapping, subsurface scattering, anti-aliasing, and scenes that need both rendering speed and dynamic geometry. All of those require substantial thought and are non-trivial to implement.
And don't forget non-realistic graphics.
The problems of rendering are unlikely to become inherently easier; I don't think raytracing will ever be the "end all of graphics". And while "every single animated movie" might be raytraced to some extent, it's highly unlikely any of them use pure raytracing to achieve the effect.
Uh, what. Sampled raytracing is the most natural approach to indirect lighting, displacement mapping, subsurface scattering, and anti-aliasing.
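To illustrate just the anti-aliasing claim: jitter each ray inside the pixel footprint and average, and edges blend automatically with no separate technique. A minimal sketch, my own illustration rather than anyone's renderer; `camera_ray` and `trace` are hypothetical stand-ins for whatever ray generator and radiance estimator you already have:

```python
# Sketch of why anti-aliasing "falls out" of sampled ray tracing:
# jitter each ray within the pixel footprint and average the results.
import random

def render_pixel(trace, px, py, camera_ray, samples=16):
    """Average jittered sub-pixel samples; geometry edges blend for free."""
    r = g = b = 0.0
    for _ in range(samples):
        # Offset the ray by a random sub-pixel amount before tracing it.
        origin, direction = camera_ray(px + random.random(),
                                       py + random.random())
        cr, cg, cb = trace(origin, direction)
        r, g, b = r + cr, g + cg, b + cb
    return (r / samples, g / samples, b / samples)
```

Jitter the sample positions in time or over the lens instead and the exact same loop gives you motion blur and depth of field.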
As far as non-realistic rendering goes, I am not sure how this is relevant: you can alter the properties of light and still use ray tracing, and many effects use image-based techniques anyway.
Raster will die in the long run, it's pretty obvious.
Raytracing may be conceptually the most natural, but it's not always the best choice for a lot of these cases - displacement, motion blur, depth of field and anti-aliasing are still often handled much more quickly by micropolygon/scanline rendering even if the lighting and shading is raytraced. Hair and fur are pretty problematic with raytracing too.
That said, physically-based shading is massively easier to art-direct, even if the "front end" as it were of the renderer isn't naively tracing rays from the camera. There's a killer set of slides and notes documenting ILM and Sony's switch to PBR here: http://renderwonk.com/publications/s2010-shading-course/
> Raster will die in the long run, it's pretty obvious.
Care to elaborate why you think this is the case?
Even when doing raytracing, the fastest way to do the first pass is to rasterize your triangles and output hit data to textures (similarly to deferred rendering).
Additionally, triangle rasterization can be used for vector graphics, etc., while raytracing is inherently only for 3D content.
I think that in the future we will have some kind of hardware acceleration for raytracing along with traditional triangle rasterization and they will be used together.
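To make the first-pass idea above concrete, here's a hedged sketch of that hybrid structure in pure Python; `rasterize_gbuffer` and `trace_secondary` are hypothetical placeholders for a real GPU raster pass and a ray-based shading pass, not any actual API:

```python
# Hybrid pipeline sketch: primary visibility from one raster pass (a
# deferred-style G-buffer), with rays spent only on secondary effects.
def render(width, height, rasterize_gbuffer, trace_secondary):
    # Pass 1: the rasterizer writes per-pixel hit position, normal and
    # material, exactly like a deferred renderer's geometry pass.
    gbuffer = rasterize_gbuffer(width, height)
    image = []
    for pixel in gbuffer:
        if pixel is None:                    # no geometry covers this pixel
            image.append((0.0, 0.0, 0.0))
            continue
        position, normal, material = pixel
        # Pass 2: secondary rays (shadows, reflections, GI) start from the
        # rasterized hit point, so no rays are wasted on primary visibility.
        image.append(trace_secondary(position, normal, material))
    return image
```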
In the long run graphics will probably converge on an approach that properly estimates the rendering equation. That's why I think ray tracing ultimately will be the preferred solution.
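For reference, that's Kajiya's rendering equation, which everything in this thread is some approximation of: outgoing radiance at a point is emitted radiance plus the BRDF-weighted integral of incoming radiance over the hemisphere.

```latex
% Kajiya's rendering equation.
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i
```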
> Sampled raytracing is the most natural approach to indirect lighting
It's been a while (~15 years or so) since I looked very deeply into this, but at that time I thought that radiosity was notably superior to raytracing for indirect lighting, and everyone and their cousin seemed to be doing hybrid raytracing/radiosity systems for realistic rendering.
So, you didn't address the problem that there's no obvious raytracing algorithm. All of the above algorithms have tradeoffs that make them ill-advised in some sense. None of them can handle displacement mapping or dynamic scenery with any speed. Get back to me when you solve those ridiculously combinatorial problems (any object can interact with any other object), and when you can animate objects in constant time between frames (as opposed to, at best, O(log(n)), and at worst, O(n)); maybe then it'll be useful for realtime.
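For concreteness, the per-frame update cost in question usually looks something like this sketch: refitting a BVH's bounding boxes after objects move touches every node, i.e. O(n), and since refits degrade tree quality, periodic rebuilds are needed on top. This is my own illustration; `prim.aabb()` is a hypothetical primitive interface.

```python
# Bottom-up BVH refit after animation: O(n) in the number of nodes.
class BVHNode:
    def __init__(self, box=None, left=None, right=None, prim=None):
        # box = ((minx, miny, minz), (maxx, maxy, maxz))
        self.box, self.left, self.right, self.prim = box, left, right, prim

def union(a, b):
    """Smallest AABB enclosing two AABBs."""
    return (tuple(map(min, a[0], b[0])), tuple(map(max, a[1], b[1])))

def refit(node):
    """Post-order walk recomputing every AABB after geometry has moved."""
    if node.prim is not None:                # leaf: re-read moved geometry
        node.box = node.prim.aabb()          # hypothetical primitive API
    else:
        node.box = union(refit(node.left), refit(node.right))
    return node.box
```

Moving a single object only dirties its O(log n) ancestors in the best case, which is the bound mentioned above; moving everything forces the full O(n) pass.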
Path tracing can be seen as an extension of ray tracing that supports a much broader range of effects like indirect lighting, soft shadows, anti-aliasing, etc. With the correct function for the material, you can simulate subsurface scattering relatively easily too. But, to me, the beauty of this technique goes beyond the effects you can achieve: it's an incredibly compact, uniform way of simulating light. Compare that to rasterization, where to get the same result you need an enormous code base that treats every effect with caution. Adding a reflection to a rasterized scene might not reflect your shadows, etc. You need to care about every possible interaction between every effect.
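To make the "compact and uniform" point concrete, here is a minimal toy diffuse path tracer, my own sketch rather than anything from a production renderer; soft shadows, indirect lighting and color bleeding all fall out of one recursive rule:

```python
# Toy diffuse path tracer: spheres only, cosine-weighted sampling.
import math, random

class Sphere:
    def __init__(self, center, radius, albedo, emission=(0.0, 0.0, 0.0)):
        self.center, self.radius = center, radius
        self.albedo, self.emission = albedo, emission

def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def mul(a, b): return (a[0] * b[0], a[1] * b[1], a[2] * b[2])
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def intersect(origin, direction, sphere):
    """Ray/sphere test; returns hit distance t, or None for a miss."""
    oc = sub(origin, sphere.center)
    b = dot(oc, direction)
    disc = b * b - dot(oc, oc) + sphere.radius ** 2
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def cosine_sample(n):
    """Random direction about normal n, cosine-weighted (importance sampling)."""
    phi, r2 = 2 * math.pi * random.random(), random.random()
    # Build an orthonormal basis (u, v, n) around the normal.
    u = norm((0.0, -n[2], n[1]) if abs(n[0]) < 0.9 else (-n[2], 0.0, n[0]))
    v = (n[1]*u[2] - n[2]*u[1], n[2]*u[0] - n[0]*u[2], n[0]*u[1] - n[1]*u[0])
    x, y = math.cos(phi) * math.sqrt(r2), math.sin(phi) * math.sqrt(r2)
    return norm(add(add(scale(u, x), scale(v, y)), scale(n, math.sqrt(1 - r2))))

def radiance(origin, direction, scene, depth=0):
    """The single uniform rule: emitted light + albedo * incoming light."""
    if depth > 4:
        return (0.0, 0.0, 0.0)
    hit, t_min = None, float("inf")
    for s in scene:
        t = intersect(origin, direction, s)
        if t is not None and t < t_min:
            hit, t_min = s, t
    if hit is None:
        return (0.0, 0.0, 0.0)              # black background
    p = add(origin, scale(direction, t_min))
    n = norm(sub(p, hit.center))
    if dot(n, direction) > 0:
        n = scale(n, -1.0)                   # face the normal toward the ray
    # Cosine-weighted sampling cancels the (w . n)/pi factor of the
    # Lambertian BRDF, so the estimator reduces to albedo * L_incoming.
    bounce = radiance(p, cosine_sample(n), scene, depth + 1)
    return add(hit.emission, mul(hit.albedo, bounce))
```

Point a camera at a few emissive and diffuse spheres and average a few hundred radiance() samples per pixel, and you get soft shadows and color bleeding for free; a real renderer adds Russian roulette, better sampling and more BRDFs, but the structure stays about this small.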
But that is because of hardware capability... i.e. computers weren't good enough for artists to work with... wasn't it?
I've been out of the 3D world for a while, but ray tracing was the be-all-end-all/holy grail of 3D. It is simulating light... you can't do better than physics.
John Carmack confirmed what fidotron said in the QuakeCon 2012 talk which I stumbled across in the comments here recently (https://www.youtube.com/watch?v=MG4QuTe8aUw): he talks about id artists referring to light-map dinking as "painting with light" and generally being reluctant to give up direct control of final appearances.
Likely you're both right, and while it's harder to learn how to dink scenes and assets with ad-hoc rendering techniques than just use raytracing, an artist who has already mastered the black arts of tweaking everything just so is going to resent losing that control and having to let the physics take over.
> an artist who has already mastered the black arts of tweaking everything just so is going to resent losing that control and having to let the physics take over.
Which in the long run is good for gamers. The industry's obsession with pre-baked rendering passes is holding games back on the interactivity front.
You might feel differently if you ever had to do shadow mapping on traditional hardware. Even in AAA games, there are tons of artifacts in the shadows. Same goes for reflections.
I don't see why it's harder for artists to control the results... can you elaborate? It seems like it's going to be 90% the same. You have a point on a surface, you have a camera vector and light vectors, you supply a small piece of code and out pops a color. The major differences in the pipeline are in the scaffolding: depth tests, reflections, and shadows will be done in different ways. The differences for artists are that they'll have to learn a system with different limitations.
My guess is that game graphics are going to follow the developments in movie CGI. Pixar switched to ray tracing in 2013, and games will switch when hardware powers up and expertise filters down.
> The differences for artists are that they'll have to learn a system with different limitations.
If current-generation AAA games are any indication, either very few artists ever learned the raster workarounds or few artists ever had time to implement the workarounds.
I think ray-tracing will be a game changer. Raster shadows are easy to get subtly wrong and very difficult to get right. Cube-maps are easy to get subtly wrong and very difficult to get right. Transparency is easy to get subtly wrong and very difficult to get right. The list goes on.
> Even in AAA games, there are tons of artifacts in the shadows. Same goes for reflections.
Exactly! I can't count the number of times I've seen shadows pointing in the wrong direction, having the wrong color, having the wrong penumbra/antumbra, casting through solid objects, etc. Cube-map reflections are even worse (yay for faucet handles reflecting forest scenes), especially when they're moving. Expect to see a reflection slide up the body of a car as it comes to a stop? If you're not in a car-racing game, forget about it.
All of those problems can be overcome with artist sweat and tears. The code has already been written and is in the big engines, but the effects still regularly fail to happen in AAA titles.
Ray-tracing makes it easy to do things right. None of the raster techniques have achieved that landmark. This WILL be a game changer.
I'm sorry, but there is a pronounced industry trend exactly against what you argue. Physically based rendering is winning out almost everywhere. A straightforward path tracer is capable of simulating nearly all the physics of light to any desired degree of fidelity. Part of why it's won out is that it's actually far simpler for artists to work in terms of real-world material concepts than to tune abstract parameters in fake models that can be coerced into looking real. E.g. "dusty metal with this albedo map" vs. maps for parameters in some mutilated and extended Phong shader.
Physically-based rendering has little to do with global illumination, and is only really about the BRDF [1] used for computing the final pixel color based on a number of inputs for that point in space.
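Roughly: a BRDF is just a function f_r(ω_i, ω_o) evaluated at the shading point, and "physically based" mostly constrains it to conserve energy and be reciprocal. A minimal sketch of my own, Lambertian only, with vectors assumed normalized; note that nothing here cares whether visibility came from rasterization or rays:

```python
# A BRDF is a per-point function of incoming/outgoing directions.
import math

def lambert_brdf(albedo):
    """Perfectly diffuse BRDF: constant albedo/pi, independent of direction."""
    return lambda wi, wo: tuple(a / math.pi for a in albedo)

def shade(brdf, n, wi, wo, light_radiance):
    """Direct lighting at one point: f_r(wi, wo) * L_i * cos(theta_i)."""
    cos_i = max(0.0, sum(a * b for a, b in zip(n, wi)))
    f = brdf(wi, wo)
    return tuple(f[k] * light_radiance[k] * cos_i for k in range(3))
```

Swapping in a glossier model (e.g. a GGX lobe) changes only the function handed to shade(), which is exactly why the BRDF is the "physically based" part.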
I don't think so, not with modern tools anyway. Raytracing is, in my experience, the technology that most closely models the real world. Most technologies used in games even today are hacks, often using precomputed values derived from ray-traced models. Raytracing is quite natural to work with because it simulates real-world objects and does so in a realistic manner. The only difficulty might be the lack of real-time raytracing hardware like that depicted here, which slows development since one has to wait for a scene or animation to render. IMO, raytracing is indeed the pinnacle of photorealistic rendering currently.
I've yet to see a single rasterised game come anywhere close to ray-tracing in the shadow department and that's what destroys immersion for me the most: low-resolution, aliased shadows crawling all over the place.