Interactive raytracing is the future both on mobile and desktop. My project http://clara.io, an online 3D modeler + renderer, recently gained the ability to do interactive ray-traced 3D embeds using V-Ray (one of the best and most accurate renderers in the world).
It is cloud streamed, because we do not want to be limited by the CPU/memory capabilities of the client. V-Ray is an amazing renderer, but complex scenes can require 32GB+ of memory and up to 40 cores, or even clusters of machines. We want to virtualize that infrastructure for our users.
It's best to remember that ray tracing isn't the be-all and end-all of graphics, since it's far harder for artists to control the results than with the other hacked-up approaches. One consequence of this shows in their example comparison, where I think it's entirely subjective to say the ray-traced output is better. Neither is particularly great.
The big win is around indirect lighting, but the development of that in standard rasterisers in the last decade has exceeded even my wildly optimistic expectations.
New options = good, but there's no such thing as a graphics silver bullet.
I'm sorry man, but that sounds crazy. Raytraced scenes are much easier for artists to control, because things actually behave the way you would intuitively expect (if the number of rays is high enough).
If raytraced scenes were hard to control, why would every single animated movie be raytraced?
Raytracing really is the be all and end all of graphics. The more rays you can render, the more realistic your scene will be, up to 100% realism.
Of course, we can come pretty far with the hacks as well, and it's hard to say whether hardware raytracing can come close to the quality that hardware shader hacks can achieve on traditional GPUs in real time.
I think you are missing a shift that has happened in the industry over the last few years. Yes, now that computers are fast enough, Pixar raytraces everything in RenderMan. Most other major studios use that or Arnold, a pure raytracer.
That's Pixar - they're behind the curve, and PRMan 17 and 18 (the commercial renderer they sell) have been pretty poor at full raytracing (Monte Carlo integration) due to poor acceleration structures and the overhead of their RSL shading language.
PRMan 19 looks like it's going to fix these issues to some degree.
Other renderers like Arnold and V-Ray (full raytracers) have been in use at other studios for the last 5 years.
You are absolutely right. People should stop upvoting my post because I'm totally wrong. Not only was Cars the first movie that was done using ray tracing, they thought it was glitchy, and MU, which was released 6 years later, was actually the first movie to be fully raytraced.
That said, the technique they did use was scanline rendering, which is still rather different from z-buffering, which is the way GPUs generally render.
Of course now I'll change my argument that Pixar is after a non-realistic cartoony feel, so that gives them more freedom.
If you look at live action movies, the CG parts in there are either painted or ray traced; they need absolute control to be able to blend them.
That's Pixar - they've been late to the path tracing party by several years. Also MU didn't use raytracing for hair shadows - they still used deep shadow maps for that.
Other studios like ILM, SPI and Bluesky have been using full path tracing for years.
Raytracing is just one tool in a bag of necessary algorithms. For instance, there's no obvious way to extend raytracing to offer indirect lighting—all existing algorithms offer some form of tradeoff. Furthermore, many effects are flat out a pain to render with raytracing: displacement mapping, subsurface scattering, anti-aliasing, and scenes where both speed of rendering and dynamic geometry are needed. All of those require substantial thought and are non-trivial to implement.
And don't forget non-realistic graphics.
The problems of rendering are unlikely to become inherently easier; I don't think raytracing will ever be the "end all of graphics". And while "every single animated movie" might be raytraced to some extent, it's highly unlikely any of them use pure raytracing to achieve the effect.
Uh, what. Sampled raytracing is the most natural approach to indirect lighting, displacement mapping, subsurface scattering, and anti-aliasing.
As far as non-realistic rendering goes, I am not sure how this is relevant: you can alter the properties of light and still use ray tracing, and many effects use image-based techniques anyway.
Raster will die in the long run, it's pretty obvious.
Raytracing may be conceptually the most natural, but it's not always the best choice in a lot of these cases - displacement, motion blur, depth of field and anti-aliasing are still often handled much more quickly by micropolygon/scanline rendering even if the lighting and shading is raytraced. Hair and fur are pretty problematic with raytracing too.
That said, physically-based shading is massively easier to art-direct, even if the "front end" as it were of the renderer isn't naively tracing rays from the camera. There's a killer set of slides and notes documenting ILM and Sony's switch to PBR here: http://renderwonk.com/publications/s2010-shading-course/
> Raster will die in the long run, it's pretty obvious.
Care to elaborate why you think this is the case?
Even when doing raytracing, the fastest way to do the first pass is to rasterize your triangles and output hit data to textures (similar to deferred rendering).
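A minimal sketch of the kind of "hit data" buffers meant here, assuming a deferred-style G-buffer layout (the names and shapes are mine, not any particular engine's API):

```python
import numpy as np

W, H = 1280, 720

# Hypothetical G-buffer: the rasterizer fills these per pixel; secondary rays
# (shadows, reflections, GI) are then traced starting from gbuffer["position"]
# instead of tracing primary rays from the camera.
gbuffer = {
    "position": np.zeros((H, W, 3), dtype=np.float32),  # world-space hit point
    "normal":   np.zeros((H, W, 3), dtype=np.float32),  # surface normal at the hit
    "albedo":   np.zeros((H, W, 3), dtype=np.float32),  # material base colour
    "depth":    np.full((H, W), np.inf, dtype=np.float32),
}
```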
Additionally, triangle rasterization can be used for vector graphics, etc., while raytracing is inherently only for 3D content.
I think that in the future we will have some kind of hardware acceleration for raytracing along with traditional triangle rasterization and they will be used together.
In the long run graphics will probably converge on an approach that properly estimates the rendering equation. That's why I think ray tracing ultimately will be the preferred solution.
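For reference, the rendering equation being referred to (Kajiya 1986) is:

$$L_o(x,\omega_o) = L_e(x,\omega_o) + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,(\omega_i\cdot n)\,\mathrm{d}\omega_i$$

i.e. outgoing light is emitted light plus all incoming light weighted by the surface's BRDF; path tracing estimates the integral by sampling, while rasterizers approximate pieces of it with special-case techniques.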
> Sampled raytracing is the most natural approach to indirect lighting
It's been a while (~15 years or so) since I looked very deeply into this, but at that time I thought that radiosity was notably superior to raytracing for indirect lighting, and everyone and their cousin seemed to be doing hybrid ray tracing/radiosity systems for realistic rendering.
So, you didn't address the problem of "there's no obvious raytracing algorithm." All of the above algorithms have tradeoffs that make them ill-advised in some sense. None of the above algorithms can handle displacement mapping or dynamic scenery with any speed. Get back to me when you solve that ridiculously combinatorial problem: any object can interact with any other object. And when you can animate objects in constant time between frames (as opposed to, at best, O(log(n)), and at worst, O(n)), maybe it'll be useful for realtime.
Path tracing can be seen as an extension of ray tracing that supports a much broader range of effects like indirect lighting, soft shadows, anti-aliasing, etc. With the right function for the material, you can simulate subsurface scattering relatively easily too. But, to me, the beauty of this technique goes beyond the effects you can achieve: it's an incredibly compact, uniform way of simulating light. Compare that to rasterization, where to get the same result you need an enormous code base that treats every effect with caution. Adding a reflection to a rasterized scene might not reflect your shadows, etc. You need to care about every possible interaction between every effect.
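To illustrate the "compact, uniform" claim, here's a minimal sketch of a diffuse-only path tracer (a toy scene of my own, not any production renderer): shadows, colour bleeding and light sources all fall out of one recursive rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scene: (center, radius, albedo, emission). The last sphere is the light.
SPHERES = [
    (np.array([0.0, -1000.5, -3.0]), 1000.0, np.array([0.8, 0.8, 0.8]), np.zeros(3)),    # floor
    (np.array([0.0,     0.0, -3.0]),    0.5, np.array([0.7, 0.2, 0.2]), np.zeros(3)),    # red ball
    (np.array([0.0,     2.0, -1.0]),    0.5, np.zeros(3), np.array([12.0, 12.0, 12.0])),  # lamp
]

def intersect(origin, direction):
    """Closest ray/sphere hit along the ray, or (None, None) if it escapes."""
    best_t, best_s = None, None
    for s in SPHERES:
        oc = origin - s[0]
        b = np.dot(oc, direction)
        disc = b * b - (np.dot(oc, oc) - s[1] ** 2)
        if disc > 0.0:
            t = -b - np.sqrt(disc)
            if t > 1e-4 and (best_t is None or t < best_t):
                best_t, best_s = t, s
    return best_t, best_s

def radiance(origin, direction, depth=0):
    """One uniform rule: emitted light plus albedo-weighted light arriving
    from a single randomly sampled bounce direction."""
    if depth >= 4:
        return np.zeros(3)
    t, sphere = intersect(origin, direction)
    if sphere is None:
        return np.zeros(3)                          # black background
    center, _, albedo, emission = sphere
    point = origin + t * direction
    normal = (point - center) / np.linalg.norm(point - center)
    # Cosine-weighted diffuse bounce: normal plus a uniform random unit vector.
    v = rng.normal(size=3)
    new_dir = normal + v / np.linalg.norm(v)
    new_dir = new_dir / np.linalg.norm(new_dir)
    return emission + albedo * radiance(point, new_dir, depth + 1)

# Estimate one pixel by averaging many single-sample paths through it.
pixel = np.mean([radiance(np.zeros(3), np.array([0.0, 0.0, -1.0]))
                 for _ in range(256)], axis=0)
print(pixel)   # noisy but unbiased estimate of that pixel's colour
```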
But that is because of hardware capability... i.e. computers weren't good enough for artists to work with... wasn't it?
I've been out of the 3D world for a while, but ray tracing was the be-all-end-all/holy grail of 3D. It is simulating light... you can't do better than physics.
John Carmack confirmed what fidotron said in the Quakecon 2012 talk, which I stumbled across in the comments here recently (https://www.youtube.com/watch?v=MG4QuTe8aUw): he talks about id artists referring to light-map dinking as "painting with light" and generally being reluctant to give up direct control of final appearances.
Likely you're both right, and while it's harder to learn how to dink scenes and assets with ad-hoc rendering techniques than just use raytracing, an artist who has already mastered the black arts of tweaking everything just so is going to resent losing that control and having to let the physics take over.
> an artist who has already mastered the black arts of tweaking everything just so is going to resent losing that control and having to let the physics take over.
Which in the long run is good for gamers. The industry obsession with pre-baked rendering passes is holding back games on an interactivity level.
You might feel differently if you ever had to do shadow mapping on traditional hardware. Even in AAA games, there are tons of artifacts in the shadows. Same goes for reflections.
I don't see why it's harder for artists to control the results... can you elaborate? It seems like it's going to be 90% the same. You have a point on a surface, you have a camera vector and light vectors, you supply a small piece of code and out pops a color. The major differences in the pipeline are in the scaffolding: depth tests, reflections, and shadows will be done in different ways. The differences for artists are that they'll have to learn a system with different limitations.
My guess is that game graphics are going to follow the developments in movie CGI. Pixar switched to ray tracing in 2013, and games will switch when hardware powers up and expertise filters down.
> The differences for artists are that they'll have to learn a system with different limitations.
If current-generation AAA games are any indication, either very few artists ever learned the raster workarounds or few artists ever had time to implement the workarounds.
I think ray-tracing will be a game changer. Raster shadows are easy to get subtly wrong and very difficult to get right. Cube-maps are easy to get subtly wrong and very difficult to get right. Transparency is easy to get subtly wrong and very difficult to get right. The list goes on.
> Even in AAA games, there are tons of artifacts in the shadows. Same goes for reflections.
Exactly! I can't count the number of times I've seen shadows pointing in the wrong direction, having the wrong color, having the wrong penumbra/antumbra, casting through solid objects, etc. Cube-map reflections are even worse (yay for faucet handles reflecting forest scenes), especially when they're moving. Expect to see a reflection slide up the body of a car as it comes to a stop? If you're not in a car-racing game, forget about it.
All of those problems can be overcome with artist sweat and tears. The code has already been written and is in the big engines, but the effects still regularly fail to happen in AAA titles.
Ray-tracing makes it easy to do things right. None of the raster techniques have achieved that landmark. This WILL be a game changer.
I'm sorry, but there is a pronounced industry trend exactly against what you argue. Physically based rendering is winning out almost everywhere. A straightforward path tracer is capable of simulating nearly all the physics of light to any desired degree of fidelity. Part of why it's won out is that it's actually far simpler for artists to work in terms of real-world material concepts vs. tuning abstract parameters in fake models that can be coerced into looking real. E.g. "dusty metal with this albedo map" vs. maps for parameters in some mutilated and extended Phong shader.
Physically-based rendering has little to do with global illumination, and is only really about the BRDF [1] used for computing the final pixel color based on a number of inputs for that point in space.
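For anyone unfamiliar with the term: the BRDF is just the function that says how much of the light arriving from direction ω_i leaves toward ω_o at a surface point. For example, an ideal diffuse (Lambertian) surface with albedo ρ has the constant BRDF on the right:

$$f_r(x,\omega_i,\omega_o) = \frac{\mathrm{d}L_o(x,\omega_o)}{L_i(x,\omega_i)\,(\omega_i\cdot n)\,\mathrm{d}\omega_i}, \qquad f_{r,\text{Lambert}} = \frac{\rho}{\pi}$$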
I don't think so, not with modern tools anyway. Raytracing is in my experience the technology that most closely models the real world. Most technologies used in games even today are hacks, using precomputed values oftentimes derived from ray-traced models. Raytracing is quite natural to work with because it simulates real-world objects and does so in a realistic manner. The only difficulty might be the lack of real-time raytracing hardware like that depicted here, which slows development, since one has to wait for a scene or animation to render. IMO, raytracing is indeed the pinnacle of photorealistic rendering currently.
I've yet to see a single rasterised game come anywhere close to ray-tracing in the shadow department and that's what destroys immersion for me the most: low-resolution, aliased shadows crawling all over the place.
Ray tracing works backwards from the way actual optics works. It sends out a ray for each pixel, finds out what it falls on, and then does the math to figure out what it should look like. In a simple example, you have a few light sources; you trace a ray back to an object (a polygonal surface), then you shoot off other rays from there toward the light sources (or you shoot off potential reverse reflection rays). If something is blocking a light source, you adjust the amount of shadow being rendered, or you bounce off that surface again toward the light sources, and so on. This makes it possible to render shadows, reflections, and so on with decent fidelity.
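A minimal sketch of the shadow-ray step described above, assuming a toy scene of sphere occluders and a single unit-intensity point light (names and scene are mine, for illustration only):

```python
import numpy as np

LIGHT_POS = np.array([5.0, 5.0, 0.0])
# Occluders: (center, radius) spheres that might sit between a surface and the light.
OCCLUDERS = [(np.array([2.0, 2.0, 0.0]), 0.5)]

def blocked(point, light_pos):
    """Shadow ray: does anything sit between this surface point and the light?"""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    direction = to_light / dist
    for center, radius in OCCLUDERS:
        oc = point - center
        b = np.dot(oc, direction)
        disc = b * b - (np.dot(oc, oc) - radius ** 2)
        if disc > 0.0:
            t = -b - np.sqrt(disc)
            if 1e-4 < t < dist:          # hit lies between the point and the light
                return True
    return False

def shade(point, normal, albedo):
    """Direct lighting only: Lambert term, zeroed out if the light is blocked.
    (Unit-intensity light, no distance falloff, for brevity.)"""
    if blocked(point, LIGHT_POS):
        return np.zeros(3)               # fully in shadow
    to_light = LIGHT_POS - point
    to_light = to_light / np.linalg.norm(to_light)
    return albedo * max(np.dot(normal, to_light), 0.0)

# The occluder sits between this point and the light, so this prints black.
print(shade(np.zeros(3), np.array([0.0, 1.0, 0.0]), np.array([0.8, 0.8, 0.8])))
```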
In contrast to that technique you have "radiosity", where the illumination at each point on a surface is calculated based on its environment. Radiosity is better at rendering diffuse light, ray tracing is better at rendering reflected light.
The underlying difficulty is that illumination is a maddeningly combinatorial problem. In principle, every part of every object in a scene contributes illumination to every part of every other object. The light falling on a desk lamp from the LEDs of a clock also illuminates the desk itself, and so on. Radiosity and ray tracing attempt to solve those problems by making simplifying assumptions and performing a subset of the calculations necessary to illuminate and render a scene completely faithfully. In principle, a more proper ray tracing algorithm would be to send out a huge number of rays in every direction for every image-field ray, then follow each of those through their evolution, sending out yet more for the surfaces each falls on, and so on. But that's far too computationally intensive (it's an O(n!) level problem).
Path tracing is similar to these techniques but makes different simplifying assumptions. The most important aspect is that instead of deterministically calculating everything in a specific way, it uses a Monte Carlo method of statistically sampling different "paths" and then using the data to estimate the resulting illumination/image rendering. Path tracing results in a compromise between ray tracing and radiosity, being able to render both reflective and diffuse light well, though it has its own shortcomings.
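Concretely, the Monte Carlo estimate it uses for the light leaving a point, with N directions ω_k drawn from some sampling density p, looks like:

$$L_o(x,\omega_o) \approx L_e(x,\omega_o) + \frac{1}{N}\sum_{k=1}^{N} \frac{f_r(x,\omega_k,\omega_o)\, L_i(x,\omega_k)\,(\omega_k\cdot n)}{p(\omega_k)}$$

so the choice of p (which paths you prefer to sample) is where most of the engineering effort goes.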
This sounds somewhat wrong. Ray tracing is just another term for path tracing, as paths are made up of traced rays. A naive camera-to-light Whitted ray tracer is still a sampling path tracer, but with a bias towards a specific subset of paths, namely those which follow specular bounces from the eye to the light or those which have one diffuse bounce.
Much of the progress in photorealistic rendering has come from improving the techniques used to sample paths. Bidirectional path tracing samples paths by tracing them from both the light and the eye and performing a technique called multiple importance sampling in order to weight them appropriately. This allows the algorithm to pick up more light paths that would otherwise be hard to reach, such as those which form caustic reflections.
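The usual weighting here is Veach's balance heuristic: a path x sampled by strategy s (eye sub-path, light sub-path, ...) with sampling densities p_t under the various strategies gets weight

$$w_s(x) = \frac{p_s(x)}{\sum_t p_t(x)}$$

so that paths several strategies could have produced aren't double-counted.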
One of the more recent developments on this front is the use of the "specular manifold", which allows a path sampler to "walk" in path space in a way that captures even more partially-specular paths. (See Wenzel 2013.) This technique allows efficient sampling of the main light transport paths in scenes where a specular transparent surface surrounds a light source, e.g. a light bulb casing.
Edit: for this specific hardware, it sounds like they are using a hybrid approach, so they may very well be doing a basic eye-to-light ray tracing algorithm and then using raster for diffuse surface shading.
I thought I was clear that path tracing and ray tracing shared similarities. But ultimately they differ in several key ways.
For example, ray tracing tends to be deterministic, and it starts at the view frame and moves toward the scene and then light sources. Path tracing has no such constraint. Modern path tracing techniques use a combination of ray-tracing-like paths as well as paths beginning from light sources (gathering rays and shooting rays, in the parlance of the field).
In principle you could consider path tracing to be some sort of specialized subset of ray tracing (or vice versa for that matter) but in practice it is different enough in its specifics and implications that it makes more sense to treat it as merely related.
This might be a bit of a tangent, but this is one of the problems where it matters which parameter you are looking at.
While it is O(n!) in relation to reflections, a real physical rendering is a 'mere' O(n) in relation to volume (or O(n^3) in relation to the side length of the cubic volume we want to simulate)! Why? We simply create a grid of roughly 200nm volumes and run a wave simulation for the light. Or O(n log n) if we want to use an FFT-based simulation.
Naturally this consumes horrendous amounts of memory and computational power, simply because the sizes of the spaces humans are interested in are so massive compared to the wavelength of light.
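To put a rough number on that (my example: a 5 m cubic room at the 200 nm grid spacing mentioned above):

$$\left(\frac{5\ \mathrm{m}}{200\ \mathrm{nm}}\right)^3 = \left(2.5\times10^{7}\right)^3 \approx 1.6\times10^{22}\ \text{cells}$$

i.e. tens of zettabytes even at one byte per cell, before you simulate a single wavefront step.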
But we might eventually move in this direction; maybe we'll even live to see it.
They should have showed the car moving. You'd have seen the reflection slide up the hood in the raytraced demo.
It takes A LOT of work and planning to get light and shadow correct in a raster setting. Most AAA games don't bother. Raytracing makes it easy to get them right, which will make all the difference in the world.
I don't think it _is_ super obvious. Rather, I think the goal is to properly fill in the scene with the very subtle clues we unconsciously look for.
I had a similar experience working a bit with Radiance years ago. The output looked mediocre at best, but it was reporting a real view of the scene; its claim to fame is that it outputs real energy values in physical units (like energy per unit solid angle, or luminance, or such). But despite the cartoony models I provided, it did demonstrate subtleties that weren't hacks (needed for this application), things that you wouldn't think to or bother implementing directly.
This? This captures a lot of the needed visual effects faster and more correctly, without crazy hacks (or so it seems - it does say hybrid rendering).
I think that's because they used poor art assets. Most games look far more pleasing and realistic than those examples while using traditional rasterization pipelines, mostly on the strength of the artists and an offline prebaking stage.
There's a reason Nvidia employs top CG talent for their hardware demos. A pretty demo sells more than an ugly one, even if they essentially pull off the same effect.
Today's mobile devices, especially the higher-end ones, are quite powerful. They offer more processing power, memory and storage than laptops did just a few years ago, and desktops just a few years before that. So I don't think that "doing it on mobile" is really that powerful of an argument any longer.
"... the potential that these technologies have to revolutionize the user experience from mobile and console gaming to virtual and augmented reality."
I've been interested in ray-tracing since the early '90s, and I'm glad it's finally coming to real-time, but this isn't going to "revolutionize" shit. It's going to make 3D games and VR slightly prettier than they were. It's not going to enable new styles of gameplay or new modes of interaction. We will never again see anything like the enormous forward leaps in realtime graphics that happened during the '90s.
I'm also a bit put off by their comparison showing that PowerVR has better reflections, shadows, and transparency than a raster engine with reflections and some shadows turned off and a very poor choice of glass-transparency filter.
On another look, I don't even know what they're going for with the shadows. The rasterized image has "NO SHADOWS" printed right between the shadows of a building and a telephone wire, and their hybrid render has the light from the diner windows casting shadows across outside pavement in broad daylight. Bwuh?
So what are the API and libraries like? Is there any sort of built-in OGL fallback, or do devs need to write two completely different renderers? Are there standards relating to this; are any other vendors going to be implementing the same API? Is this new hardware compatible with the older Caustic2 cards?
It never fails to amaze me how powerful raytracing is. Last year I took an "Image Synthesis" class and did a quick presentation about a post I'd seen here on HN about a raytracer in 1337 bytes [0]. It is amazing how such a small program can generate an image with depth of field, shadows and texture.
I see headlines like this and I'm like "are they still in the driver dark ages? Yep? Call me when they stop acting like a 2 year old and play ball in Mesa"
This brings back some memories. Used to do some experiments using POV Ray back in the day.
I remember how slow the process was, it could take several hours/days to generate an image full of reflections, but in the end the results were usually stunning...
Before it was renamed "povray", it was called "dkbtrace". I printed the entire dkbtrace source and studied it, to see how a real Ray Tracer worked; and through it, I learned how efficient vtbls work (it was 1990 C, and though C++ was already starting to become visible and popular, it was still "that new language that may or may not become popular" - dkbtrace implemented all the OO inside).
I think the sample frames demonstrate that hybrid ray tracing is less realistic than pure path tracing. I hope that someone figures out how to make stuff like the Brigade 3 demos based just on feeding geometry and textures to hardware.
I'd wager that the current rasterising pipeline is more flexible than one based on raytracing - capable of generating a range of styles, not just photorealistic ones. And therefore having an extra dedicated chip for raytracing seems uneconomical.
The comparison examples given in the article were slightly ridiculous. How does a non-reflective car represent 'traditional' rendering? Look at any great AAA game and you'll see reflections, refractions, radiosity, etc., that are all pretty amazing. I don't think that general demand will be there for alternative rendering hardware for quite a while.
> I'd wager that the current rasterising pipeline is more flexible than one based on raytracing - capable of generating a range of styles, not just photorealistic ones
Nope. I'd say the "raytracing pipeline" really does replace only the rasterization step, and once the triangle to draw is "found", you do whatever shading/lighting/fx/fragshader you want to draw it with? Wouldn't this be the most sensible approach?
Rasterization in layman's terms really is just "figure out which triangle, if any, is 'hit' at this pixel", so it's a historically much faster and very neat hack to avoid tracing a ray.
But you don't get cheap soft shadowing / ambient occlusion / reflection-refraction, and you'd need to do occlusion culling separately for current-gen "complex" scenes to avoid a draw call for all kinds of hidden objects. That's where, at some point, raytracing as it becomes more feasible also becomes much more attractive. Potentially also reducing geometry-LODing headaches, etc.
Two thoughts: I would love to see a video of a dynamic environment (i.e. actors/objects moving in a scene). Also, how long before cryptocurrency miners use these to gain a step-function advantage over current GPUs?
I thought Wolfenstein 3D was an example of ray tracing but reading this article and Wikipedia it seems to be all about lighting effects. What do I misunderstand?
This is incredible! Mobile GPU does 300 million rays per second without using any shading GFLOPS? This is the GPU that is going to be on the next iPhone, right? Brigade 3 does 750 million rays per second on Nvidia GTX 580 with full power. I just wonder what this thing could do when scaled up.
Well, if AMD had taken the need to get their GPGPU offering out the door seriously or if they had later taken the need for CUDA compatibility seriously then Nvidia would have had to compete.
Alas, we're talking about a company that doesn't even think it's important for their installer to reliably replace existing drivers. The result: I get to pay through the nose for my predecessor's CUDA lessons. Yay.
It works best on Chrome, Firefox and Safari:
https://plus.google.com/u/0/+BenHouston3D/posts/DYq2RKJENC5
Here is another example:
https://twitter.com/exocortexcom/status/443538733661704192
I can only see raytracing becoming more popular.