Show HN: Monte Carlo ray tracer in Rust (github.com/dalamar42)
168 points by Dalamar42 on April 19, 2020 | 33 comments



Hi HN, first time posting here!

I started this project last year when I was looking for something fun to learn Rust with and I had Peter Shirley's excellent Ray Tracing in a Weekend series of books recommended to me. In the process I got interested in learning ray-tracing for its own sake so I decided to finish all three books and I am now considering continuing this further on my own.

I am sharing in case this is useful to others who are on a similar path to where I was.


You may want to look at this article for further inspiration: https://bheisler.github.io/post/writing-gpu-accelerated-path...


Thanks for sharing. I'll have a look


Really beautiful results. It’s interesting to see ray tracing’s durability. In the 90s it was so prohibitively expensive (computationally) that a whole bunch of techniques were developed to achieve realism in somewhat... hackier ways. Now we’re seeing GPU support, etc.


This is my first attempt at graphics programming and as I went through it I was quite surprised by the simplicity of ray tracing. Before this, I hadn't imagined you could get beautiful pictures with so little code. I found the whole thing quite elegant.


The same thing applies to lots of fields of computation. In fluid dynamics, for instance, a direct simulation of the Navier-Stokes equations is quite straightforward and will produce the most accurate solution. All the complicated parts are layers and layers of tricks to save large amounts of compute time with (hopefully) negligible effects on the ultimate accuracy.


This is often true in statistics also. It's high time Monte Carlo was brought earlier into the curriculum. It's the gift that keeps on giving.
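To make that concrete, here's a minimal Monte Carlo sketch in Rust: estimate pi by sampling random points in the unit square and counting how many land in the quarter circle. The inlined xorshift PRNG is only there to keep the example dependency-free; none of this is from the linked project.

    // Monte Carlo estimate of pi: sample points uniformly in the unit square
    // and count how many fall inside the quarter circle of radius 1.
    fn main() {
        let mut state: u64 = 0x9E3779B97F4A7C15;
        let mut rand01 = move || {
            // xorshift64* step, mapped to [0, 1)
            state ^= state >> 12;
            state ^= state << 25;
            state ^= state >> 27;
            (state.wrapping_mul(0x2545F4914F6CDD1D) >> 11) as f64 / (1u64 << 53) as f64
        };

        let n = 1_000_000;
        let mut inside = 0u64;
        for _ in 0..n {
            let (x, y) = (rand01(), rand01());
            if x * x + y * y < 1.0 {
                inside += 1;
            }
        }
        // The quarter circle has area pi/4, so the hit ratio estimates pi/4.
        println!("pi ~ {}", 4.0 * inside as f64 / n as f64);
    }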


Since you seem knowledgeable about this -

Can you explain the difference between 3D graphics generated with Ray Tracing and without? My understanding is that Ray Tracing is essentially the brute-force solution to the problem of lighting. We developed "hackier" approximations which run much quicker so our games can run at reasonable framerates.

Is true Ray Tracing really that much better than the approximations we've developed? Is it really worth the significant drop in performance? As an old PC gamer, I've often revered optimizations and approximations as not only necessary, but undeniably clever and (in my eyes) cooler.

Today it seems there's a shift in the other direction.

Now that our GPUs can just barely run Ray Tracing in modern games, people are so hyped to use it. For me, it is strange that people would clamor over graphics which are less optimized.

Is the visual benefit that high? Does a game that runs on lots of hardware like Minecraft really need Ray Tracing that requires very specific GPUs?

I guess for me Ray Tracing feels like finding the primes by naively running divisions on every number less than the one you're looking at. It's the easy solution that is terribly slow. Existing lighting engines are (perhaps) like using a sieve?
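(To make the analogy concrete, here's a quick sketch of both approaches in Rust; it's purely illustrative and has nothing to do with the OP's tracer.)

    // Trial division checks every candidate divisor (the "brute force"
    // approach), while the sieve of Eratosthenes crosses off composites
    // in bulk and is far cheaper for the same result.
    fn primes_trial_division(limit: usize) -> Vec<usize> {
        (2..=limit)
            .filter(|&n| (2..n).all(|d| n % d != 0))
            .collect()
    }

    fn primes_sieve(limit: usize) -> Vec<usize> {
        let mut is_prime = vec![true; limit + 1];
        for p in 2..=limit {
            if is_prime[p] {
                let mut m = p * p;
                while m <= limit {
                    is_prime[m] = false;
                    m += p;
                }
            }
        }
        (2..=limit).filter(|&n| is_prime[n]).collect()
    }

    fn main() {
        assert_eq!(primes_trial_division(30), primes_sieve(30));
        println!("{:?}", primes_sieve(30));
    }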


The "hacks" to get semi-decent lighting are not free. Every time you want another lighting effect you need another hack. The hacks are getting more elaborate and complicated as people want a better looking result. It can be more difficult for artists to work with the current system since everything you see is an artificial hacky construction.

We're now approaching the point where raytracing is the simpler, easier, faster alternative. It's also great for artists because the results are more predictable. Raytracing gives a better result, or outright makes things like good shadows, transparency, reflection, and global illumination possible.

It might take a while, but the current way of doing things is a dead end. If a triangle is smaller than a pixel, you're basically just raytracing anyway. That's also becoming a more common thing these days. There will continue to be hybrid solutions for a while, but as raytracing takes over certain portions of the rendering, which it is good enough for already, there is no reason to go back.


It's not just that ray tracing looks better than rasterization for the same scenario, but that some scenarios are simply impossible / impractically complicated to render with only rasterized graphics. Just one example is specular reflections with non-screen space content and a dynamic environment.

Rasterized graphics can do specular reflections quite well as long as the reflection only shows things that are currently in view. One can then perform a limited form of ray tracing in the depth buffer to detect which part of the screen is visible where in the reflection.

However, as soon as we want to reflect things outside of the screen it gets tricky, as we can't simply perform this screen-space ray tracing. Instead, we have to rely on pre-baked reflection map textures. This works well enough for static objects, like the environment, but you can't see dynamic content, like players, as they can't be pre-baked into the texture. It's also useless when there are no static objects at all, as in Minecraft, where there's no such thing as a static environment -- every block can change dynamically.

And this is basically where rasterization hits its limit for specular reflections. There are of course workarounds, like rendering the scene a second time from the mirror's perspective and then combining the view render with the mirror render, but you can imagine it gets prohibitively expensive quite fast as you add a few more reflective objects to the scene. This method also doesn't work for glossy reflections -- that's even more complex.
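To make the depth-buffer trick above concrete, here is a rough CPU-side sketch in Rust (in practice this lives in a shader; every type, name, and buffer layout here is made up for illustration):

    // Step a reflected ray through the depth buffer and stop where it
    // passes behind recorded geometry.
    struct DepthBuffer {
        width: usize,
        height: usize,
        depth: Vec<f32>, // view-space depth per pixel
    }

    impl DepthBuffer {
        fn depth_at(&self, x: usize, y: usize) -> f32 {
            self.depth[y * self.width + x]
        }
    }

    /// March from (x, y, z) along (dx, dy, dz) in screen space; return the
    /// first pixel whose stored depth is closer than the ray, i.e. a hit.
    fn screen_space_trace(
        buf: &DepthBuffer,
        mut x: f32, mut y: f32, mut z: f32,
        dx: f32, dy: f32, dz: f32,
        steps: usize,
    ) -> Option<(usize, usize)> {
        for _ in 0..steps {
            x += dx;
            y += dy;
            z += dz;
            // Falling off the screen is exactly the failure case described
            // above: off-screen content simply cannot be found this way.
            if x < 0.0 || y < 0.0 || x >= buf.width as f32 || y >= buf.height as f32 {
                return None;
            }
            let (px, py) = (x as usize, y as usize);
            if buf.depth_at(px, py) < z {
                return Some((px, py)); // ray passed behind visible geometry
            }
        }
        None
    }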

So this is just one example of how rasterized graphics limit us -- we can't have more than a few reflective objects in a scene with dynamic geometry, which is a very sensible thing to want!

In my opinion, ray tracing really is that much better than the approximations we've developed. Also consider how much faster / cheaper it would be for a studio to create new graphics engines when you only have to write 1000 lines of ray tracing code instead of 100'000 lines of rasterization hacks (for a worse-looking result!).


Hate to tell you this but ray tracing gets just as complicated. You’re just shifting the realism bar much higher, but certain effects are always just out of reach in a given time budget and require specialised solutions and hacks to achieve.


In a given time budget, maybe, but that wasn't really part of the question I answered. The question asker compared ray tracing to brute-force prime finding vs. using a sieve -- and that's just not how it is.

Also, I'm not sure about "just as complicated". Is rendering refraction with dispersion "just as complicated" to achieve in a given time budget with ray tracing as with rasterization? I must admit I'm not well versed in modern rasterization hacks, but as far as I know that is simply impossible to achieve, regardless of how much time you have.


> Also consider how much faster / cheaper it would be for a studio to create new graphics engines when you only have to write 1000 lines of ray tracing code instead of 100'000 lines of rasterization hacks (for a worse-looking result!).

You were talking about games here, no? That's the ultimate hard time constraint.

1000 LOC gets you a very basic path tracer, which isn't really going to be good for very much.

Your dispersion example is interesting - you can't really do it correctly with rasterization, no, although you can do a distorted background texture lookup with individually offset/blurred RGB channels. If you want to do rough glass you can just increase the blur amount. Not correct but looks 'good enough' in a lot of cases.

With ray tracing you can just trace 3 rays (one for each of red, green and blue). Simple! Except can you really afford 3 rays? Also, how much do you offset them by? You could use Cauchy's formula with real refractive indices, but then you're going to get ugly separation between the channels. You could sample the whole visible spectrum and use temporal accumulation to build up the correct color, but now you've got color noise. What happens if you want to simulate rough glass? That's going to be very noisy indeed.
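(For reference, a minimal sketch of that three-ray idea in Rust, using Cauchy's equation for the per-channel index of refraction. The BK7-like coefficients and the toy setup are illustrative only.)

    // Refract once per colour channel, with a per-wavelength index of
    // refraction from Cauchy's equation n(lambda) = A + B/lambda^2.
    // Wavelengths are in micrometres.
    fn cauchy_ior(wavelength_um: f64) -> f64 {
        let (a, b) = (1.5046, 0.00420); // roughly BK7 glass
        a + b / (wavelength_um * wavelength_um)
    }

    /// Snell's-law refraction of unit vector `d` about unit normal `n`.
    /// Returns None on total internal reflection.
    fn refract(d: [f64; 3], n: [f64; 3], eta: f64) -> Option<[f64; 3]> {
        let cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]);
        let sin2_t = eta * eta * (1.0 - cos_i * cos_i);
        if sin2_t > 1.0 {
            return None; // total internal reflection
        }
        let cos_t = (1.0 - sin2_t).sqrt();
        let k = eta * cos_i - cos_t;
        Some([
            eta * d[0] + k * n[0],
            eta * d[1] + k * n[1],
            eta * d[2] + k * n[2],
        ])
    }

    fn main() {
        let d = [0.0, -1.0, 0.0]; // incoming ray, straight down
        let n = [0.0, 1.0, 0.0];  // surface normal
        // One refracted ray per channel: R 0.65 um, G 0.55 um, B 0.45 um.
        for (name, wl) in [("R", 0.65), ("G", 0.55), ("B", 0.45)] {
            let eta = 1.0 / cauchy_ior(wl); // air -> glass
            println!("{name}: {:?}", refract(d, n, eta));
        }
    }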

What about shadows from the glass? You can't afford to render caustics to do it correctly, after all. Do you just ignore them? That'll look weird. Use a Fresnel-weighted transparent shadow? Probably, but now you have to handle that correctly everywhere, and running a shader for shadow rays is expensive too, so maybe you have to special-case that situation so most of your scene lands on the happy path.

My point is that anyone can write a basic path tracer in a weekend that will correctly simulate light transport given an infinite amount of time. Writing a renderer that will produce an image of a given quality in a given amount of time, incorporating a list of effects that an art director has decided are essential to the look of your product, is still a very hard task. It's simpler in a lot of ways, but it also has to handle a lot of other complexities for the things that aren't possible in a rasterizer but are still very expensive to compute in a ray tracer.


You make many good points, but I still feel like you're really underselling the potential of simple ray tracing. Hardware acceleration of intersection testing and BVH construction/traversal is only going to become more prevalent, and with temporal reprojection and spatial denoising methods, many kinds of noise can be mitigated. It doesn't even have to be that complex -- "An Efficient Denoising Algorithm for Global Illumination" by Mara et al. is only 7 pages (more like 5, really).

Regarding spectral path tracing and colored noise, Wilkie et al. have written a good paper on this, "Hero Wavelength Spectral Sampling", if you're interested.


In your sample images, why do you get some aliasing just on the white, skylight-like rectangles and seemingly nowhere else?


That’s pretty common, because the area lights are often very bright. So the subpixel values are either 0 or white hot (and much brighter than 1.0). If you model the light sources with actual physical units and do some sort of exposure process for the “film”, you can get a more natural result.


Do you happen to have a link to any reference material or paper explaining the technique you are describing or a term for it I can search for? I knew why this aliasing was happening, but I wasn't quite sure how to fix it.


I’d actually search for “tone mapping” for the camera / film part.

For physical light units you want to look for “lumens”, “lux” and “IES Lights” (though you probably don’t need to care about the directional profile). It’s super handy to be able to plug in real units sometimes like “oh, this lightbulb is 500 lumens, I’ll use that”.
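To illustrate, here is a tiny sketch of that exposure-then-tone-map step in Rust, using the simple Reinhard operator. The constants are arbitrary and real renderers use fancier curves; nothing here is from the OP's code.

    // Scale physical radiance by an exposure factor, compress with
    // Reinhard, then gamma-encode for display.
    fn tone_map(radiance: f64, exposure: f64) -> u8 {
        let v = radiance * exposure;     // "film" exposure
        let v = v / (1.0 + v);           // Reinhard: maps [0, inf) into [0, 1)
        let v = v.powf(1.0 / 2.2);       // gamma encode
        (v * 255.0 + 0.5) as u8
    }

    fn main() {
        // A light source a hundred times brighter than white no longer
        // clips to a hard edge; it rolls off smoothly instead.
        for r in [0.5, 1.0, 10.0, 100.0] {
            println!("radiance {r:>6}: pixel {}", tone_map(r, 1.0));
        }
    }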


Thanks, I'll look them up


I guess I assumed the apparent AA elsewhere was a result of some sort of post-processing but what you're describing makes more sense, thanks.


The basic idea for the anti-aliasing [0] is to take many samples inside every pixel and average them out, which smooths out the edges. The problem with the lights, as the other commenter explained, is that even with this the light is so bright that there is still a hard edge between adjoining pixels.

[0] https://raytracing.github.io/books/RayTracingInOneWeekend.ht...
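For anyone following along, a rough sketch of that per-pixel sampling loop in Rust. The trace function and the inlined xorshift PRNG are stand-ins, not the project's actual code:

    // Jitter several sample positions inside each pixel and average the
    // returned colours.
    fn trace(u: f64, _v: f64) -> f64 {
        // Illustrative scene: a hard vertical edge at u = 0.47.
        if u < 0.47 { 0.0 } else { 1.0 }
    }

    fn render_pixel(px: u32, py: u32, width: u32, height: u32, samples: u32) -> f64 {
        let mut state = (px as u64) << 32 | py as u64 | 1;
        let mut rand01 = move || {
            state ^= state << 13;
            state ^= state >> 7;
            state ^= state << 17;
            (state >> 11) as f64 / (1u64 << 53) as f64
        };
        let mut sum = 0.0;
        for _ in 0..samples {
            // Jitter the sample position within the pixel footprint.
            let u = (px as f64 + rand01()) / width as f64;
            let v = (py as f64 + rand01()) / height as f64;
            sum += trace(u, v);
        }
        sum / samples as f64 // averaging smooths the edge into a gradient
    }

    fn main() {
        for px in 0..8 {
            print!("{:.2} ", render_pixel(px, 0, 8, 1, 64));
        }
        println!();
    }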


Nice work!


Thank you!


I'll chime in with my own ray tracer! https://gitlab.com/JoJoZ/futhark-tracer

It's a spectral path tracer, written together with my friend as part of our master's thesis. It's implemented in Futhark, a new, purely functional, ML-like array language for GPU programming. It makes GPGPU programming quite easy, and as far as we can tell, the optimizations are really good!


For those looking to go beyond Pete’s “in a weekend” book, the full text of the 3rd edition of PBRT is freely available online:

http://www.pbr-book.org/3ed-2018/contents.html


Great work.

You might be interested in checking out the implementations of pbrt in Rust too:

https://github.com/wahn/rs_pbrt


That's interesting. Thanks, I'll have a look


How do you get it so clean? My renders have artifacts no matter how high I crank up the fidelity: https://github.com/brundonsmith/raytracer

Edit: I just zoomed way in and I can see ever so slight bits of speckling :) Maybe I'm just not turning the bounce-ray count up high enough...


The author is actually preferentially sending samples towards "attractors" [0] for lights and dielectrics. Doing so massively reduces variance. My favorite example is Figure 5 in this old paper [1] (I've never actually read Pete's ray tracing in a weekend, so I don't know if this is in there).

The most common variance reduction technique, though, would be to do direct lighting (aka "next event estimation" if you want to be overly pedantic). Your lighting in that image looks like it's just a "Hi, I happen to be emitting tons of energy" sphere in space, which will cause a lot of noise. Alternatively, if you are sampling it, you are likely not sampling the sphere as well as you could. You'll want to sample the sphere following the setup in Figure 2 of Pete's "Direct Lighting" paper [2] (which, as a reminder, is currently freely accessible at the ACM during Covid-19).

[0] https://github.com/Dalamar42/rayt/blob/fc57fa4afc080a578e21e...

[1] http://graphics.stanford.edu/~boulos/papers/gi06.pdf

[2] https://dl.acm.org/doi/10.1145/226150.226151
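For concreteness, here is roughly what that sphere sampling looks like in Rust, following the cone-sampling setup from the paper. The names and structure are mine for illustration, not the OP's code:

    // Instead of sampling the whole hemisphere, sample a direction inside
    // the cone of half-angle theta_max that the sphere light subtends.
    fn sample_sphere_light(
        center_dist: f64, // distance from shading point to sphere centre
        radius: f64,
        u1: f64, // two uniform random numbers in [0, 1)
        u2: f64,
    ) -> (f64, f64, f64) {
        // Cosine of the half-angle of the cone subtended by the sphere.
        let cos_theta_max =
            (1.0 - (radius * radius) / (center_dist * center_dist)).sqrt();
        // Uniformly sample a direction within that cone (local frame,
        // z pointing toward the sphere centre).
        let cos_theta = 1.0 + u1 * (cos_theta_max - 1.0);
        let sin_theta = (1.0 - cos_theta * cos_theta).sqrt();
        let phi = 2.0 * std::f64::consts::PI * u2;
        (phi.cos() * sin_theta, phi.sin() * sin_theta, cos_theta)
    }

    /// The pdf of the cone sampling above: 1 / (2 pi (1 - cos theta_max)).
    fn pdf(center_dist: f64, radius: f64) -> f64 {
        let cos_theta_max =
            (1.0 - (radius * radius) / (center_dist * center_dist)).sqrt();
        1.0 / (2.0 * std::f64::consts::PI * (1.0 - cos_theta_max))
    }

    fn main() {
        let dir = sample_sphere_light(10.0, 2.0, 0.3, 0.7);
        println!("direction {:?}, pdf {}", dir, pdf(10.0, 2.0));
    }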


The 3rd book [0] is primarily about implementing an MC ray tracer and then adding the techniques in the second paper you linked. That's what I've used in my code as well. I just skimmed through the paper and I think the book covers most of the material from the paper with the exception of spatial subdivision. I will have a more careful read later to see exactly what my implementation is missing from this.

[0] https://raytracing.github.io/books/RayTracingTheRestOfYourLi...



Hey, could the readme provide a list of how far along this project got (e.g. material types, object imports, textures, optimizations for collisions)?


Hey. At the moment what is implemented is exactly what you are going to find in the three books of the Ray Tracing in a Weekend series. If I can find the time to continue this project, I am going to make a git tag for the current version for people who just want to use this while going through the books, and I will also add a changelog for any changes I make from that point onward.



