In theory, upping the resolution will also remove the need for spatial anti-aliasing, though it's not efficient. Distributing rays in time feels a bit different because what you're really trying to do is integrate over an interval of time. So I'm not sure I agree that they are the same thing.
Remember that to remove aliasing (spatial and temporal), you want no frequencies above the Nyquist rate.
Typically, anti-aliasing averages multiple samples across the width of the pixel (MSAA), or across the timespan of the frame, to eliminate this.
That is in fact a rectangular function in the time/spatial domain, which is wrong: a rectangular function only reduces, rather than eliminates, the frequency components above the Nyquist rate, and it also attenuates frequencies near but below the Nyquist rate (leading to blur).
In fact, you want a rectangular frequency domain function, which in the spatial/time domain is a sinc function.
This isn't done in real cameras because it is technically too hard, but in 3D rendering, it should be done, and will produce smoother animations.
I have never seen anyone do this, but results should theoretically be better. I'd also like to see a freeze frame of sinc-filtered (rather than box-filtered) motion blur - it would probably look very weird, even though it looks good at the playback rate.
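For what it's worth, here's a rough C++ sketch of the weighting I'm describing. `renderAt` is just a placeholder for "render the scene at shutter time t", not any real API, and a production version would window the sinc (e.g. Lanczos) rather than truncate it:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Placeholder: "render the scene at shutter time t", returning a flattened image buffer.
using RenderFn = std::function<std::vector<float>(double)>;

constexpr double kPi = 3.14159265358979323846;

// Normalized sinc with sinc(0) = 1.
double sinc(double x) {
    if (std::abs(x) < 1e-9) return 1.0;
    return std::sin(kPi * x) / (kPi * x);
}

// Weight temporal samples around frame time t0 with a sinc centered on the frame,
// instead of the plain (box-filter) average. Samples are spread over `radius`
// frame periods on each side of the frame.
std::vector<float> sincFilteredFrame(const RenderFn& renderAt, double t0,
                                     double frameDt, int samplesPerFrame,
                                     int radius) {
    const int n = samplesPerFrame * 2 * radius;
    std::vector<float> accum;
    double weightSum = 0.0;
    for (int i = 0; i < n; ++i) {
        // Sample offset in seconds, spread over [-radius, +radius] frame periods.
        const double offset = ((i + 0.5) / n - 0.5) * 2.0 * radius * frameDt;
        const double w = sinc(offset / frameDt);   // a box filter would use w = 1
        const std::vector<float> img = renderAt(t0 + offset);
        if (accum.empty()) accum.assign(img.size(), 0.0f);
        for (size_t p = 0; p < img.size(); ++p) accum[p] += float(w * img[p]);
        weightSum += w;
    }
    for (float& v : accum) v /= float(weightSum);  // normalize; sinc weights can be negative
    return accum;
}
```

Note that the sinc's support extends into neighboring frames' shutter intervals and has negative lobes, which is exactly what a physical shutter can't reproduce - hence real cameras being stuck with (roughly) box filtering in time.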
They aren't different, just different dimensions. If you increase the frame rate you would need less temporal anti-aliasing. You can actually see noise artifacts in the motion blur of the first Ice Age movie (and many other places to be fair, like the slices in the transitions on The Big Bang Theory).
I assume you're thinking of Kevin Egan's paper from 2009 [1]? There have been some follow-ups since, but the basic idea is "you can filter the hell out of it, kind of". Sadly, while these look okay, such filtering is still prone to over-blurring. The frames described actually focus on hair, which is a perfect example of what wouldn't work well in those filtering systems, and which requires enough samples for anti-aliasing that motion blur comes "for free".
Engineering a viable cheap, high-quality solution for production is the challenge being spoken of, of course. Research work only gives a starting point for that, and in this case it's other parts of the pipeline that were optimized to support the needs of motion blur.
They don't say how, but I assume that, like the rest of us, they moved to a motion-blur-friendly, time-based BVH. I'm surprised this only recently came up for Blender!
> After a few days of investigation, Sergey improved the layout of hair bounding boxes for BVH structure. What does this mean? A more in-depth explanation is coming soon.
Sounds more like they optimized their BVH implementation.
Sorry if I wasn't clear. Using a time-dependent BVH (where instead of two vec3 for the corners you store four, one pair for t=0 and one for t=1) is an "optimization". Given the later sentence:
> After that, he applied the same optimization to triangles (for actual character geometry)
it suggested that the bug was just using a single bounding box covering the whole motion rather than tracking it (which is correct, but slow).
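To make the "four vec3 instead of two" idea concrete, here's a minimal sketch (my own naming, not Cycles' actual data structures): the node stores bounds at t=0 and t=1 and lerps them to the ray's time before the usual slab test.

```cpp
// Illustrative only - not Cycles' real node layout.
struct Vec3 { float x, y, z; };

// A motion-blur-aware BVH node: two bounding boxes instead of one,
// for shutter open (t = 0) and shutter close (t = 1).
struct MotionBVHNode {
    Vec3 boundsMin0, boundsMax0;   // bounds at t = 0
    Vec3 boundsMin1, boundsMax1;   // bounds at t = 1
    int  left, right;              // child indices (or a primitive range for leaves)
};

inline float lerp(float a, float b, float t) { return a + t * (b - a); }

// Interpolate the node's bounds to the ray's time before the usual slab test.
// A single static box covering the whole motion is also correct, but much looser,
// so far more nodes and primitives get visited per ray.
inline void boundsAtTime(const MotionBVHNode& n, float time, Vec3& outMin, Vec3& outMax) {
    outMin = { lerp(n.boundsMin0.x, n.boundsMin1.x, time),
               lerp(n.boundsMin0.y, n.boundsMin1.y, time),
               lerp(n.boundsMin0.z, n.boundsMin1.z, time) };
    outMax = { lerp(n.boundsMax0.x, n.boundsMax1.x, time),
               lerp(n.boundsMax0.y, n.boundsMax1.y, time),
               lerp(n.boundsMax0.z, n.boundsMax1.z, time) };
}
```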
Looking at the diff, it doesn't really look that way (I only had a quick glance, though).
Looks like they're not using a swept BVH (interpolated time segments) for motion blur, which seems weird; instead, leaf nodes seem to be duplicated for each time sample.
So it looks like they're just making savings by skipping motion segments outside the current ray time. I'd assume the bboxes of the parent inner nodes are all much bigger than they need to be (as they're not swept), and so intersection performance is still quite bad compared with how it could be?
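Roughly what it looks like to me, as a sketch with made-up names (not what the code actually calls things): each duplicated leaf carries the time segment it was built for, and traversal just rejects leaves whose segment doesn't contain the ray's time, while the inner nodes above still have to bound all segments.

```cpp
// Sketch only - hypothetical names, not the actual Cycles types.
struct TimeSegmentLeaf {
    float tMin, tMax;            // time segment this leaf's static bounds were built for
    int   primBegin, primCount;  // primitives duplicated into this segment
};

// Traversal-side test: skip a duplicated leaf whose segment doesn't contain the
// ray's time. This saves primitive tests, but the parent inner nodes still have
// to bound every segment, so their boxes stay looser than swept (interpolated)
// bounds would be.
inline bool leafRelevantForRay(const TimeSegmentLeaf& leaf, float rayTime) {
    return rayTime >= leaf.tMin && rayTime <= leaf.tMax;
}
```

With swept (interpolated) inner bounds you'd get the tighter boxes back, which is presumably the further win being left on the table here.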
In every blog post I see about 'making X Y times faster', the answer is always 'by doing something that reasonable people would assume we had already tried', and this article is no different. This is just Cycles implementing a crude version of the cutting edge from 15 years ago.