It's generally called "LOD": dynamically changing the level of detail of an object based on its distance from the camera.

It has been a pretty widely used optimization technique for a long time, and it's not specific to Unreal Engine (though I can't find which 3D renderer introduced it first).
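
For anyone who hasn't run into it: the classic form is just "pick one of a few pre-made meshes based on camera distance." A minimal sketch in Python (the class names, thresholds, and meshes here are made up for illustration, not any engine's actual API):

    from dataclasses import dataclass

    @dataclass
    class LODLevel:
        mesh_name: str       # which pre-made mesh to draw
        max_distance: float  # use this level while the camera is closer than this

    def select_lod(levels, distance_to_camera):
        # levels are assumed sorted from highest detail (nearest) to lowest
        for level in levels:
            if distance_to_camera <= level.max_distance:
                return level
        return levels[-1]    # past the last threshold, fall back to the coarsest mesh

    mario_lods = [
        LODLevel("mario_high", 10.0),
        LODLevel("mario_medium", 30.0),
        LODLevel("mario_low", float("inf")),
    ]
    print(select_lod(mario_lods, 25.0).mesh_name)  # -> mario_medium

The catch with this classic approach is that each of those lower-detail meshes traditionally has to be authored by an artist.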




The impression I got was that instead of dynamically switching between a few pre-made models for an asset at specific distances, it could automatically scale a very high-quality, high-poly-count model down to lower quality (that is, generate the lower-quality model automatically from the high-quality one), on the fly or maybe JIT. But I'm not sure if that's accurate, or if that's also old tech.

Edit: See your sibling comment, which came in just after I originally replied here.
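
For a concrete sense of what "generate the lower-quality model automatically" can mean, here's a rough sketch of the simplest decimation scheme, uniform vertex clustering: snap vertices to a coarse grid, merge the ones that land in the same cell, and drop triangles that collapse. This is only an illustration, not Unreal's actual algorithm (modern reducers typically use edge collapses driven by error metrics), and every name and parameter here is made up:

    from collections import defaultdict

    def simplify_by_clustering(vertices, triangles, cell_size):
        # Map a vertex position to the grid cell it falls in.
        def cell_of(v):
            return tuple(int(c // cell_size) for c in v)

        cell_members = defaultdict(list)   # cell -> original vertex positions
        vertex_cell = []                   # original vertex index -> cell
        for v in vertices:
            cell = cell_of(v)
            vertex_cell.append(cell)
            cell_members[cell].append(v)

        # One representative vertex per occupied cell: the centroid of its members.
        new_vertices = []
        cell_index = {}
        for cell, members in cell_members.items():
            cell_index[cell] = len(new_vertices)
            n = len(members)
            new_vertices.append(tuple(sum(axis) / n for axis in zip(*members)))

        # Remap triangles, discarding any that degenerate (corners merged together).
        new_triangles = []
        for a, b, c in triangles:
            ia, ib, ic = (cell_index[vertex_cell[i]] for i in (a, b, c))
            if len({ia, ib, ic}) == 3:
                new_triangles.append((ia, ib, ic))

        return new_vertices, new_triangles

Real pipelines use much smarter error metrics so silhouettes, normals, and UVs survive the reduction, but the shape of the problem is the same: the low-poly versions come from the high-poly source rather than from the artist.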


I was in the back room of Epic MegaGames in '97 when Tim Sweeney showed me his automatic LOD on the Skaarj model. It was pretty basic compared to the current tech, but I think I remember him saying it didn't require lower-poly meshes/maps to be generated by the animators/artists. I don't know if the LOD was pre-baked or not at the time, but given that we were running the first MMX chips, it probably was.


You can see that on Mario himself in Super Mario 64 if you have him run far away from the camera (or fall off an edge).



