
It was my understanding that a large reason why early voxel engines had great performance is that you only draw each pixel once (kinda like how deferred lighting only calculates lighting for each pixel once), eliminating overdraw and therefore avoiding work that was never going to be visible.

But now GPUs can crunch that data super fast (as you say), and memory bandwidth is often the bigger issue. So while voxels still get used in some places, they're often avoided in favour of more compact data formats and rendering techniques.

At least, that’s what it sounded like to me; I’m no graphics expert.




>because you only draw each pixel once

It depends on the algorithm. If you did a heightfield then yes, you could easily eliminate invisible pixels without testing each of them against a z-buffer. However, you could not do complex models with heightfields.

A popular general voxel algorithm in the 90s was "chains", where an object was represented as a sequence of voxels, the position of each encoded as an offset from the previous one. Since the voxels sat on a regular grid, there were only 26 possible offsets. If you did not do perspective projection, you could project all 26 deltas into screen space once up front and compute each voxel's position with 3 additions. With the offset taking 5 bits, you could fit each voxel in a 16-bit word together with a ramp-lit material index for per-pixel lighting. It was a very fast loop even on a CPU without division. On a modern CPU or GPU, of course, this is painfully slow, since each position depends on the previous one, which forces strict ordering on the loop. Memory-bandwidth-wise it is quite nice because of the very compact input data: e.g. you could pack position and texture deltas into 8-bit voxels and get a far more compact representation for a small model than the popular 16/32-byte vertices.
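
To make that concrete, here is a rough sketch of what such a decode loop could have looked like. This is my reconstruction, not any shipped engine's code: all names are invented, and I've added a z-buffer test even though real chain renderers may have relied on draw order instead.

    #include <stdint.h>

    #define W 320
    #define H 200

    typedef struct { int dx, dy, dz; } ScreenDelta;

    /* One entry per grid offset (3x3x3 neighbourhood minus the
       centre = 26), filled once by projecting each grid delta
       into screen space under the orthographic projection. */
    static ScreenDelta delta_tab[26];

    static uint8_t  framebuf[W * H];
    static uint16_t zbuf[W * H];

    /* Each 16-bit voxel: low 5 bits = offset index into delta_tab,
       remaining high bits = ramp-lit material (truncated to 8 bits
       here for a paletted framebuffer). */
    void draw_chain(const uint16_t *chain, int n, int x, int y, int z)
    {
        for (int i = 0; i < n; ++i) {
            uint16_t v = chain[i];
            ScreenDelta d = delta_tab[v & 31u];
            x += d.dx; y += d.dy; z += d.dz;     /* the "3 additions" */
            int o = y * W + x;
            if ((uint16_t)z < zbuf[o]) {         /* depth test */
                zbuf[o]     = (uint16_t)z;
                framebuf[o] = (uint8_t)(v >> 5); /* material index */
            }
        }
    }

The serial dependency is easy to see: x, y and z in each iteration come from the previous one, so the loop cannot be parallelized as written.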


Oh that’s interesting, thanks for the comment! Must read up on it more :)

Do you know how modern voxel algorithms work? Do they just blindly render point clouds to make use of the GPU parallelism and let the z-buffer remove invisible voxels (with a spatial index to render only the ones in view), or is there more sophistication? I assume there are some clever parallel algorithms available nowadays, or?


I am not following voxel developments closely, so I could be wrong, but my impression is that practical implementations focus on the representation and its compression. They all render with some type of raytracing in compute shaders. There have been approaches that generate a triangle mesh from the voxel representation, but I am not sure how far they got.
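
For anyone curious what "raytracing using compute" looks like at the bottom, the per-ray inner loop is typically some variant of 3D DDA grid traversal (Amanatides & Woo). Here's a minimal CPU sketch over a dense grid, purely illustrative: the names are made up, and real implementations march through a sparse, compressed hierarchy (e.g. an octree or brickmap) rather than a flat array.

    #include <math.h>
    #include <stdint.h>

    #define N 64
    extern uint8_t grid[N][N][N];   /* nonzero = solid voxel */

    /* Returns the material of the first solid voxel the ray hits,
       or 0 on a miss. For brevity, assumes no direction component
       is exactly zero. */
    uint8_t trace(float ox, float oy, float oz,
                  float dx, float dy, float dz)
    {
        int x = (int)ox, y = (int)oy, z = (int)oz;
        int sx = dx > 0 ? 1 : -1;
        int sy = dy > 0 ? 1 : -1;
        int sz = dz > 0 ? 1 : -1;
        /* ray distance between successive x/y/z cell boundaries */
        float tdx = fabsf(1.0f / dx);
        float tdy = fabsf(1.0f / dy);
        float tdz = fabsf(1.0f / dz);
        /* ray distance to the first x/y/z boundary */
        float tx = (sx > 0 ? (x + 1 - ox) : (ox - x)) * tdx;
        float ty = (sy > 0 ? (y + 1 - oy) : (oy - y)) * tdy;
        float tz = (sz > 0 ? (z + 1 - oz) : (oz - z)) * tdz;

        while (x >= 0 && x < N && y >= 0 && y < N && z >= 0 && z < N) {
            if (grid[x][y][z]) return grid[x][y][z];
            /* step into the neighbouring cell along the nearest axis */
            if (tx < ty && tx < tz) { x += sx; tx += tdx; }
            else if (ty < tz)       { y += sy; ty += tdy; }
            else                    { z += sz; tz += tdz; }
        }
        return 0;
    }

Unlike the chain loop above, every ray is independent, which is exactly why this maps well onto one-thread-per-pixel compute dispatches.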


Thanks!


Early voxel engines (referred to as height-map engines in the article) were essentially narrow, heavily optimized, limited ray tracers.
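
Concretely: per screen column you march one ray across the height map front to back and keep a "highest row drawn so far" watermark, which is exactly the draw-each-pixel-once property mentioned upthread. A rough sketch of the Comanche-style column loop (parameter names and constants are illustrative, not from any particular engine):

    #include <stdint.h>

    #define W 320
    #define H 200
    #define MAP 1024   /* power of two, so & (MAP-1) wraps coords */

    extern uint8_t heightmap[MAP][MAP], colormap[MAP][MAP];
    extern uint8_t framebuf[H][W];

    void render_column(int col, float px, float py, float dx, float dy,
                       float cam_h, float horizon)
    {
        int max_y = H;   /* lowest screen row not yet drawn */
        /* fixed step for clarity; real engines grow the step
           with distance */
        for (float t = 1.0f; t < 600.0f; t += 1.0f) {
            int mx = ((int)(px + dx * t)) & (MAP - 1);
            int my = ((int)(py + dy * t)) & (MAP - 1);
            /* project terrain height at distance t to a screen row */
            int y = (int)((cam_h - heightmap[my][mx]) / t * 240.0f
                          + horizon);
            if (y < 0) y = 0;
            for (int r = y; r < max_y; ++r)   /* each row drawn once */
                framebuf[r][col] = colormap[my][mx];
            if (y < max_y) max_y = y;
            if (max_y == 0) break;   /* column fully covered: stop */
        }
    }

The watermark means rows already covered by nearer (taller) terrain are never touched again, so there is no overdraw at all.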



