How Voxels Became ‘The Next Big Thing’ (medium.com/eightylevel)
159 points by mariuz on May 27, 2018 | 116 comments



I remember when Voxels were the "Next Big Thing", 20 years ago.

Delta Force:

https://en.wikipedia.org/wiki/Delta_Force_(video_game)

Which used the Voxel Space engine:

https://en.wikipedia.org/wiki/Voxel_Space


After seeing Comanche in 1992, the term "voxels" for a time was quasi-synonymous with "heightmap terrain" to me. https://en.wikipedia.org/wiki/Comanche_(video_game_series)

There used to be quite a few Turbo Pascal "voxel"-heightmap demos floating around.

Blade Runner was surprising: It used voxels for characters. https://en.wikipedia.org/wiki/Blade_Runner_(1997_video_game)

The technique seemed to fall out of favor for a while due to the polygon GPU boom (I guess I missed Delta Force) until Outcast in 1999. (Back to heightmap terrains.) https://en.wikipedia.org/wiki/Outcast_(video_game)


Blade Runner didn't actually use voxels; they used a rather unique technique that they called "slice animations".

The 3D models were sliced from bottom to top into a couple of hundred slices (depending on desired quality) by intersecting the model with a horizontal plane and storing the resulting polygons.

The engine can only rotate the models around the vertical axis.
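To make that concrete, here's a minimal sketch of how such a slice renderer could work (my own reconstruction from the description above, not the actual engine code; the cam fields are illustrative). Rotation around the vertical axis reduces to a 2D rotation of each slice polygon in its own plane:

    import math

    def project_slices(slices, slice_height, yaw, cam):
        """slices: 2D polygons (lists of (x, z) points), ordered bottom to top."""
        c, s = math.cos(yaw), math.sin(yaw)
        out = []
        for i, polygon in enumerate(slices):
            y = i * slice_height  # each slice sits at a fixed height
            screen_poly = []
            for (x, z) in polygon:
                # Rotate the slice within its horizontal plane -- the only
                # rotation the engine supports -- then project to the screen.
                rx = x * c - z * s
                rz = x * s + z * c
                sx = cam["cx"] + rx * cam["scale"]
                sy = cam["cy"] - (y + rz * cam["tilt"]) * cam["scale"]
                screen_poly.append((sx, sy))
            out.append(screen_poly)
        return out  # fill these polygons back-to-front, bottom-to-top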

I made a hacky javascript version of the renderer a long time ago: http://thomas.fach-pedersen.net/bladerunner/mccoy_anim_13_fr...

EDIT: Let me also plug our WIP Blade Runner engine for ScummVM: https://github.com/scummvm/scummvm/tree/master/engines/blade...


That's really cool. Great job!


https://en.wikipedia.org/wiki/Blade_Runner_%281997_video_gam...

It seems that the people who developed the game consider it voxel-based.


The page you linked quotes Louis Castle:

> What we are using is not voxels, but sort of 'voxels plus.'

So "not voxels". Louis Castle probably called it voxel-like because he didn't want to get too technical in an interview. Their technique has not been described in detail in the press but see http://deadendthrills.com/future-imperfect-the-lost-art-of-w... for an article that calls it a "slice model"-technique.

I'm not an expert on voxels but I've reverse engineered and reimplemented most of the Blade Runner rendering engine and in my opinion it doesn't count as voxels. For one, you're never going to be able to rotate the models around any axis other than the vertical.


Fun! Yeah, we stored our data as slices for space and restricted rotation to the Y axis. Both were optimizations; since each frame of an animation was a full model, there was no need to rotate them. The renderer could render them from any angle though, so I still consider them voxels. More like "voxels lite" than "voxels plus". We also used a lot of sprite cards with zdepth and a quick normal hack for lighting. You had to cut corners where you could back then!!


Hello Louis!

I've certainly had a lot of fun figuring out how you did what you did, so thank you for that, no matter what you call it :)

You certainly crammed a lot of tech into one engine! Full screen 15 BPP videos with full z-buffer with smaller alpha-channeled videos rendered on top, character models with lighting. Even the UI is looping videos.

Once I get proper path finding working Blade Runner will be a lot more playable in our ScummVM engine.

I've probably rewatched the opening scene of the game a thousand times while working on it...

I only wished that you had used a scripting language for your game code instead of compiling it to DLLs. I know you optimized heavily for speed but it would have made our task a lot easier :)


So that's why you didn't include the Deckard -vs- Pris scene where she rotates around the X-axis! ;)

https://youtu.be/e9t5ikxjAQ4?t=1m9s


Fun indeed!

Can you talk about how the artists authored the original models? Was it an automated conversion from a standard polygon model or from a full voxel model? Or was it all drawn in this special slice format directly somehow?


Not Louis, but the article I linked above says they used 3D Studio Max and converted it to the slice model format.



Reads to me like they used voxels for the grunt work and then painted the result on some otherwise unimpressive polygons at the end (aka voxels plus as they put it themselves).


Speaking of Comanche, there was a discussion on it 6 months ago when I posted the link to "Terrain rendering in fewer than 20 lines of code" https://news.ycombinator.com/item?id=15772065


I think the difference is VR.

At the moment, CGI is essentially a craft. You use a set of tricks, illusions, and hacks to deliver an immersive experience. Games like Doom show that when you get all these tricks right, the result is impressive.

Unfortunately, like all hacks, they aren't robust. Most of them work on the assumption that the player won't look too closely at anything, will only look from certain angles, or will move through the game world at regular speeds.

People who do CGI are already well aware that when you go into VR, a lot of the hacks just don't work. Often, to deliver a high-quality experience, you have to throw polygons at the problem.

I'm pretty skeptical of this presentation - talking that much about optimization and performance makes me suspicious that their performance is terrible. However, ultimately, unless better hacks are devised, I think VR will be too much of a robustness test for the current model. In the short term, that will probably just result in some ugly graphics. In the long term, it'll put a lot of weight behind the drive to find better, less hacky solutions - and voxels are kinda first and foremost in that category.


The author doesn't see it the same way:

"There are a bunch of techniques people typically consider being voxel-based. The oldest ones used in games were height-map based where the renderer interpreted a 2D map of height values to calculate boundary between the air and the ground in the scene. That‘s not truly a voxel-based approach as there‘s no volumetric data set in use (ie in Delta Force 1, Comanche, Outcast, and others)."


I've seen them doing the same pitch for a decade now. I can't find the old version because I don't remember their original name, but I do remember their terrain demo.

https://www.youtube.com/watch?v=Gshc8GMTa1Y

It was quite impressive 8 years ago, as it is today, but 10 years of demos without applications is weird.

Here's a 2011 critique from Notch of this very technology: https://notch.tumblr.com/post/8386977075/its-a-scam - I don't think things have changed much.


The Notch blog post you linked mentions Euclideon’s “Infinite Detail”. I’ve looked into that over the years (but don’t know anything about Atomontage), so will post my thoughts here.

My understanding of Euclideon (based on the snippets of information, interviews and videos they’ve released) is that the primary technology they created is an on disk spatial index and streaming system that allows only the data that is on screen to be streamed from disk, allowing them to have very large and complex datasets on disk while still rendering in real time. The “infinite detail” branding was referring to that the detail possible is only limited by disk space, not by rendering performance.

That sounds pretty cool to me and not a scam as many people think.

However, they’ve never been particularly convincing in the applicability to games or even that the quality gains are as high as they claim. They kept touting how easy it is for artists since they can 3d scan real objects and use that scan data almost directly (but many polygonal models start as scans too, so...), but I’ve never been impressed with the quality of their scans or demos. Their highest quality demos are nice (a lot nicer than their earlier demos which, to me, honestly looked like it was from the 90s) but they still don’t compare to modern games in my opinion. They also have not proven that it works well for animations or dynamic lighting and shadowing. At least, they’ve never demonstrated it.

They did use their tech for their Holoverse[0] games, but, again, the quality of the actual graphics wasn’t particularly impressive in my opinion (certainly the animations seemed stiff and boring) and from reviews I’ve seen on YouTube, VR headsets provide a better experience and superior quality.

So, my verdict is that they do have some novel and interesting tech, but they haven’t been able to get anywhere close to the quality claims that they’ve made and I certainly wouldn’t bet on them over more traditional modern graphics.

[0] https://www.holoverse.com.au


Euclideon are scammers not because they have nothing, but because they don't have what they claim they have. They claim to have revolutionary technology; they have something so well understood that even I, merely an interested layman in the field of graphics, understand what they have just fine, very nearly well enough to sit down and implement the core data structure right now, with no sign I've ever seen that they have anything beyond that. They claim to be able to animate in realtime, despite the fact that one of the tradeoffs of the tech is that this becomes impossible in the sense we mean it. (You can animate much like a cel-based cartoon, but it can't be very interactive except under very restrictive conditions, ones that virtually no modern graphics user could accept.) It's the delta between the rather pedestrian tech they have and the wild claims they make that is the scam, not what they do have.

I've heard it claimed that the whole company is basically a Germany-specific tax dodge method. I have no independent verification of this, but it fits the facts I have about their behavior.


> not because they have nothing, but because they don't have what they claim they have

Absolutely.

> animation

Yep, from what I’ve seen of their Holoverse games, that’s exactly what they’re doing and the results are, predictably, not particularly impressive. When compared to the state-of-the-art motion-captured animation we have in modern games, it’s really not very good at all.

> data structure

It may be just their hype as opposed to actual facts, but they certainly make it sound a bit more sophisticated than a “traditional” spatial-index tree data structure in the sense that they claim they can stream just the voxels that are on screen at any given moment from a huge on disk dataset. But we both know they’ve been known to exaggerate, so you could very well be correct.

> It's delta between the rather pedestrian tech they have and the wild claims they make that is the scam, not what they do have.

That’s a very good point and I certainly agree.


"It may be just their hype as opposed to actual facts, but they certainly make it sound a bit more sophisticated than a “traditional” spatial-index tree data structure in the sense that they claim they can stream just the voxels that are on screen at any given moment from a huge on disk dataset."

To be honest, while I don't have that much difficulty imagining that they can stream voxels off a disk (since it's a straightforward application of the octree structure; you get a good 50% of the way there just by writing a naive octree larger than memory and letting the OS swap algorithm do its thing, with some slight preloading), none of the demos they've ever shown should require anything to be streamed off the disk. Everything I've seen would fit firmly in the memory of very modest desktop systems of the time of the demo. It's possible you can zoom in farther than they demonstrate, but, I mean, it's a demo. I'm allowed to expect they're showing off the best.

This is another one of the reasons people like me call them scammers; the first few graphics demos they put out weren't even that impressive. They were ugly and lacking in detail. They weren't even the kind of ugly you get when you have impressive graphics tech and the person showing it off has no visual design chops; they were just plain ugly. Had the demos gone around the internet stripped of the hype they were accompanied by, nobody would have given them a second look. Any contemporary nVidia demo put out to demonstrate their graphics cards looked wildly better.

The demos where they scan real scenes and display them work much better. The weirdest thing of all to me is that they don't lean into that and just make that their business, but no, they won't shut up about how awesomesauce their stuff will be for games, even when they have a perfectly sensible application developed. Bizarre stuff.


Yeah, Notch is talking about a different engine with overly bold claims, but given the technological overlap, many of the talking points apply.

For example, animation and deformation, light mapping, and content production are still going to be issues. Sure, they have a motion-captured talking person and it's impressive, but it's still part of a fixed content pipeline; if you require ragdoll animation or an interactive actor it's way harder.


>it was quite impressive 8 years ago as it is today, but 10 years of demo without applications are weird.

That's practically Molyneuxesque!


What was/is holding voxels back? The new programming model? Computation density?

I guess Minecraft is an example of a voxel game (supposedly a voxel game written in block style, according to a random post on the internet [0]) that became really popular -- so I guess the answer could be "nothing", but it sure seems like voxels are still niche (and don't want to be).

[0]: https://forum.unity.com/threads/voxel-games-vs-block-games.2...


In principle, "voxels" and "polygons" are the same algorithm: you project some points on the surface to the screen and eliminate invisible ones with the z-buffer. Polygons allow you to use sparser points and fill the space between them with interpolation; voxels rely on point density or on blowing up the point size.

Interpolation is quite complicated, and voxels, on the other hand, could be arranged so they would be very easy to draw on a traditional CPU. So one could write a really fast voxel renderer on something like a Pentium 200 which would produce better-looking graphics than a polygonal renderer at the same speed (it would not be as versatile, but when you do a demo you are free to choose scenes where voxels will do the job better).

On GPUs interpolation is not just cheap but the only way to exploit massive parallelism, since you can draw all interpolated points simultaneously. Drawing points, on the other hand, is sequential (because of the z-buffer all primitives have to be drawn in a fixed order) and is already super slow by itself. GPUs are also optimized for drawing triangles, so they do a lot of things with them that they cannot do for a list of points (which would be the primitive you'd use to draw voxels). Of course you can do your voxels in compute, but then you are not using the color/depth buffer hardware, so either way you are disadvantaged.


It was my understanding that a large reason why early voxel engines had great performance is that you only draw each pixel once (kinda like how deferred lighting only calculates lights for each pixel once), eliminating overdraw and therefore avoiding work that wasn't ever going to be visible.

But now GPUs can crunch that data super fast (as you say) and memory bandwidth is often a bigger issue. So while voxels still get used in some places, they’re avoided in favour of more compressed data formats and rendering techniques.

At least, that’s what it sounded like to me, I’m no graphics expert.


>because you only draw each pixel once

It depends on the algorithm. If you did heightfields then yes, you could easily eliminate invisible pixels without testing each of them against a z-buffer. However, you could not do complex models with heightfields.

A popular general voxel algorithm in the 90s was "chains", where an object was represented as a sequence of voxels, the position of each encoded as an offset from the previous one; since voxels were on a regular grid, there were only 26 possible offsets. If you did not do perspective projection, then you could project all 26 deltas into screen space once and compute each voxel's position with 3 additions. With the offset taking 5 bits you could fit each voxel in a 16-bit word together with some ramp-lit material for per-pixel lighting. It was a very fast loop even on a CPU without division. On a modern CPU or GPU, of course, this is painfully slow, since each position depends on the previous one, which enforces a strict order on the loop. Memory-bandwidth-wise this is quite nice because of the very compact input data. E.g. you could put position and texture deltas into 8-bit voxels and have a more compact representation for a smaller model than with the popular 16/32-byte vertices.
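A sketch of that inner loop, for the curious (my reconstruction of the scheme described above, not anyone's actual 90s code):

    import numpy as np

    W, H = 320, 200

    # The 26 possible offsets to a neighboring cell on a regular grid.
    OFFSETS = [(dx, dy, dz)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]

    def render_chain(chain, origin, proj):
        """chain: sequence of (offset_code, material); proj: 3x3 world-to-screen."""
        # Project all 26 deltas into screen space once per frame, so that
        # advancing to the next voxel costs only three additions.
        deltas = [proj @ np.array(o, dtype=float) for o in OFFSETS]
        frame = np.zeros((H, W), dtype=np.uint8)
        zbuf = np.full((H, W), np.inf)
        x, y, z = proj @ np.array(origin, dtype=float)
        for code, material in chain:
            dx, dy, dz = deltas[code]
            x, y, z = x + dx, y + dy, z + dz  # the three additions
            sx, sy = int(x), int(y)
            if 0 <= sx < W and 0 <= sy < H and z < zbuf[sy, sx]:
                zbuf[sy, sx] = z              # per-voxel z-test
                frame[sy, sx] = material
        return frame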


Oh that’s interesting, thanks for the comment! Must read up on it more :)

Do you know how modern voxel algorithms work? Do they just blindly render point clouds to make use of the GPU parallelism and let the z-buffer remove invisible voxels (and a spatial index to render only the ones in view) or is there more sophistication? I assume there’s some clever parallel algorithms available nowadays, or?


I am not closely following voxel developments so I could be wrong, but my impression is that the practical implementations focus on the representation and its compression. They all render with some type of raytracing using compute. There have been approaches to generate a triangle mesh from a voxel representation, but I am not sure how far they got.


Thanks!


Early voxel engines (referred to as height-map engines in the article) were essentially narrow, optimized, limited ray tracers.


The "Voxel Space" engine from the 90s has very little to do with what we call voxels today. It is basically a raycasting engine for height maps [0]. It is very limited: you can't look up and down more than 20 degrees, and you can't even render houses, cars or trees.

[0] https://github.com/s-macke/VoxelSpace
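For reference, the core of that algorithm fits in a few lines. A minimal sketch (paraphrasing the algorithm documented in the linked repo; parameter names are mine): for each screen column, march away from the camera, project each terrain sample to a screen height, and draw only what rises above everything drawn so far:

    import numpy as np

    def render_column(height, color, cam_x, cam_y, cam_h, dx, dy, H,
                      horizon=60, scale=240.0, max_dist=600):
        """Render one screen column along ray direction (dx, dy)."""
        column = np.zeros(H, dtype=np.uint32)
        y_min = H  # lowest screen row still uncovered (y grows downward)
        for dist in range(1, max_dist):
            mx = int(cam_x + dx * dist) % height.shape[0]
            my = int(cam_y + dy * dist) % height.shape[1]
            # Perspective: farther terrain of the same height appears lower.
            h_screen = int((cam_h - height[mx, my]) * scale / dist + horizon)
            if h_screen < y_min:
                column[max(h_screen, 0):y_min] = color[mx, my]
                y_min = max(h_screen, 0)  # occlusion comes for free
                if y_min == 0:
                    break  # column fully covered
        return column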


> What was/is holding voxel back?

The fact that the domains they might be used in are also the domains people are guaranteed to have spent a lot of money on specialized polygon-pushing processors. Voxel techniques seemed to peak at about the time of the last hurrah of CPU-based rendering.


The main problem is memory bandwidth at higher resolutions. Minecraft runs at a really low resolution. (Internally.) Occlusion handling is less efficient too because bounding and occlusion volumes are discrete and either very inaccurate or nearly as slow as rendering the voxels.


>> What was/is holding voxel back?

Modeling and animation. I'm not even sure what some of these videos are showing, but it doesn't seem like it can be real voxels on a regular grid. Regular grids are hard to create models with, and are essentially impossible to animate in useful ways. I didn't see mention of how the animation in these clips was being done but there were indications that the models started out as polygons.


The article calls out Minecraft: voxel-looking, but actually polygons. A kind of voxel cheater.


One question is: how do you rig and deform voxel models for skeletal animation?


Another technique related to voxels is the "Signed Distance Field" (using a data Field or procedural Function).

Signed Distance Function: https://en.wikipedia.org/wiki/Signed_distance_function

2D Signed Distance Fields are used for rendering fonts and other shapes defined by a texture whose signed pixels (-128..128) specify how far each is from the border (+ is outside, 0 is on the border, - is inside). They scale beautifully because you can smoothly interpolate between samples.

Drawing Text with Signed Distance Fields in Mapbox GL: https://blog.mapbox.com/drawing-text-with-signed-distance-fi...

TextMesh Pro - Font Asset Creation Process: https://www.youtube.com/watch?v=eXhfygRgIVY

You can convert them to piecewise outline contours (like an iso-map slicing at many different elevations) with the Marching Squares algorithm, but it's much more efficient to render them directly pixel-by-pixel with a shader.

Marching Squares: https://en.wikipedia.org/wiki/Marching_squares

Signed Distance Fields are excellent for shaders efficiently antialiasing and drawing effects like outlines, drop shadows, embossed and beveled edges, glows, multi-texture mapping, bump mapping, etc. The shader can quickly determine exactly how far each pixel is from the outline, and blend the appropriate effects together for that pixel, so you get precise anti-aliasing, outlines, drop shadows, etc., practically for free! (And literally for free, now that Unity gives TextMesh Pro away for free.)
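A minimal sketch of that per-pixel logic (written in Python for readability; in practice it's a few lines of fragment shader), assuming the sampled distance is remapped so 0 is on the border and positive is outside:

    def smoothstep(e0, e1, x):
        t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
        return t * t * (3.0 - 2.0 * t)

    def shade(d, aa=0.03, outline_w=0.1):
        """d: signed distance sampled (bilinearly) from the SDF texture."""
        fill = 1.0 - smoothstep(-aa, aa, d)          # antialiased fill
        fill_plus_outline = 1.0 - smoothstep(outline_w - aa, outline_w + aa, d)
        outline = fill_plus_outline - fill           # the outline band only
        return fill, outline  # blend fill/outline colors by these weights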

TextMesh Pro - Unite 14 Demo: https://www.youtube.com/watch?v=q3ROZmdu65o

3D Signed Distance Fields are used for rendering volumes with the Marching Cubes and Tetrahedra algorithms (which are nice for users continuously carving and sculpting with tools, like Astroneer).

Astroneer: https://astroneer.space/

Astroneer - Digging to the Center of the Planet: https://www.youtube.com/watch?v=C3zxBhPzxYU

That also scales and handles different levels of detail nicely because you can interpolate between samples -- it creates smooth polygon meshes with interpolated normals, not harsh cubes with pointy corners like Minecraft.
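The interpolation step is tiny; a minimal sketch of how Marching Cubes places a mesh vertex on a cube edge where the sampled field crosses zero (illustrative, not tied to any particular implementation):

    def edge_vertex(p0, p1, d0, d1):
        """p0, p1: edge endpoints; d0, d1: field samples of opposite sign."""
        t = d0 / (d0 - d1)  # linear estimate of the zero crossing
        return tuple(a + t * (b - a) for a, b in zip(p0, p1))

    # Example: the surface crosses a quarter of the way along this edge.
    edge_vertex((0, 0, 0), (1, 0, 0), -0.25, 0.75)  # -> (0.25, 0.0, 0.0)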

Marching Cubes: https://en.wikipedia.org/wiki/Marching_cubes

Marching Tetrahedra: https://en.wikipedia.org/wiki/Marching_tetrahedra

Fast Marching Method: https://en.wikipedia.org/wiki/Fast_marching_method

Fast Sweeping Method: https://en.wikipedia.org/wiki/Fast_sweeping_method

Level-Set Method: https://en.wikipedia.org/wiki/Level-set_method

Shaders can't render 3D Signed Distance Fields as directly and efficiently as 2D (unless you're rendering to 3D volumetric textures). But you can still implement Marching Cubes on the GPU, using compute shaders (or abusing 2D shaders).

Marching Cubes on the GPU for Unity3D: https://github.com/Scrawk/Marching-Cubes-On-The-GPU


I recall some glorious LAN sessions in the various Delta Force games. The only shooters where, to snipe, you actually had to account for bullet drop because of the distances involved.


Even older than that, Comanche: Maximum Overkill (1992) was voxel based. Amazing at the time, but I remember it'd fall apart pretty quickly as you got close to objects.

https://en.wikipedia.org/wiki/Comanche_(video_game_series)


I think the trojan horse for voxel engines is lighting systems.

There are already game engines that voxelize the world into a very rough voxel representation to efficiently calculate indirect light propagation.

I think more and more lighting and physics simulations will be implemented in a rough voxel representation of the game world. Then, to improve accuracy and detail this representation will become finer and finer, until it makes more sense to render this voxel representation directly, rather than rendering the polygons.

What's missing is perhaps hardware support for rasterization of polygons to voxels. It's hard to compete against the super efficient hardware rasterization of polygons to pixels otherwise.

I think character models will be polygon based for a long time still, since it's much more suited for character animation.


You can use the GPU's hardware rasterizer to voxelize a scene.

http://www.alexandre-pestana.com/voxelization-using-gpu-hard...

This approach is used in most commercial games that support voxel-based global illumination.


Voxels always were the next big thing. The problem is they don't just demand resources at n^3 across all dimensions; they demand n^6 if you want to go into fine-grained detail depending on closeness to the actor.

Cube: Sauerbraten did go there, and it uses something like octrees to store these tessellated, refined voxels.

And that is where the whole concept crawls onto the shore to die: the tooling for this to work with everything that has been invented before is just not efficient. To sculpt in voxels, you basically send marching cubes through polygon models.

Personally, that was the point where I dropped it, because when I have polys already and don't need the destructible physics (they ruin gameplay and performance most of the time anyway), then why bother?

It's one of those technically superior solutions which never seem to surpass the legacy solutions (like Rust and C). My assumption is that one day the polygon world will just eat the voxel world, having a .toVoxel.ApplyPhysicalModel(Sand).AfterPhysicsRestDo(ToPoly), and that's what will remain.

Very small cubes of dust.


Where are you getting n^6 from?

A hierarchical level-of-detail scheme (e.g. the one used in Sauerbraten) is still only n^3 in the worst case. If you store one full-resolution copy of your world, and then a copy with 1/2 the resolution, then 1/4 the resolution and so on, you get a geometric series that converges to only a constant-factor overhead.
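Writing the series out (my arithmetic, not from the article): storing every level of an n^3 mip hierarchy costs

    n^3 + (n/2)^3 + (n/4)^3 + ... = n^3 * (1 + 1/8 + 1/64 + ...) = (8/7) * n^3

so the coarser copies add only about 14% on top of the full-resolution grid.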


You need a way to anisotropically filter the voxels. This means the mipmap should be fully indexable. That is n^3 * m memory for the mipmap and n^4 filtering...

Isotropic filtering is easier, just 3*n. However, it will make surfaces look... unreal, as if physics didn't form them, at times.

You cannot wing the quality as easily in 3D though. Continuity is paramount, as is some degree of smoothness. Ringing is unacceptable.


You are right, it's been quite a while since I dealt with the Cube engine. The level-of-detail mechanism just adds a constant to the formula, basically speeding up the n^3 voxel growth as the player explores the world in detail and distance at the same time in the worst case.

Looking at an ant's mandibles' micro-biota while observing a planet near the horizon.

https://www.youtube.com/watch?v=SsjeiLklThY


Here is a very old voxel project that still amazes me

http://www.advsys.net/ken/voxlap.htm

It has pretty cool looking visuals, dynamic objects and some basic physics as well. I think the most impressive part is that the developer was able to run this on ~10-15 year old computers.


For those that don't know, Ken Silverman also created the Build engine that runs Duke Nukem 3D and others.


This site does something weird that I haven't seen before. As you scroll around the page, not clicking on anything, it adds entries to your stack of visited pages! You then have to hit the back button N times to get out. It's stealthily redirecting to itself, or something like that.

You can monitor this: stay on the page, and then every once in a while right click on the back-button in Firefox (or however it's done in your browser to pop up the forward/back list). See the growing number of repetitions of the page title in the list.


With JS you can push things into the history. Handy for some things in a single-page application, but misused in this application.


I’m not getting that on Safari mobile, but I do hit a ton of forced reloads due to repeated JS issues if I try to zoom in on the images. I wonder if that might be related in some way.


One of the reasons I've disabled js on Medium.


Just another reason to not even visit medium. That site has become complete garbage.


Weird... Is there any reason for them to do that or is it safe to assume it’s an error?


HTML5 history API abuse


Abused or indirectly used?


What happened to that Australian company that had millions in funding for voxel tech? They seemed quite skittish and didn't want to show all their information in their demos. Did anything come of it?

Euclideon - 7 years ago: https://www.youtube.com/watch?v=00gAbgBu8R4


2018 Euclideon video.[1]

The big quadrillion-voxel worlds and planet-sized world claims seem to have disappeared, but now they can do small-scale voxel projects and handle deformation. No info as to how, and no products on their site for non-static voxel environments.

Voxel systems are straightforward if everything is in memory, but how do they scale?

[1] https://www.youtube.com/watch?v=nr5JqYYye3w [2] https://developer.nvidia.com/content/basics-gpu-voxelization


The video you linked to in your [1] is of Automontage. Automontage and Euclideon are different teams.

Euclideon's channel is https://www.youtube.com/user/EuclideonOfficial

I myself like Automontage better because they are waaay less "hype" and more "here's what I have".


Typo: It should be Atomontage


I don't think the recent surge in interest in voxels is from people with deep knowledge of the technology. For the vast majority of people voxels are just "that thing Minecraft uses." That's why every indie game made in the last five years is made of huge blocks with low-res textures.


I was really surprised that this is the only comment mentioning the obvious. That will be 60% or more of the current revival.

Additionally there are probably quite a few people who didn't know anything about Voxels, but transitioned the "pixelart" hype into the 3D world, coming to the same end point.

I bet both these reasons together make 90% of the current interest.


Perhaps someone here can answer this: if a voxel is an aligned cube in a rectangular grid, how could one represent a mirror (or shiny surface) that's oriented at (say) 45 degrees w.r.t. the grid?


If all pixels are squares in a grid, how could one represent a curved line?

Answer is similar - approximation, and lots of tiny squares/cubes/voxels.

E.g. https://static1.squarespace.com/static/5aa5ba4b55b02ca2a2f3b...


The problem is that no matter how small you make the voxels, the reflection will never converge to something accurate.

You have this situation:

    *
    *
    *
    ****
       *         <=== incoming light
       *         ====> reflected light
       ****
          *
          *
          *
Versus:

    \  | reflected light
     \ |
      \|______ incoming light
       \
        \
         \
PS: This makes me wonder how nature does it. Is the fact that photons are (in a sense) "bigger" than atoms responsible for the fact that we can have shiny surfaces?


Sure, but then you don't use just Voxels, you use voxels organized in an octree. That way your voxels have information on their neighbors and can compute lighting and reflection for each such Voxel.

http://lup.lub.lu.se/luur/download?func=downloadFile&recordO...


> That way your voxels have information on their neighbors and can compute lighting and reflection for each such Voxel.

How many neighbors do you need to fetch from the tree for every pixel, to reconstruct a surface normal vector with good precision like 0.5-1% each axis?

Without these normals, it’ll be very hard to implement stuff like specular lighting, and reflections/refraction/environment mapping.

I think for this reason there’re no reflecting materials on the first video from that article (the one with the tank), and there’s high temporal noise on the second video “Voxel Surfaces with Materials”.


I can't really say. This particular area isn't really my specialty, but cursory Google searches yielded some interesting links.

From what I remember, there are techniques to cull voxels that aren't visible. So it's possible some of them don't even matter when doing calculation. See: https://tomcc.github.io/2014/08/31/visibility-1.html https://tomcc.github.io/2014/08/31/visibility-2.html

From what I gather, the biggest issue with Voxel are lack of proper hardware support, forcing you to render more using software.


> there are techniques to cull voxels that aren't visible

Culling invisible voxels is fine. But to reconstruct a normal with sufficient precision, I’d say you need to sample quite a large area of nearby visible voxels. IMO the RAM bandwidth costs are prohibitive.

As far as I understand it’s impossible to do in screen space because that would introduce artifacts near the edges of the objects.

With traditional rasterizers, accurate per-pixel normals are very cheap to compute, i.e. some computations in the VS, then interpolation within triangle implemented in hardware, then a single lookup from the normal map in the PS.

> lack of proper hardware support

What hardware support would you need for them? GPUs are very efficient doing many kinds of general-purpose computing. For the storage, modern GPUs support volume tiled resources, at the first sight this feature in D3D 11.3 and 12.0 looks very suitable for these voxels: https://msdn.microsoft.com/en-us/library/windows/desktop/dn9...


> Is the fact that photons are (in a sense) "bigger" than atoms responsible for the fact that we can have shiny surfaces?

Yes, if you simplify things a bit, this is true. The "size of the photons" is in this case their wavelength. The wavelength of visible light (~500 nm) is very much larger than the size of the silver atoms in a mirror (~100 pm). This is why you can have a mirror even with surfaces that are not atomically flat.


Photons don't bounce; they get absorbed and re-emitted by the material at a specific frequency once the electron gets excited enough. Metallic materials emit the photon without absorbing much energy, while other materials absorb more of the energy and radiate it later as infrared.

The "direction" of the emission depends on the interference with all the other incoming photons. That part is rooted in quantum mechanics, and I've no idea how it works exactly.


This is irrelevant to the way lighting is typically modeled in videogames.


He was replying to the last sentence in the grandparent.


I missed the PS, my mistake. Thank you for pointing it out.


Each voxel could have a normal. That would allow for smooth diagonal and even curved surfaces.


A voxel is no more a cube than a pixel is a square: it's a sample point. For each sample point you store material information and the value of a scalar field at that point. You can then compute normals to the sampled surface using something like marching cubes. https://en.wikipedia.org/wiki/Marching_cubes
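A minimal sketch of that normal computation, using central differences on the sampled field (illustrative; marching-cubes implementations typically do exactly this at the cell corners and then interpolate):

    def normal(field, x, y, z, eps=1):
        """field(x, y, z) -> scalar sample; returns the (unnormalized)
        gradient, which points along the implicit surface's normal."""
        return (field(x + eps, y, z) - field(x - eps, y, z),
                field(x, y + eps, z) - field(x, y - eps, z),
                field(x, y, z + eps) - field(x, y, z - eps))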


The Marching Cubes and Marching Tetrahedra algorithms I described in another comment in this thread can represent non-grid-aligned surfaces.

They interpolate the positions and normals of the mesh vertices between the samples of the 3D grid. Despite its name, Marching Cubes doesn't render a bunch of cubes like Minecraft; rather, it renders arbitrarily oriented triangles whose vertices happen to fall on the edges of cubes (or tetrahedra).

Marching Cubes: https://en.wikipedia.org/wiki/Marching_cubes

Marching Tetrahedra: https://en.wikipedia.org/wiki/Marching_tetrahedra

Astroneer: https://astroneer.space/

Astroneer - Digging to the Center of the Planet: https://www.youtube.com/watch?v=C3zxBhPzxYU


Waiting for one of these demos to include animated characters. Not holding my breath.


There's been a lot of work done in the last 5 years on animating voxel characters. You can probably do a Google search and find some papers to your satisfaction. It is something of a challenge though, in that voxels used as an atomic unit can't deform. I can't remember what tricks are used, but I believe it requires perverting the voxel engine to add exceptions/complexity to deal with animation.


Check out what some mad genius did with The Sims 1's original "2D sprite + z-buffer" artwork -- it's not perfect, but it's totally flabbergasting that it works as well as it does!

The Sims 1 in 3D: https://www.youtube.com/watch?v=r5D7GPQDDUI

Here's how The Sims 1 sprites work:

Each object has a set of sprite bitmaps for four different rotations, at three different scales. Symmetrical objects can re-use and flip sprites for different rotations. Each set of sprite bitmaps includes a color bitmap, an alpha bitmap, and a z-buffer bitmap. The smaller scales can be derived by shrinking the largest scale, but Maxis renders them all directly in the 3D Studio Max sprite exporter, which looks nicer, and players can export and import them with Transmogrifier, and edit and touch them up with 2D tools like Photoshop or Gimp (or even program 3D tools like Blender to export them).
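The compositing itself is simple; here's a minimal sketch of drawing one such sprite with its per-pixel depth (my illustration of the scheme described above, treating the alpha bitmap as a mask):

    import numpy as np

    def composite(frame, zbuf, rgb, alpha, depth, ox, oy):
        """Blit one sprite (color, alpha-mask and z bitmaps) at (ox, oy)."""
        h, w = alpha.shape
        dst_z = zbuf[oy:oy + h, ox:ox + w]
        dst_c = frame[oy:oy + h, ox:ox + w]
        # A sprite pixel wins where it's opaque and closer than what's there.
        mask = (alpha > 0) & (depth < dst_z)
        dst_c[mask] = rgb[mask]
        dst_z[mask] = depth[mask]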

Making a totally new object! By Bunny Wuffles. http://bwsost.woobsha.com/9firstnewobject/page1.html

Transmogrifier walk-through: http://wiki.thesimsresource.com/images/5/54/Steve-Using_Tmog...


Despite all of the controversy I'm still in awe at what Hello Games were able to pull off using this technique in No Man's Sky.


Atomontage demos are cool, but there's always been some controversy about what their tech can do in the "real-world".


Well, demo videos don't tell you a lot. They claim to have some secret sauce.

However, the demo videos make you question whether they have a secret sauce at all. When you see them delete parts of the game world, you see how thin the ground layer is where part of it is deleted.

There are big problems with voxels. The simplest being space: as the game world increases in size, the number of voxels grows with the 3rd power.

My big problem is that no technical details have been published. They claim to have a pending patent on their site. However, that may not really tell us much.

I have played with voxels myself, and octrees and other algorithms related to voxels. The only real use voxels have seen is medical imaging. If they have figured something out, they should be publishing a paper in a math journal. I doubt it though, mainly since it would involve pushing things to the limit, or beyond the limits, of information theory. The only other option for such small voxels is a massive amount of data storage, or objects with relatively homogeneous structure, at which point it's not much better than hollow polygons.

Main memory is limited, and while SSDs are getting better, when playing with high-density voxels with a pretty high amount of entropy you can quickly exhaust main memory. So what I did instead was just request exposed/surface voxels and arrange the octree data structure to be linear from the outside in, to help with memory caches. Still, all that really mattered was the exposed voxels, and that's not much different from a hollow polygon model. Honestly, after having played with voxels myself I would say they are cool, but you're not making a whole game out of them. If anything, they're just a tool to be used in some parts of a game.

One other thing that gives me doubt they have anything really new is the fact that when they start destroying their models, you see the same color inside the model for the most part. Even the brick wall that got destroyed is pretty homogeneous.


Looks like they just formed a real company https://www.atomontage.com/blog/2018/4/5/atomontage-inc-laun....

One of their investors is an MD specializing in imaging. https://www.atomontage.com/#team-section That'll be their bread and butter.

I always wondered whether the community would pick up VoxelQuest https://www.voxelquest.com/ and run with it, but last I looked that wasn't the case.

I'm too old to do serious graphics programming, but might be interested in a voxel engine written in C# with a good community.


> I'm too old to do serious graphics programming

Too old? Maybe too busy, but never too old.


The issue with VoxelQuest is that, while it was an astonishing piece of technology, it wasn't documented in any way a normal human can understand. I spent several hours trying to read it, and while I'm impressed with the results, the lack of design patterns and structured code (the code is separated into just about 10 files and is almost 50k lines long, if not more) makes it really difficult to work with.

I dabbled in voxels myself, even wrote a Minecraft clone and all that, but VoxelQuest was the first real voxel engine I saw working in a really nice way. It had proper lighting, shadows and even nice effects such as reflections. It is kind of a bummer that the dev quit the project, so much potential wasted :(


I agree with you on the poor documentation - that stemmed from me almost never working with others professionally and either doing a lot of solo contract work or work where my code was mostly isolated from other systems. I've gotten a little bit better about standard practices after working for a company that uses them and team work is super-critical as small changes can potentially cost $10k+ in damage per hour, so understanding the code and making it testable is more important than producing new code.

Over the past year I have been working on Voxel Quest silently, because progress is so incredibly slow that I still do not have much interesting to show (and I dove to the deepest depths of feature creep and tried to build a compiler/transpiler to address a lot of problems I was having with C++ and slow turnaround times).

To be honest, over the past month or so I got frustrated with my slow progress and took a break to work on a simple 2D game, just to see if I had any ability to control what I consider my greatest weakness: failure to prioritize. So far that experiment is successful, but now I'm asking myself if making a 2D game is a waste of my knowledge - maybe that is true or maybe it's just shiny object syndrome. >_<

Overall the biggest problem is my past two years have been almost entirely consumed by my work, renovating my house, raising a kid, and other stuff. I typically have less than 10 hours per week to concentrate on projects (more often 5), so I am working about 1/8 to 1/16 as fast as I was when working on VQ full time.

Anyhow, it would be nice to work on it in greater capacity, or at least something similar, I just don't know where that money would come from. I am very hesitant to ask people for money at this point, given my track record, so I feel like my next thing has to be self-funded or funded by someone who can truly afford to lose the money. I could easily find people who would throw more money at VQ, but I don't think it would be a sufficient amount to survive off of, so it would be effectively wasted.

Not sure what to do in the short term, to be honest, other than work at my current pace. :/ I'll share my current progress publicly once there is something interesting to show.


> The simplest being space as the game world increases in size the amount vowels grows with the 3rd power.

Not quite. Not unless you store the voxels in a three-dimensional grid structure, which you wouldn't do for anything but the simplest applications.

If your game world is anything like the real world, most of it is empty space. Additionally, you usually don't have to store realistic data for the interior of the models.

You'll use a data structure that exploits this. The most common and simple one for voxels is the sparse voxel octree.
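A minimal sketch of the idea (illustrative, not any particular engine): children are only allocated along paths that actually contain voxels, so empty space costs nothing:

    class Node:
        __slots__ = ("children", "material")

        def __init__(self):
            self.children = [None] * 8  # None = empty octant (sparse)
            self.material = None        # set on leaves only

    def insert(root, x, y, z, size, material):
        """Insert a voxel at integer coords inside a size^3 cube (size = 2^k)."""
        node = root
        while size > 1:
            size //= 2
            octant = (x >= size) | ((y >= size) << 1) | ((z >= size) << 2)
            x, y, z = x % size, y % size, z % size
            if node.children[octant] is None:
                node.children[octant] = Node()
            node = node.children[octant]
        node.material = material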

In addition to this, you can use compression techniques for both texture data and geometry. Most of this data will be relatively predictable.

I'd wager that the data for a modern, efficient voxel implementation grows at roughly the same rate as a modern polygon-based implementation. Most models these days have quite a high density of polygons, after all. No surface in the real world is completely flat. As the article mentions, when you have a high density of polygons, you're using a lot of data to store the positions of each vertex, while in a voxel data structure that data is implicit.

I think the only real problem with voxels when competing with modern polygon engines, is animation. Polygon models have a huge benefit there. Like you said though, you might want to use voxels for part of the game. I know that John Carmack was interested in using it for static landscape and buildings, and saw it as a natural extension of the megatexture technology.


If this is a destructible environment, they would need to store interior details to render exposed material in any real application.

Animation is very hard. Mathematics can help here, but the results look unnatural and fairly shoddy (see the building doing a simple sinusoidal sway in their video). The problem is essentially one of scale. With polygons, we only need to animate the vertices. With this, we need to animate every voxel. This presents massive opportunities, but also massive challenges, both for the machine performing the computations and for the software engineers trying to get realistic animations.


It should be possible, with enough elbow grease, to build an engine based on voxels IMO. It seems like some combination of lazily generating voxels from simple primitives and being able to generate what the "inside" of something looks like on the fly gets you most of the way through the rendering problems at least, right?

Now whether that's worth it or not is an open problem.


And for that, you could again look at what Minecraft-like games have come up with.

Minetest, for example, groups its "voxels" into cubic chunks, which are then loaded depending on a max distance and presumably some sort of ray tracing approach to figure out if the "voxels" in such a chunk are visible.


Minecraft itself throws a chunk at a time (a horizontal 16x16 section of the world) at its renderer, which then spits out an OpenGL model for rendering (IIRC). This is the reason, or one of the reasons, why clipping through one block gives the ability to see through all those in the middle, instead of "just" seeing the next block adjacent to the one you are clipping into. The "ray tracing" is just that only faces of voxels that touch air or another non-solid block (fencepost, liquid, torch, etc.) are included in the OpenGL model, along with their texture. The rest of the visibility is done by the GPU, handled via the depth buffer.
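A minimal sketch of that meshing step (illustrative; real implementations work on flat arrays, but the visibility rule is the same):

    NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def mesh_chunk(blocks, is_solid):
        """blocks: dict of (x, y, z) -> block id; returns visible faces."""
        faces = []
        for (x, y, z), block in blocks.items():
            if not is_solid(block):
                continue
            for dx, dy, dz in NEIGHBORS:
                neighbor = blocks.get((x + dx, y + dy, z + dz))
                if neighbor is None or not is_solid(neighbor):
                    # Face borders air or a non-solid block: emit it.
                    faces.append(((x, y, z), (dx, dy, dz), block))
        return faces  # two triangles per face, uploaded as one GL model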


I probably should have highlighted this better, but I was not talking about Minecraft in the second part of my comment. I was talking about Minetest:

https://www.minetest.net/

Which is essentially a game engine for creating Minecraft-like games. Minecraft itself grew historically and is not state of the art anymore.

Which especially shows in the pillar chunks that you mention. Their size is 16 x 16 x world-height, meaning that the world height can only be so big (256 blocks in the case of Minecraft).

Minetest has cubic chunks, 16x16x16, which allows for virtually infinite world height, but also for loading those chunks in a much smarter way than Minecraft has it (which basically just loads chunks in a circle around the player).

When you're in a cave, for example, you don't need 400 blocks in all directions to be loaded, you just need to see the next wall. Whereas when you're standing on a mountain, you do need to be able to see all of the chunks that make up the surface of the surrounding terrain, but you again don't need any blocks to be loaded below the surface.

And for that, I assume they use a raytracing approach. Periodically check what chunks are in line of sight for the player and make sure those are loaded. When on a mountain, that's a lot of chunks, when in a cave, it'll only be a few.

So, I'm not talking about whether they are rendered or not, I'm talking about them being loaded into RAM or not (which obviously does also result in them not being rendered when they're not loaded).


For something like Minecraft, where the voxels are comparatively huge, they're probably using something like Marching Cubes to extract a poly mesh from the voxel chunks rather than ray tracing. You can actually get pretty far with surface extraction techniques; see for example the VoxelFarm engine.


I was lucky enough to play some of their demos in person recently. I was really surprised by the performance (even in VR). I feel like there are at least a few real-world applications that could benefit with regards to visualization. Particularly "organic" stuff - I noticed that voxels were able to capture details like skin wrinkles and anatomy really well (they demoed a beating heart captured from a CT scan).


Lighting, shadows and global illumination are still incredibly hard with voxels. And this is true for both Atomontage and Euclideon. Their demos look really over-HDR'ed.


Not really, in fact it's the other way around.

While rendering triangle meshes using rasterization (the "classic" way) is fast (because modern hardware is built for this), doing real-time light transport is a pain-in-the-ass, and involves a lot of inelegant (but very smart!) tricks (e.g. shadow mapping). On the other hand, we have ray tracing on triangle meshes, which is elegant, but really doesn't perform as well as rasterization, because you need a lot of rays to converge to a non-noisy result.

The state-of-the-art global illumination renderers all use a hybrid approach. Either they reduce the triangle mesh so real-time radiosity becomes viable (like in the Frostbite 2 engine), though these suffer from not being able to simulate specular surfaces as well. The other approach is to discretize the mesh to a voxel representation and do Voxel Cone Tracing (VCT) (which is like ray tracing, but instead of shooting rays, you shoot "thick" rays and get the incoming importance over an area), which produces non-noisy results and can be done really fast. There is a reason why engines like Unreal Engine, Unity and Godot have started to implement VCT, and why a lot of research is being geared toward finding efficient ways to do scene voxelization (just look at the DICE/SEED thesis work proposals and you'll see that this is indeed true).

The advantage of having an engine that only deals with voxels is that the techniques I've described above can be done naturally; you can use a "pure" voxel approach instead of a hybrid. There is no need to convert a triangle mesh scene to a voxel representation. My main worry with the method shown in the article is that doing animation (animations not generated procedurally) using voxels is very hard. Also, the amount of memory needed to store a voxel scene is several orders of magnitude larger than a triangle mesh, even if you compress it, and worst of all, you need to store the animation as well, which depending on how that's done can be an absolute nightmare (but I'm not up to date with the latest research there; maybe there are some fancy techniques that solve this problem).


Not at all. Lighting and shadows can be calculated in exactly the same way as with polygons. As for GI, voxels are superior to polygons. If you look at all the real-time GI technologies, they are mostly voxel based. Ex: Nvidia VXGI


I'm not sure what exactly that Playstation 2-looking tank in the video is supposed to do to convince me that voxels are the next big thing, but it's not doing it.


You're looking at the wrong part.


Hope they'll make it, always liked voxels.


Want to play with Minecraft-style voxels? There's a really wonderful freeware voxel editor called MagicaVoxel:

https://ephtracy.github.io/

Pretty incredible what you can do with it.


Does anyone know any comprehensive article or tutorial on learning voxels?


Is there some "hello world" code somewhere showing how to create such voxels, in the same sense as this article suggests?

Are they just creating "boxes" or "cubes" for each voxel?


Ray tracing is one oft-used method: store voxels in a k-d tree and do a directed search to find the first intersection (the first "filled" cube intersecting the ray).
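A "hello world" in that spirit, using the simpler uniform-grid traversal (Amanatides & Woo) instead of a k-d tree (my sketch; assumes all direction components are nonzero):

    import math

    def raycast(is_filled, origin, direction, max_steps=512):
        """Step cell-by-cell along the ray; return the first filled voxel."""
        cell = [math.floor(c) for c in origin]
        step, t_max, t_delta = [], [], []
        for i in range(3):
            if direction[i] > 0:
                step.append(1)
                t_max.append((cell[i] + 1 - origin[i]) / direction[i])
            else:
                step.append(-1)
                t_max.append((cell[i] - origin[i]) / direction[i])
            t_delta.append(abs(1.0 / direction[i]))  # t per cell crossed
        for _ in range(max_steps):
            if is_filled(*cell):
                return tuple(cell)
            axis = t_max.index(min(t_max))  # nearest grid plane ahead
            cell[axis] += step[axis]
            t_max[axis] += t_delta[axis]
        return None  # nothing hit within max_steps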


Why do people call a textured 3D cube on a grid a voxel (Minecraft), but do not call a textured 2D tile on a grid a pixel?


pixel was already used so they call it texel


A texel is a element of a texture: https://en.wikipedia.org/wiki/Texel_(graphics)


The more different approaches the better. It is always worth trying, even if it takes a while.


So voxels make some things easier than polygons.

What is HARD with voxels? Animation? Shadow rendering?


Without some tricks, the number of calculations increases drastically.


For Blade Runner.


Voxels will be key for further VR immersion.

In a VR game, being able to look at things very close, and seeing smaller and smaller details almost infinitely, will help go a long way toward making things feel more real.

Think taking a break in an FPS, and kneeling down to see the blades of grass, and then closer to see dirt and pebbles, and even closer to maybe seeing ants.


> being able to look at things very close, and seeing smaller and smaller details almost infinitely

That is called tesselation and it's already there in games for more than a decade.


There's so much misunderstanding and wrong terminology surrounding voxels. Voxels are a way to store world data in memory, basically 3D bitmaps. They won't give you infinite detail, just the opposite: they can only give you a single "3D resolution".

To actually get infinite detail, you have to switch the underlying world representation from static data (meshes, voxels) to mathematical (Perlin noise, fractals). Here you make an important trade-off: a mathematical representation is harder to make changes to, because each modification would increase the complexity of the math formula used to describe the world. You can see a lot of deformations in voxel demos, but they are global transformations (scale, shear).

Minecraft uses a hybrid approach. The initial world is described by math (called world generation), but it's then serialized (sampled) as data with a specific voxel size, which you can easily manipulate.

Edit: another, IMO far more impressive approach is Media Molecule's upcoming Dreams software, which uses constructive solid geometry and signed distance fields to describe and manipulate 3D geometry. Here's a detailed tech talk: https://www.youtube.com/watch?v=u9KNtnCZDMI



