I'll say the same thing now as I said last time these guys released a video: I'll believe it when I see them make a single blade of grass move, or when they place a single dynamic light source and cast a single dynamic shadow. Until then, this technology is awesome, but more or less useless.



Even if it's absolutely impossible to animate this technology, I don't think it would be "useless". I don't see why it wouldn't be possible to have the static elements of a level (buildings, tree trunks, ground, debris, etc.) rendered using this tech, with polygons used for everything else. It used to be this way in the bad old days (Doom etc.): polygons for level structure, sprites for animation. This would at least free up the "polygon budget" to improve the features which couldn't be voxel-based.

If anything you could create a pretty interesting art style with hyper-realistic backgrounds and cel-shaded characters, or similar. Even as a replacement for pre-rendered background scenery or out-of-level elements it would be useful.

And (thinking of the Game of Thrones titles story) whatever mojo they've got must surely be able to be applied to other industries, eg. CGI - if Weta or Pixar could improve the poly count of their static backdrops without just buying more rendering farms, that's gotta count for something.


"polygons for everything else" would mean you integrate two completely different rendering methods in one pipeline. I won't say that's impossible, but it sure sounds non-trivial.


Would this not be perfect for films where you are rendering per frame?


Yes, ray tracing is basically the technique that Hollywood uses today.


What is somewhat disappointing is that they don't say whether they have managed to solve these issues or not, since the issues were clearly raised a year ago. Another major issue is RAM usage, as point-cloud data will require a lot of memory.

Their claim of being 100,000 times better than current technology is also "frustrating", as it is repeatedly mentioned as if you had, for some strange reason, forgotten it in the last 30 seconds. It is also a lie if they cannot use this in a game: I suspect it's not impossible to increase the polygon count 10-100 times, if not more, when there are no animations or dynamic light sources around.


It seems they're getting around the RAM issue by procedurally generating their objects. I'm cautiously optimistic, but I want to see it in something approaching a consumer game before I'll believe it. Carmack seems to think the next-gen consoles might be able to handle it, so if they can push out their SDK within a year, it might just power the next Crysis. Assuming, that is, that the technology is actually useful.


Dynamic shadows work in exactly the same way as they do for standard triangle rasterization.

Each light has a shadow map (which can be thought of as "the depth of the scene, from the point of view of the light").

During the final rasterization pass, the shadow map is sampled. If the sampled depth is less than or equal to the current fragment's depth, then the fragment is in light; otherwise it's in shadow.

So, nothing has fundamentally changed here. When a particle (voxel) is rasterized, it outputs a depth, just like a triangle outputs a depth.

tl;dr: Dynamic shadows work fine.
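The depth test described above is simple enough to sketch on the CPU. This is a toy illustration only (a hypothetical orthographic light looking down -z, with made-up scene data, resolution, and bias), not anything Euclideon has shown:

```python
# Shadow-map lookup in miniature: a point is lit iff nothing closer to
# the light occupies its texel in the light's depth map.
# Positions are (x, y, z); the light looks down -z, orthographically.

def build_shadow_map(points, resolution=4, extent=2.0):
    """Record the depth of the closest point per texel, from the light's view."""
    depth = [[float("inf")] * resolution for _ in range(resolution)]
    for x, y, z in points:
        i = int((x + extent) / (2 * extent) * (resolution - 1))
        j = int((y + extent) / (2 * extent) * (resolution - 1))
        depth[i][j] = min(depth[i][j], -z)  # distance along the light's axis
    return depth

def in_shadow(point, depth, resolution=4, extent=2.0, bias=1e-3):
    """The fragment is shadowed iff the map holds something nearer the light."""
    x, y, z = point
    i = int((x + extent) / (2 * extent) * (resolution - 1))
    j = int((y + extent) / (2 * extent) * (resolution - 1))
    return -z > depth[i][j] + bias

# An occluder at z=-1 sits between the light and the point at z=-2.
scene = [(0.0, 0.0, -1.0), (0.0, 0.0, -2.0)]
smap = build_shadow_map(scene)
print(in_shadow((0.0, 0.0, -2.0), smap))  # True: occluded
print(in_shadow((0.0, 0.0, -1.0), smap))  # False: closest to the light
```

Nothing here cares whether the depth came from a triangle or a voxel, which is the point: whatever rasterizes and outputs depth can feed a shadow map.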


"tl;dr: Dynamic shadows work fine."

So... where were they in the video?

There's a difference between "some voxel engines can have dynamic lighting" and "dynamic lighting works with this particular engine".

It's not like this is some obscure or useless feature that nobody has ever heard of. In fact for all the "detail" one can't help but notice the number of other things missing from the video.


I didn't mean to imply Euclideon had dynamic shadows.

I meant to imply that Euclideon are lazy and haven't implemented dynamic shadows yet.

There's no fundamental reason why dynamic shadows wouldn't work in voxel engines. But whether you believe me or not is of course up to you.


Near the end they claim to have added more detailed shadows to their engine just after making this demo. I'm not sure if they meant 'dynamic shadows' by that.


I think they meant diffuse shadows; the small shadow demo they gave didn't appear to be dynamic.


The "improved shadow" demo they showed was clearly ambient occlusion. It's impossible to say whether their particular implementation supports dynamic shadows but as a rasterization technique many of the same techniques used in games ought to work fine here (pre-baked lightmaps, various kinds of shadow maps, screen-space ambient occlusion, and even the fanciest new real-time GI stuff coming from SIGGRAPH).


Obviously not every 3D rendering technique in existence would be included in a short demo video.


For something so obviously aimed at gaming (in the video, other games have people running around while these guys are hovering around a static environment), I expect something that at least looks like a game; hell, I'd go for a simple animation such as the top comment's blade of grass.


From what I understand, the voxels more or less dynamically adjust their resolution according to their distance from the modelview origin (where the player is). If the engine generated dynamic shadow maps per light source, wouldn't the voxels need to be high resolution with respect to each light source, so that the shadow map isn't a bunch of huge blocks? I imagine that alone would be a pretty big performance hit.


The dynamic adjustment happens while traversing the voxel representation to render the view. This kind of per-pixel shadow map requires re-rendering the scene for each light source, but in more or less the exact same way that the player view is rendered, plus some work reconciling that information. So it would be a big performance hit, but not obviously any worse than normal rendering. Although didn't someone say it was only running at 20fps as it was?
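The parent's worry can be made concrete: if the octree refinement level is picked from distance to the viewer, a shadow pass would pick it from distance to the light instead, and a far-away light gets coarse (blocky) voxels. A toy sketch, with an entirely made-up refinement formula and constants:

```python
import math

def lod_level(distance, voxel_size=0.01, pixel_angle=0.001):
    """Hypothetical octree refinement depth for a viewpoint at `distance`.

    Idea: refine until a leaf voxel's projected angular size drops to
    roughly one pixel's footprint. All constants are illustrative.
    """
    projected = voxel_size / max(distance, voxel_size)
    return max(0, int(math.log2(projected / pixel_angle)))

# The nearby camera wants fine voxels; a distant light source rendering
# its shadow map would settle for coarse ones -- hence blocky shadows
# unless you pay to refine for the light as well.
print(lod_level(distance=0.5))    # finer: 4
print(lod_level(distance=50.0))   # coarser: 0
```

So the shadow pass is the same traversal with a different "camera"; the cost question is just how finely you refine for each light.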


20 fps in software. They're still finishing hardware rendering.


Because it's probably incredibly difficult to do with the added memory and bandwidth restrictions.


I'd also like to see them actually make a scene with different things in it. Having millions of objects in your scene is not that hard if they are all copies of the same object. The old demonstration showed a bunch of repeated copies of a single object. This one showed a lot of copies as well, although less obviously.


That's like saying Megatexture isn't impressive since most of the texture detail is copied over and over.


It is hard in that you still have to draw every poly in every copy of the object. Instancing allows you to reuse some per-vertex calculations but it doesn't help at all with fragments, which is where most of the power is typically used.
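A back-of-the-envelope cost model makes the point; the unit costs and object sizes below are made up for illustration:

```python
def frame_cost(copies, verts_per_obj, frags_per_obj,
               vert_cost=1.0, frag_cost=1.0):
    """Toy per-frame GPU cost model (arbitrary units).

    Instancing lets all copies share one vertex buffer, but every visible
    copy is still transformed and rasterized: both terms scale linearly
    with the number of copies, and the fragment term usually dominates.
    """
    vertex_work = copies * verts_per_obj * vert_cost
    fragment_work = copies * frags_per_obj * frag_cost
    return vertex_work + fragment_work

# 1,000 elephants at 10k verts and ~50k covered pixels each:
print(frame_cost(1000, 10_000, 50_000))  # 60000000.0, fragment term dominates
```

Which is why "a million copies of one model" is cheap on memory but not on fill rate.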


Yeah, it "only" solves the memory use problem.


Would you say the same thing about polygon models? That is: if you had a scene with a million[0] copies of the same polygon-based elephant model, would it run as smoothly as this demo?

[0] not to be taken literally


It's all fun and games until you introduce animation... oh wait, that is fun and games... I just confused myself.


Useless huh? Static geometry and lighting was the state of the art for a while, starting with Quake. It certainly puts some constraints on game mechanics but it's significantly better than useless.


Quake actually had "dynamic lightmaps" - that is flickering lights and such by alternating between two (or more?) lightmaps for the same geometry. It also had moving level geometry: elevators and all kinds of traps.

Granted, both of these features are somewhat hacked into the static BSP level structure, but this only shows that it's "necessary" to have dynamic lighting and moving stuff to offer a good experience.


The characters weren't static though. That's what he's alluding to.


Yeah, and the characters had to use a completely separate renderer with much simpler lighting. Likewise, the early voxel engines will probably use polys for dynamic stuff. Someone else mentioned an engine called Atomontage that does exactly that.

Sometimes you have to take a step backward before you leap forward.


Cyril Crassin's stuff is also interesting and it's here, now, running dynamically on GPUs. See: http://www.youtube.com/watch?v=fAsg_xNzhcQ

See his papers page: http://artis.imag.fr/Membres/Cyril.Crassin/

I am curious what methods Euclideon are using, however.



