The Poor Man's Voxel Engine (et1337.com)
274 points by et1337 on Feb 19, 2015 | 49 comments



Long story short, the garbage collector doesn't like to clean up large objects. I end up writing a custom "allocator" that hands out 3D arrays from a common pool. Later, I realize most of the arrays are 90% empty, so I break each chunk into 10x10x10 "sub-chunks" to further reduce memory pressure.

This episode is one of many which explain my present-day distaste for memory-managed languages in game development.
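
A minimal sketch of the kind of pooled sub-chunk allocator described in the quote above, in C#. This is illustrative only, not the author's actual code: the class name, the pool API, and the flat byte[] layout are assumptions, with the 10x10x10 size taken from the quote.

    using System;
    using System.Collections.Generic;

    // Minimal sketch of a pool that hands out fixed-size sub-chunk buffers so
    // the GC never sees large, short-lived allocations. The 10x10x10 size
    // follows the quote above; the flat byte[] layout and names are illustrative.
    public class SubChunkPool
    {
        public const int Size = 10;                    // 10x10x10 sub-chunk
        private const int Length = Size * Size * Size; // flat storage
        private readonly Stack<byte[]> free = new Stack<byte[]>();

        // Reuse a previously returned buffer if possible, otherwise allocate one.
        public byte[] Rent()
        {
            return free.Count > 0 ? free.Pop() : new byte[Length];
        }

        // Zero the buffer and put it back in the pool instead of dropping it.
        public void Return(byte[] buffer)
        {
            Array.Clear(buffer, 0, buffer.Length);
            free.Push(buffer);
        }

        // Map (x, y, z) inside a sub-chunk to its flat offset.
        public static int Index(int x, int y, int z)
        {
            return x + Size * (y + Size * z);
        }
    }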

If you push any programming environment far enough, you will always end up "writing a buffer pool" or some other such memory-management optimization. So are memory-managed languages all just "bad"? No. That's way too simplistic. Memory-managed languages are just one particular set of tradeoffs. You can think of it as a kind of "technical debt." Is debt always bad? No. Sometimes it's the smart thing to do.

So where should the blame lie? Either on the person who chose the tool, or just chalk it up to the "unforeseeable" and switch to a better tool.

(And this is why building modular, well-factored systems is often the smart thing to do.)

(EDIT: It occurs to me that this story parallels the development of many industries. Things start out "hacky" and cobbled together, but then standard interfaces are established so that components become "modular." Then, when people in that industry learn more about precisely what people need and what works best, modularity is dropped to build a leaner, better-performing widget. Yes, modularity is sometimes the smart move -- but like all things, it is very dependent on context!)


This comment is entirely too wise and reasonable for me to endorse. I need things in black and white. Garbage collection bad! Technical debt bad! Modularity good!


U lucky simple-minded bastard! :)


So, I will point out that with frequent allocations and deallocations of small objects, most managed-memory environments tend to have a bad time. This is forgivable, but there will inevitably be GC pauses to clean up all the resulting garbage and fragmentation.

For game development, this is far more noticeable than in other applications, especially because there tend to be lots of these allocations when doing vector arithmetic for graphics, physics, and gameplay, and also because there is by definition a human being with 100% focus on what the program is doing, watching for any perceived choppiness.

For this reason, one of the first things to do when attempting game programming in these environments is to reach for a pooled math library.
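
In practice that mostly means pre-allocating everything up front and mutating it in place, so the per-frame hot path allocates nothing. A rough sketch of the pattern in C# (the Particle/ParticleSystem types here are made up for illustration):

    // Pre-allocate once, reuse every frame: the Update loop below performs no
    // heap allocations, so it generates no garbage and no GC pauses.
    public struct Vec3
    {
        public float X, Y, Z;
    }

    public class Particle
    {
        public Vec3 Position;
        public Vec3 Velocity;
    }

    public class ParticleSystem
    {
        private readonly Particle[] particles; // allocated once, up front

        public ParticleSystem(int count)
        {
            particles = new Particle[count];
            for (int i = 0; i < count; i++)
                particles[i] = new Particle();
        }

        // Per-frame update mutates the pre-allocated objects in place.
        public void Update(float dt)
        {
            for (int i = 0; i < particles.Length; i++)
            {
                Particle p = particles[i];
                p.Position.X += p.Velocity.X * dt;
                p.Position.Y += p.Velocity.Y * dt;
                p.Position.Z += p.Velocity.Z * dt;
            }
        }
    }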


I think you're substantially agreeing with the author you've quoted, at least as far as the quote. Expressing distaste for memory-managed languages in game development and being able to cite a specific scenario for that is not the same as saying that memory-managed languages are all just bad. I think you, the author and anybody with a modicum of technical maturity can agree that "fast" memory-managed languages are not at an absolute global optimum because they make specific tradeoffs that not everyone will always want to make. Granting this, if we have even one reason to still keep around a memory-managed language (and that's not hard to come up with) then we have a case for keeping multiple languages around and being able to interface among them.

Not everybody realizes this; especially among people who argue about programming languages on web forums, there are still a large number who are married to the idea that one language has to do everything, and that if any language has any weakness, we should leave it alone as "bad" because its flaws mean it is not the Miss Universe of programming languages. It apparently takes some technical maturity to realize that significant flaws are often part of complete packages of tradeoffs that make something a great choice for some contexts even while it is not the best choice for others.


I think you're substantially agreeing with the author you've quoted, at least as far as the quote.

Indeed. The sibling comment to yours is by him.


Is there any cross-platform language that does it like Objective-C with ARC? I find that one a pretty good balance between being able to go nitty-gritty with optimisations while not having to care about memory management details for the standard use cases. IMO it's also much easier to learn than C++. Could Rust be the answer?


C++ std::shared_ptr does the same thing as ARC (reference counting based on scope/lifetime). Rust probably has something similar in its standard library.

FWIW I wouldn't use reference counting in game development if I could avoid it, because it has pretty bad performance characteristics as well (the only things it really has over GC are that it's deterministic, and that, with the exception of Objective-C, you can choose where and where not to use it).


Most games don't do any heap allocations at all during runtime (or at least very, very few), so you shouldn't/don't end up with garbage collector pauses during normal gameplay. If you refcount your main structures, they can be automatically cleaned up/removed when changing levels, loading scenes, or swapping chunks out on the fly.


As an aside, it seems bizarre that you can exhaust the address space simply by allocating large objects over and over. Doesn't the Windows VM system reclaim unmapped pages? Surely the .NET runtime reuses holes in the Large Object Heap? It can't be so idiotic that new allocations always bump the pointer, right?


Really good read, and takes me back to my younger days of feeling around in the dark trying to work out 'the one true way'. A couple of things that went through my mind that I don't see mentioned, or is kinda brushed over:

* You mention you originally rendered everything, then later moved to breaking into chunks?

* How do you decide what's 'in view', do you do any culling?

* You don't mention culling polygons that are pointing away from the camera?

Sorry if I missed those, but my thoughts were:

Use an octree [1] to register the scene blocks, you can then recursively intersect the viewport with the tree to find out what's in-view. There's perhaps even some cunning way you could 'rasterise' the blocks in the viewport using Bresenham's algorithm (tracing the edges of the viewport) [2].

In terms of the polygons facing away from the camera, you can get the dot product of the polygon's normal and the camera vector. If it's negative then it's pointing away, if it's positive it's pointing toward the camera (or vice versa, I can't remember). In a tight loop that can be damn quick, and it saves rendering something that can't be seen.
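
For what it's worth, that test is only a few lines. A sketch assuming XNA's Vector3 and counter-clockwise winding (the class and method names are made up for illustration):

    using Microsoft.Xna.Framework;

    // The dot-product test described above: if the angle between the face
    // normal and the direction to the camera is more than 90 degrees, the
    // face points away and can be skipped.
    public static class BackfaceCull
    {
        public static bool FacesAwayFromCamera(Vector3 v0, Vector3 v1, Vector3 v2, Vector3 cameraPosition)
        {
            // Face normal from two edges of the triangle (CCW winding assumed).
            Vector3 normal = Vector3.Cross(v1 - v0, v2 - v0);

            // Vector from the triangle toward the camera.
            Vector3 toCamera = cameraPosition - v0;

            // Negative dot product => facing away from the camera.
            return Vector3.Dot(normal, toCamera) < 0f;
        }
    }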

Apologies if this is all obvious, and you're already doing it, or if it's all handled automatically. It's been over 10 years since I've had to do any of this stuff, so just dredging it from the depths of my memory ;)

[1] http://en.wikipedia.org/wiki/Octree

[2] http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm


You pretty much nailed it. I store data in a bastardized octree, more or less. I break the world into chunks to facilitate culling. I check the bounding box of each chunk against the view frustum. There are generally under 100 chunks in view at any one time, so the culling doesn't have to get fancy. And yes, the GPU handles back face culling automatically these days. :) I do use Bresenham's for raycasting.
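
A rough sketch of that per-chunk test using XNA's built-in BoundingFrustum/BoundingBox types (the Chunk class and GetVisible helper here are made up for illustration, not Lemma's actual code):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Sketch of culling chunks against the view frustum. Each chunk keeps a
    // world-space bounding box; anything the frustum doesn't touch is skipped.
    public class Chunk
    {
        public BoundingBox Bounds;
    }

    public static class ChunkCuller
    {
        // With ~100 chunks in view at a time, a simple linear pass is plenty.
        public static List<Chunk> GetVisible(IEnumerable<Chunk> chunks, Matrix view, Matrix projection)
        {
            BoundingFrustum frustum = new BoundingFrustum(view * projection);
            List<Chunk> visible = new List<Chunk>();
            foreach (Chunk chunk in chunks)
            {
                if (frustum.Intersects(chunk.Bounds))
                    visible.Add(chunk);
            }
            return visible;
        }
    }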


Good stuff and good luck with the project :)


I read this yesterday and came away extremely impressed with your self-analysis, documentation and the game's progress itself.

Is this entirely a one-man operation?


Glad you enjoyed it. More or less, yes. I've brought on a few contractors at times to help with audio, animation, and a few other things.


You should write a post on how to delegate.


Or get someone else to write one for him :-)


He is not alone. I have followed several projects that try to create Minecraft-like Voxel engines. Thanks for the article.

Minecraft was originally inspired by Infiniminer: http://thesiteformerlyknownas.zachtronicsindustries.com/?p=7...

It was coded in C# (.NET 2) on the XNA 3.0 runtime. The code was not obfuscated, and someone published it. Since Infiniminer was a multiplayer game, hacks, bots, and several knock-off clients destroyed the game community. That everyone had to download .NET 2 and XNA 3 didn't help either. So Minecraft, with its original Java browser applet, took the audience by storm. The rest is history, and Notch just bought the most expensive house in L.A.

The history of DirectX support outside C/C++ is a sad story of deprecated APIs:

* Visual Basic 6 with DirectX 7 support: Direct3D retained mode (COM-based scene graph API), Direct3D immediate mode and DirectDraw (2D)

* Visual Basic 6 with DirectX 8: Direct3D immediate mode (a different API), no DirectDraw (2D)

* C# with managed Direct3D (Microsoft.DirectX.Direct3D), supports only DirectX 9: http://www.riemers.net/eng/Tutorials/DirectX/Csharp/Series1/...

* C# with XNA 1-4 (DreamSpark/MSDNAA license), supports only DirectX 9: Released in December 2006, XNA is intended to push the ease of game programming to the extreme. XNA is a new wrapper around native DirectX. As development on a new version of Managed DirectX has been cancelled, XNA can be thought of as the new version of Managed DirectX. Although the code is not 100% the same, it is VERY similar. No Windows event handling, built-in update and drawing loops, and Xbox 360 compatibility are just some of the reasons why XNA will become the future of DirectX game programming. XNA is built on top of DirectX 9 -- http://www.riemers.net/eng/Tutorials/xnacsharp.php

Windows comes with OpenGL 1.1 (from 1996) and one has to init its context to load a third party OpenGL 4 context: http://www.gamedev.net/page/resources/_/technical/opengl/mov...

Internet Explorer 11 supports WebGL 0.9 (almost no extensions, current would be WebGL 2): http://webglstats.com/


> Windows comes with OpenGL 1.1 (from 1996) and one has to init its context to load a third party OpenGL 4 context

To expand on this: when you create an OpenGL context on Windows, you get a context provided by your vendor's drivers, with OpenGL 1.0-1.1 functions provided by Microsoft's DLL, and all the remaining entry points must be retrieved manually, by supplying your own headers and fetching the function pointers. But if you want anything special like a core context, you have to call a special function provided by one of those pointers... which you can't get without a context.

So you have to create an OpenGL window, then use it to get a function pointer to create the window you actually want, then close the first window. This is why we use GLFW or SDL.


And of course, XNA itself is essentially deprecated now, and isn't capable of targeting Metro either. A number of devs have started switching to MonoGame, but the support there isn't quite complete yet.


> http://webglstats.com

Neat. What happened in the summer of 2013?


It's probably Chrome enabling the software renderer for users with unsupported GPU configurations. Note that the software renderer is extremely slow, and unsupported systems are more likely than not to have a weak CPU that further lowers performance. We're talking performance at a level barely enough to animate the cube at https://get.webgl.org/ . I don't know the value of that software renderer, other than making Chrome look better in statistics.


Wow, so money actually doesn't buy taste...


XNA is the single most important technology that effectively enabled the indie movement. Shame they don't want to upgrade it.

http://visualstudio.uservoice.com/forums/121579-visual-studi...

https://twitter.com/hashtag/becauseofxna


I had a similar experience to the author, using XNA during undergrad to explore building a 3D game engine (though mine was not voxel-based). Much like him I spent a ton of time developing the terrain rendering - generating heightmaps with Perlin noise, stitching them together, texturing, chunking, frustum culling, etc. I fondly remember reading and re-reading papers on things like Chunked LOD and atmospheric scattering, and having those breakthrough moments when, holy shit, the scene actually rendered correctly, and when I move around the terrain seamlessly switches between levels of detail...

Point is, the XNA tools and community were the base that motivated and enabled all of that, and I too am disappointed that it died an awkward and unceremonious death.


It's not dead yet; there are still games being developed on it.


It is a shame. MonoGame seems to be moving to replace it, but it's not quite there yet. I wonder if MS has anything under wraps for Windows 10.


More than Unity?


More than Flash, Steam, or mobile app stores?


Unity3d caught the wave that XNA started.


> The voxel format is simply a 3D array represented as an XML string of ASCII 1s and 0s.

Oh ow. But otherwise, thanks for this wonderful tale of discovery and progress. I'm impressed that you stuck to it despite all such issues. My damn perfectionism would have had me dump everything in a fit of angst at the first sign of trouble.


In the first voxel engine I worked on, I also attempted RLE (for network performance). Doing it in 3 dimensions is not trivial, as you show. I decided for some reason that doing it in a single pass was the "smart" way to do it.
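
For reference, the single-pass version over a flattened array is short. A C# sketch (illustrative only, not either engine's actual format):

    using System.Collections.Generic;

    // Single-pass run-length encoding over a flattened voxel array. Linearizing
    // the 3D array first sidesteps the "RLE in three dimensions" problem.
    public static class VoxelRle
    {
        // Output is a flat list of (value, run length) pairs.
        public static List<KeyValuePair<byte, int>> Encode(byte[] voxels)
        {
            List<KeyValuePair<byte, int>> runs = new List<KeyValuePair<byte, int>>();
            if (voxels.Length == 0)
                return runs;

            byte current = voxels[0];
            int length = 1;
            for (int i = 1; i < voxels.Length; i++)
            {
                if (voxels[i] == current)
                {
                    length++;
                    continue;
                }
                runs.Add(new KeyValuePair<byte, int>(current, length));
                current = voxels[i];
                length = 1;
            }
            runs.Add(new KeyValuePair<byte, int>(current, length));
            return runs;
        }
    }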

The important thing is that you're having fun and learning. Your first attempt(s) will usually not serve much more purpose than that.

Anyhow, Lemma looks promising (and pretty amazing for your first game to hit shelves) - I was aware of the game, but it's interesting to see how it came about. :)


FTA: "At this point, I'm saving and loading levels via .NET's XML serialization. Apparently XML is still a good idea in 2010. The voxel format is simply a 3D array represented as an XML string of ASCII 1s and 0s. Every time I load a level, I have to re-optimize the entire scene. I solve this by storing the boxes themselves in the level data as a base64 encoded int array. Much better."

Sarcasm...please tell me this is Sarcasm!


I think this is rather a stream-of-consciousness style blog of the progress he made -- note that later on he said, "I eventually got a job in industry, and learned that everything I thought I knew was wrong".


Some people learn things by being told them, others learn things by intuition or deduction, and most people learn by making mistakes.


Brilliantly written. Loved all the shockingly bad decisions. :D


Looks like an advertisement for C/C++.

I agree that some language features are nice to have, but you won't convince me that C/C++ is bad because it's old, or because it's too down-to-earth. In short: you can't replace simple tools like a hammer and a screwdriver. You don't always need them, but they're not replaceable, so I'll prefer learning a down-to-earth language and accepting the consequences of that. Having to manage memory is the bread and butter of any programmer. I hate having to work around the gimmicks of new languages and APIs just because the language pretends it's doing something magical; there's always something that will come back and bite you.

And for the Minecraft argument: Minecraft is a huge PITA when it comes to memory, and I've seen servers just burn because of memory management. Notch buying a large house won't convince me to use Java or any other managed-memory language. The argument "that guy got rich, so it's an okay language" is not okay with me...

I'm very conservative when it comes to technology and programming languages. I prefer using old tools, because if they're old, that means they've lasted.


"simple hammer and screwdriver"

To complete the analogy, I would say both also have a blade attached to the handle, so you need to use them v-e-e-ry carefully and only if truly needed.


Every programmer needs to learn how to do his job. If a programmer doesn't like it, he can use something else, but he might not achieve the same result. What kind of risk are you talking about? You won't kill anybody. If there's a bug, just fix it. C and KISS are just not taught enough.

I find it pretty weird to see people refusing to use simple tools because they're "too difficult". That's why I think Linus Torvalds is right about many things. I just can't like the many diverse and abstract programming concepts that pop up now and then.


People don't dislike learning C or C++ because the languages are supposedly hard to learn. They dislike them because the tools used to build and distribute their work are archaic.


I read this, all these tribulations over years, and then just chuckle as I remember all the people who love to call Notch incompetent.


I wrote a voxel renderer in 2008 and found some good resources from Ken Silverman (of the Build engine used in Duke Nukem 3D etc.). There used to be a forum here:

http://www.jonof.id.au/forum/

I don't know if any of that stuff got mirrored but digging around for "Voxlap" shows some related results.


> The CLR's floating point performance is absolutely abysmal on Xbox 360

Can anyone speculate as to what went wrong here? I thought the Xbox 360 would have very good FP performance.


XNA's Vector3 doesn't use SIMD instructions.

Further, every a += b (where these are vectors) calls a function, which in turn calls a constructor.

You can tell because you get significant speedups by inlining by hand.
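
For concreteness, here are the three styles side by side; a rough sketch assuming XNA's Microsoft.Xna.Framework.Vector3 (the wrapper class and method names below are illustrative):

    using Microsoft.Xna.Framework;

    // Illustration of the overhead described above. On the Xbox 360's compact
    // framework, the operator form copies both structs into a call that ends in
    // a Vector3 constructor; the ref/out overload and hand-inlining avoid that.
    public static class VectorAddStyles
    {
        // 1. Operator form: convenient, but each call copies a and b and
        //    constructs a brand-new Vector3 for the result.
        public static Vector3 WithOperator(Vector3 a, Vector3 b)
        {
            return a + b;
        }

        // 2. XNA's ref/out overload: operands are passed by reference,
        //    so no struct copies are made at the call site.
        public static void WithRefOverload(ref Vector3 a, ref Vector3 b, out Vector3 sum)
        {
            Vector3.Add(ref a, ref b, out sum);
        }

        // 3. Inlined by hand, as suggested above: no call at all,
        //    just three float additions.
        public static void InlinedByHand(ref Vector3 a, ref Vector3 b)
        {
            a.X += b.X;
            a.Y += b.Y;
            a.Z += b.Z;
        }
    }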


I relate. I so relate. We need some kind of support group.


Is Lemma going to target other platforms than Windows?


I'd love to, but no plans for it currently. I did contribute some code to MonoGame[1] a while back which opens some exciting cross-platform possibilities. It requires a lot more development before it will support Lemma though. So, "maybe". :)

[1] http://www.monogame.net/


Interesting article.

I checked your repo; maybe use NuGet for packages such as Newtonsoft.Json.


Great writeup, thanks for sharing the progression.



