
Carmack has been thinking about functional programming for a while and posted his thoughts on applicable lessons for C++ a year ago:

http://www.altdevblogaday.com/2012/04/26/functional-programm...

He's a great developer and has always pushed boundaries. I look forward to his postmortem after this project is finished.




Thanks for the excellent link. The comments on the article are also very nice. Here is one from "NathanM" (nathan-c-meyers, perhaps?):

And yes, I'd love it if the compiler (or other static code analysis) could detect how pure various bits of code are, and give reports. For far too long, compiler authors have treated compilers as a big opaque box that end users (developers) submit code to, and the compiler hands out code as if from on high. Smart developers want to have a 2-way communication with their compiler, learning about all sorts of things -- functional purity, headers over-included, which functions it decided to inline or not (especially in LTCG), etc. It's not the 1960s anymore -- developers aren't bringing shoeboxes of punchcards of source code to submit for offline processing. Let's get closer to a coffee shop where we can talk in realtime.


I think things are trending towards being more interactive.

In the immediate future, GHC is going to become more interactive by adding "type holes". Essentially, you can just leave out parts of your program and the compiler will tell you what type needs to go there. So instead of writing your program and checking if it typechecks, the type system can actually help you formulate the code in the first place!
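A quick sketch of what that looks like (the hole syntax is GHC's; the example function itself is made up):

```haskell
-- With typed holes, leaving `_` in a definition makes GHC report the
-- type needed at that position. For example, writing
--     f :: [Int] -> Int
--     f xs = foldr _ 0 xs
-- makes the compiler answer with something like:
--     Found hole `_' with type: Int -> Int -> Int
-- Filling the hole in gives a working definition:
f :: [Int] -> Int
f xs = foldr (+) 0 xs

main :: IO ()
main = print (f [1, 2, 3])  -- prints 6
```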

Further afield, a bunch of people at the lab I'm working at are working on interactive systems that use a solver running in the background to solve problems for the programmers. These can be used to do all sorts of things from finding bugs to actually generating new code. Being interactive lets the solver suggest things without being 100% certain--the programmer can always supply more information. This also makes the solvers easier to scale because if it's not terminating quickly, it can just ask for more guidance from the programmer.

I think the general trend towards more interactive development is pretty exciting.


There's already a very primitive version of "type holes" available, namely, undefined. I realize it's not as advanced as what's to come, but I find myself using it somewhat frequently.

(For non- or fledgling Haskellers, "undefined" has any type, so if you define a function that plugs into your code and make its return value "undefined", then you can look at the type signature of the function and learn what the compiler proved about the type of that function. Pretty handy!)
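A minimal sketch of that workflow (the function and its name are hypothetical):

```haskell
-- The trick: finish only part of a function and return `undefined` for
-- the rest. The module still typechecks, so GHCi's :t tells you what
-- the compiler proved about the parts you did write.
classify :: Int -> String   -- try omitting this and asking GHCi :t classify
classify n
  | n > 0     = "positive"
  | otherwise = undefined   -- branch not written yet; still compiles

main :: IO ()
main = putStrLn (classify 3)  -- prints "positive"
```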


Type holes themselves are already included in the HEAD of the GHC trunk, and will be included with the next release I believe. Undefined is useful, but you can't get the types of a specific subexpression easily -- with type holes, you can.


Slight upgrade: turn on the -XImplicitParams flag and then use ?nameGoesHere instead of undefined. Detailed type information will leak out in the errors or, if it can infer all of the types, the type of the top level expression that contains your ?implicit.
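A sketch of that trick (the name `?hole` is arbitrary). Left unbound, GHC's error reports the inferred type, e.g. `Unbound implicit parameter (?hole :: Int -> Int)`; binding it afterwards turns the sketch into a runnable program:

```haskell
{-# LANGUAGE ImplicitParams #-}

-- An implicit parameter acts as a makeshift hole: while `?hole` is
-- unbound, the compile error leaks its inferred type. Once we know
-- what it must be, we bind it and the code runs:
result :: Int
result = let ?hole = (+ 1) in (map ?hole [1, 2, 3]) !! 2

main :: IO ()
main = print result  -- prints 4
```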


Whose postmortem, the project's or Carmack's? With Haskell, you never know...


Hey, Haskell never killed anyone that we know of. That would be an observed mutation of state.

However, it may be (if you'll pardon the expression) garbage collecting people that we don't know about, or cloning them in such a way that their multiple representations are indistinguishable.


Maybe if all of the objects just referenced a read only version of the world state, and we copied over the updated version at the end of the frame… Hey, wait a minute...

This sounds like a game development reference that I'm missing. Can anyone explain?


He's referring to the utility of immutable data for solving certain parallelism issues - rather than attempt to coordinate all the code that uses a data structure, you can double-buffer it and queue up the write events for the "next frame" instance.

This is a hugely successful pattern throughout a number of aspects of gaming, graphics being one of the most classic examples. Double-buffered graphics don't suffer as much from tearing and other display artifacts.


> This is a hugely successful pattern throughout a number of aspects of gaming, graphics being one of the most classic examples.

Not really, no. Immutability comes at a performance cost compared to mutability. The gap is shrinking between the two, but it's still wide enough that using pure immutable structures for frame buffers, shaders and other graphical concepts is simply not an option for writing games.

Haskell is interesting in the sense that it doesn't prevent you from using mutable structures (e.g. Lenses, Writer) but it encodes this information in the type system. I'm really curious to read the conclusions that Carmack will draw from his experience but I wouldn't be surprised to read that at the very low levels, mutable structures are just unavoidable for high performance games.

Also, mutable structures accessed by concurrent threads are a much less difficult problem than most people claim, and it's often much easier to reason about locks and semaphores than about immutable, lazily initialized structures.


I don't know where to start.

> using pure immutable structures for frame buffers, shaders and other graphical concepts is simply not an option to write games.

Seeing as people have written games in Haskell, this is clearly not true.

> Haskell is interesting in the sense that it doesn't prevent you from using mutable structures (e.g. Lenses, Writer)

Lenses and Writer both only use immutable data. It is possible to use actual mutable data in the ST and IO monads.

> but it encodes this information in the type system.

This is true of IO, but not of ST. With ST, runST :: (forall s. ST s a) -> a hides the effects.
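A small sketch of that encapsulation, using STRef (the function is illustrative):

```haskell
import Control.Monad.ST
import Data.STRef

-- Genuinely mutable state on the inside, yet runST gives sumST a pure
-- type: the `forall s` in runST's signature prevents any STRef from
-- escaping the computation.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref

main :: IO ()
main = print (sumST [1 .. 10])  -- prints 55
```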

> it's often much easier to reason about locks and semaphores than about immutable lazily initialized structures.

I don't know what you mean by this. In terms of functional correctness, immutable data structures, lazy or otherwise, are much easier to reason about. If you are talking about resource usage, sure, it's a little harder to reason about lazy data structures than strict ones, but give me a space leak over a race condition to track down any day.


It’ll be interesting to see how the performance issues play out, no? In order to get reliable memory behaviour, you still have to go through a certain amount of voodoo to appease the gods that govern the interplay of laziness and GC. There are comparatively few people who really know how to optimise Haskell code from top to bottom—in part because there is such a distance between top and bottom.


"It’ll be interesting to see how the performance issues play out, no?"

Not really. There's no question whatsoever that GHC can run a fine Wolf3D on a fraction of a modern hardware setup. You could do it in pure Python with no NumPy. There are tools to help with the laziness issues, and a 3D rendering loop fits them perfectly.


Sure, Wolf3D is over two decades old by now.

But the performance limits of immutable structures for simulation and graphics are certainly interesting to me.


Absolutely, but it is certainly possible.


>It’ll be interesting to see how the performance issues play out, no?

Not really; 3D rendering in Haskell via OpenGL is not new or interesting at this point. Frag is 8 years old, for example: http://www.haskell.org/haskellwiki/Frag


Modern OpenGL exploits immutable data for parallelism all over the place. It also lets you (and expects you to!) upload model data (vertices, colors, texture coords, etc.) to the GPU, so you only need to re-upload things that have changed.

You can even stream textures asynchronously using PBOs (pixel buffer objects), and use dual PBOs like double buffers (or using copy-on-write techniques to only re-upload dirty rectangles...)


He's alluding to frame buffers.


Do a lot of other objects read the "front" frame buffer besides the video output?


I think the whole point of a "front" framebuffer is that its only purpose is to be written to the screen. You're only ever writing to the back buffer, which is then flipped, at which point you're writing to a new buffer and it's the next frame.

[edit: If I'm wrong... ouch. But it's been a while.]


@obviouslygreen That is why "all of the objects just referenced a read only version of the world state" doesn't make sense to me as a frame-buffer analogy...


It sounds like he's talking about double-buffering "model" data - like an array of all actors and their positions. You can't have one thread reading the data while another writes to it, but you can have the reading thread work on an "old" copy of the data while the writing thread modifies the live data.

Games often want physics/model threads run with a consistent timestep, but have the rendering thread run as fast as possible.
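A toy sketch of that pattern in Haskell (the World type and its fields are hypothetical):

```haskell
import Data.IORef

-- Hypothetical world state; readers only ever see immutable snapshots.
data World = World { frameNum :: Int, actorX :: Double }
  deriving (Show, Eq)

-- Pure per-frame update: old snapshot in, next frame's world out.
step :: World -> World
step w = w { frameNum = frameNum w + 1, actorX = actorX w + 1.5 }

main :: IO ()
main = do
  front <- newIORef (World 0 0)  -- the published, read-only snapshot
  snapshot <- readIORef front    -- renderers/AI read this freely
  let next = step snapshot       -- simulate against the old state
  writeIORef front next          -- swap in the new frame at frame end
  print =<< readIORef front
```

Because `step` only sees an immutable snapshot, a rendering thread could keep reading the old `World` while the next one is being built, with no locking on the data itself.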


Then I guess I was just ignorant. For me he was always one of the C icons and diehards, similar to Linus Torvalds. He's incredibly conservative about games and doesn't value creative game mechanics.

But it's nice to hear; hopefully John Carmack and id will make at least one more great game in the future.


From what I can see, id have been an engine company since the days of Quake; it just so happens that they occasionally release a game to show off the new engine they're peddling.

That doesn't make Carmack any less of a great developer in how he pushes the limits of current hardware.


C icons?

His games have been C++ for quite some time now.


If we count major engine releases, then only the last two major engines (id Tech 4 and 5) from id Software have been C++, the previous three were all C (id Tech 2 and 3, as well as the Doom and Wolfenstein engines).

Another factor is that the C code that comes out of id Software is just damn good. Go ahead and read the Quake 3 Arena source code: it's one of the better reads out there, as far as C is concerned. The Doom 3 source code is C++, but it's kind of weird C++ and I would be wary of learning C++ from it. Carmack has spoken about how his C style is just so much more mature than his C++ style, and this is exacerbated in these examples because Q3A is the last game in C, but Doom 3 is the first game in C++.

Yes, he's an icon in the C world.


> If we count major engine releases, then only the last two major engines (id Tech 4 and 5) from id Software have been C++, the previous three were all C (id Tech 2 and 3, as well as the Doom and Wolfenstein engines).

This is what I meant by "for quite some time": id Tech 4 was released in 2004!


To be honest, it was more "C with classes" style of C++.


I meant C as in "C family of languages". C++ is still closer to C than to Java. I admit mentioning him alongside Linus was misleading. Haskell is a paradigm shift.


>I admit mentioning him alongside Linus was misleading

And quite insulting.


to whom?


Mr. Carmack. Linus is famous, but not a particularly good programmer, and doesn't use C over C++ for technical reasons, but for "ability to offend people" reasons.


I'm a big big supporter of this. I had one thought [1] I wanted to add to the excellent linked article.

I'm not sure if my thought is obvious or insightful, but I like it. I try to write all my new general/reusable code as pure functions whenever possible (which is almost always).

[1] https://twitter.com/shurcooL/status/327249579189870592


One of his best posts, IMHO. Worth reading and reading again for any object-oriented developer.


"Large scale software development is unfortunately statistical."

Too true.



