Hacker News
3D Game Shaders for Beginners (github.com/lettier)
391 points by lettier on May 12, 2019 | 44 comments



That's a very cool intro to a variety of shader techniques, especially post-processing effects.

What's even more wild is that modern engines include all of this stuff out of the box; you just have to tick a few checkboxes to enable them in your game. :) For example, check out these post-processing filters built into Unity and Unreal - a ton of person-hours went into these:

https://docs.unity3d.com/Manual/PostProcessingOverview.html

https://docs.unrealengine.com/en-us/Engine/Rendering/PostPro...


I agree with you; however, in almost all the commercial games I've worked on, we do write our own shaders. For simple shaders, artists directly use a graphical shader-authoring tool included in Unity without writing any code; for complex ones we write our own code.


Oh sure, and especially for custom materials with specific lighting or masking properties, or for tweaking instanced meshes, etc. The standard materials only get you so far.

But post FX like bloom or vignette? I'm okay using a stock shader for that; it's not worth spending the time to build one from scratch. :)


Do you write them completely from scratch, or do you write custom nodes for whatever node-based shader editor you use? I am a novice when it comes to shaders, but I wonder about the performance difference between hand-written shaders and ones generated by a node-based editor.


Nowadays it's more valuable to create custom nodes to enrich the node-based editor's library. Sometimes we write shaders from scratch if we need to implement some original lighting, materials, etc.


That's great, but there are lots of reasons you may want to roll your own engine.


Especially if it's a 'smaller' game with mechanics that are unique. I just wonder if there are any good resources for the basics that a game engine needs.


The advantage of writing your own "game engine" is that it would only have the exact set of features you need for your game, and it would make the things you need to achieve as fast and as easy as possible.

This means traditional advice for "the basics of a good game engine" doesn't really apply, because if you're just making a bog-standard game engine, you're wasting your time and making a crappier version of Unity or Unreal.

https://geometrian.com/programming/tutorials/write-games-not...


Agreed, but I guess I'm saying basic architecture stuff like:

- "How do I render things?"

- "How do I integrate sound?"

- "How should I handle events?"


THREE.js[0] is high enough level that it solves some basic problems (hierarchies of objects, basic materials, basic lighting, rendering pipeline) and low enough level that you still have high granularity control (you can roll your own shaders, lighting, render order, etc).

I don't know if anything like THREE exists for OpenGL, though.

The upshot of THREE is that it's super easy to develop with (all you need is a browser). The downside is that it slows down very quickly for even simple things.

[0] https://threejs.org/


I've been looking for something like THREE.js for C++ for years now, to the point where I actually started developing my own. But due to my inexperience with large codebases, I have a hard time building something like it that looks good and works well.

(I am a security analyst, not an engineer, but I'm transitioning to engineering roles now, so this might change in future, who knows :) )


For a general reference, I would look at Jason Gregory's "Game Engine Architecture".

For a deep dive into all game engine topics I would recommend the Handmade Hero dev streams: https://handmadehero.org

(Note that Handmade is not a useful model for an actual game production, it's more like a platform for investigating cutting-edge engine topics, so you can pick and choose parts to get what you need. It's been going for years and remains mostly a tech demo.)


Am I just out of touch with what the kids today are doing, or is this formatting really as bonkers as it seems?

  int main
    ( int argc
    , char *argv[]
    ) {
  
    // ...
  
    LColor backgroundColor =
      LColor
        ( generateLightColorPart(207, 1)
        , generateLightColorPart(154, 1)
        , generateLightColorPart(108, 1)
        , 1
        );
  
    // ...
  
  }
I can (sort of) see that this makes it easy to comment out parameters - except for the first one...


In languages that don't allow trailing commas, I prefer this comma-first style. It makes it easy to avoid missing or extra commas, which shortens my feedback loop a bit. But I usually don't use it, because a lot of people react the way you do, and I want you to be able to read and enjoy my code despite your irrational and bizarrely extreme prejudice.


I had never really thought about commas that way, but I can totally get why someone would want to use this style now. Thank you for taking the time to explain it!


I've noticed this formatting in SQL scripts before too - the biggest benefit I see is that source control diffs show only one line changed, rather than two, when adding/removing lines.


Unless you're inserting as the first line ...


The benefit (compared to more classic formatting) only occurs if you're inserting at the end of the argument list.

If adding stuff to the end of a class or of an argument list is your most frequent case, you might have a software design problem.


Is that a benefit?


Yes. When you git blame a line, you won't get a commit that just added or removed a comma on that line.


Some languages/DSLs allow a trailing comma at the end of the list too, giving the same result without the slightly strange comma at the start of the line. Adding additional lines is simple then, and doesn't cause the line above to be pulled into any git action.


Maybe git blame should get better.


What’s bonkers specifically? Is it the leading commas, the alignment, one identifier per line, the capitalization, the long names, same-line braces, something else?

FWIW, everything seen here has options in Clang Format: https://clang.llvm.org/docs/ClangFormatStyleOptions.html

Personally, I like what this looks like. Leading commas are nice in situations where the first argument is fixed, but the remaining arguments move a lot while I’m tweaking & refactoring. I also like leading operators in long multi line math expressions, it’s easier to see at a glance how everything is combined when the operators are in front. I also like seeing arguments aligned, it’s easier to spot mistakes, easier to move things around, easier to use column selection, etc.
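For example (an illustrative snippet with made-up names, not code from the linked tutorial), leading operators put the combining operator at the start of each continuation line:

```cpp
#include <cassert>
#include <cmath>

// Leading operators: each continuation line begins with the operator
// that combines it with the running expression, so you can scan the
// left margin to see how the terms fit together.
double weightedScore(double a, double b, double c, double penalty) {
    return a * 0.5
         + b * 0.3
         + c * 0.2
         - penalty;
}
```

Commenting out or reordering the middle terms never touches a neighboring line, which is the same property the comma-first style gives argument lists.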

I rarely actually use the formatting I like because at work my team uses clang format so we’re standardized. Despite whatever formatting opinions I like, I love not arguing about formatting and having the formatter dictate and handle it for everyone.


I write this way to make it easier to see when I’ve missed a comma, and also for editor macros.

I’ll often reformat when I’m done so it fits our code review style.

Also, I’m usually working in SystemVerilog, where 5-10 parameters are common and 50 or more isn’t rare.


If you do a lot of ad hoc SQL with an editor that supports multiline editing, this style allows for pasting in big lists of text and quickly formatting them; trailing commas and ragged-right data suck to edit in bulk.


I see that occasionally in my work's codebase. I don't mind it so much. Sometimes I'll see

    if (predicate
        && nextPredicate
        && lastPredicate)
    {
        DoStuff();
    }
Which I don't mind but also don't write. However, I do prefer ternary expressions like

    int someNum = somePred
        ? firstVal + 42
        : 1000;


It's a hacky workaround for languages that don't support trailing commas.


And, more widely, for languages that burden the programmer with such extraneous grammar in the first place.


Reminds me of how Elm lists are formatted; I have a hunch it may be something more common in Haskell, but that's pretty much a shot in the dark.


Yeah, this is Haskell formatting.


The part that has me twitching are the magic numbers in the calls to generateLightColorPart :P


A great way to play around with shaders (without booting up your editor, compiling, etc.) is Shadertoy (https://www.shadertoy.com) or Shadron (http://www.arteryengine.com/shadron/).


This is really nice! The same techniques can be used in WebGL 2.0, which is at ES 3.1 on canary versions of WebKit. I had a hell of a time using the experimental compute shader API to implement a cellular automata framework. [1]

The big issue with using WebGL at the current moment to do large-scale sims is that fragment shaders don't have a whole-workload barrier capability - so you can't do interdependent work.

Compute shaders have full workload execution and memory barriers.

Just something to be aware of if you are trying to jury-rig fragment shaders to do more advanced things like HDR via min/max over all colors... there is no way to do these kinds of aggregates without calling the shader twice, unfortunately.

[1] https://github.com/churchofthought/Grautamaton
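To illustrate the multi-pass workaround, here's a CPU-side sketch of a ping-pong max reduction, where each loop iteration stands in for one fragment-shader pass over a half-sized render target (illustrative only, not code from the linked framework):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Each "pass" halves the buffer by taking the max of adjacent pairs --
// analogous to rendering into a half-sized target with a shader that
// samples two texels and outputs max(). Assumes a non-empty input.
float reduceMax(std::vector<float> lum) {
    while (lum.size() > 1) {
        std::vector<float> next((lum.size() + 1) / 2);
        for (size_t i = 0; i < next.size(); ++i) {
            float a = lum[2 * i];
            float b = (2 * i + 1 < lum.size()) ? lum[2 * i + 1] : a;
            next[i] = std::max(a, b);
        }
        lum = std::move(next);  // "ping-pong" to the next target
    }
    return lum[0];
}
```

On a GPU without compute barriers, each pass must finish (i.e., the render target must be resolved) before the next one can read it, which is exactly the multi-invocation cost the comment describes.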


I have been wondering why WebGL isn't getting updated to ES 3.1 (and 3.2). Good to see at least 3.1 parity is coming: https://www.khronos.org/registry/webgl/specs/latest/2.0-comp...

(I didn't know it was on the horizon until I saw your link). I wonder if getting to ES 3.2 is on the horizon as well? That would bring geometry and tessellation shader support.


I refer to the Mali Performance Guide for wisdom on geometry shaders: https://static.docs.arm.com/100019/0100/arm_mali_application...

> Using geometry shading will generally lead to worse performance, high memory bandwidth, and increased system power consumption. Assert if geometry shaders are used

Geometry shaders are not what you want. They should not have been included in GL ES, and I will fight their inclusion into WebGL 2.1.

Tessellation shaders are better, but still kinda ultimately useless.


Yes, I agree the performance is an issue in most use cases, but wouldn't you agree there is value in there being parity between WebGL and OpenGL ES? It's valuable for those of us who work with Emscripten to cross-compile apps for the web.

And there are things like line drawing algorithms that have uses for geometry shaders: https://github.com/paulhoux/Cinder-Samples/tree/master/Geome...

It's not like geometry shaders are specific to OpenGL ES. Desktop OpenGL has them, as does DirectX and Vulkan.


A good talk at Google I/O that shows how shared memory can reduce shader work in a WebGPU context: https://youtu.be/K2JzIUIHIhc?t=1329


Cool! Is there a tutorial for cel shading? (The Borderlands 2 or Paper Mario look.)


Those are two drastically different art styles. Neither of them is really engine shader stuff; it's mostly someone drawing good textures in Photoshop.

Paper Mario is all hand-authored baked vertex colors and very simple textures. Artists spent a lot of time in Maya hand-tweaking each vertex individually, e.g. see my viewer here https://noclip.website/#ttyd/jin_00

Borderlands has some basic engine technology with outlines -- running a basic Sobel on the depth buffer to find depth discontinuities and drawing lines there, but most interior lines are on the texture itself. Lighting is also modified with a ramp -- the "raw" incoming radiance from lighting is thrown into an artist-authored lookup table and tweaked before being thrown to the shader. Normal maps are seldom used.
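A minimal CPU-side sketch of that depth-based edge pass (hypothetical helper, not Borderlands' actual code, which runs per fragment on the GPU):

```cpp
#include <cmath>

// Sobel edge magnitude at pixel (x, y) of a depth buffer stored
// row-major in `depth` with width `w`. Assumes (x, y) is not on the
// border. A large gradient means a depth discontinuity -- an edge.
float sobelDepth(const float* depth, int w, int x, int y) {
    auto d = [&](int dx, int dy) { return depth[(y + dy) * w + (x + dx)]; };
    float gx = -d(-1,-1) - 2*d(-1,0) - d(-1,1)
             +  d( 1,-1) + 2*d( 1,0) + d( 1,1);
    float gy = -d(-1,-1) - 2*d(0,-1) - d(1,-1)
             +  d(-1, 1) + 2*d(0, 1) + d(1, 1);
    return std::sqrt(gx * gx + gy * gy);
}
```

Wherever the returned magnitude exceeds some threshold, the post pass outputs the outline color instead of the scene color.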


>find depth discontinuities and drawing lines there, but most interior lines are on the texture itself

I honestly thought a lot more was going on here but now that I look back at it you seem to be correct. Impressive how simple it is.


Borderlands isn't cel shaded, it's just nice textures. To get that look you need:

1) good texturing (including some linework and hatching)

2) diffuse only (more or less)

3) draw edges in black

Which is pretty vanilla techniques, shader wise. The hard part is doing the textures well.

Somebody on reddit did a character model walkthrough back in 2014: https://www.reddit.com/r/gamedev/comments/2bftyc/i_reverse_e...


A very simple implementation of cel shading is to run the result of traditional lighting through a lookup table in the form of a 1D texture. Having runs and steps of solid color posterizes the smooth lighting input so that it resembles painted cel artwork.
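A tiny sketch of that lookup, with an array standing in for the 1D texture (illustrative names, not the tutorial's code):

```cpp
#include <vector>

// A "1D texture" of shade levels: smooth lighting in [0, 1] is snapped
// to one ramp entry via a nearest-neighbour lookup, producing flat
// bands of tone instead of a continuous gradient.
float celRamp(float light, const std::vector<float>& ramp) {
    int n = static_cast<int>(ramp.size());
    int i = static_cast<int>(light * n);  // which band we fall into
    if (i >= n) i = n - 1;                // clamp light == 1.0
    return ramp[i];
}
```

With a ramp like {0.2, 0.5, 1.0} you get a dark band, a mid band, and a lit band - the classic three-tone cel look.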

A very simple implementation of outlining is to draw the object a second time solid black, with the vertex positions extruded out in the direction of the vertex normals, and with backface culling flipped so that you see only internal faces. Those black, internal faces peeking out around the edge of the regular mesh form the outlines.

That's a good starting point. Many iterations and improvements from there gets you to this: https://www.youtube.com/watch?v=yhGjCzxJV3E


There’s a section on posterization, which may get you close.


Cool, looks like it briefly covers a lot of topics.



