Disclaimer: I've played with a toy deferred renderer a bit; I'm not a graphics guy.
This is not about "traditional" deferred rendering, but about textureless deferred rendering. No textures, surface normals, material information, or other data typical of deferred renderers is baked into the per-pixel MRT buffers. Just barycentric coordinates and a "pointer" to the geometry and material data.
In this technique, texturing (unlike in traditional deferred rendering) is only done in the final texturing and lighting pass (or passes). Thus no texturing or lighting ever needs to be done for occluded objects.
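A minimal sketch of what that final pass does, written as plain C for clarity (the struct layouts and names here are my own assumptions, not from the article): look up the triangle behind a pixel, interpolate its vertex attributes with the stored barycentrics, and only then texture and light.

    #include <stdint.h>

    /* Hypothetical per-pixel sample: two barycentrics plus an id that
     * locates the triangle (and through it the material). */
    typedef struct { float b0, b1; uint32_t tri_id; } VisSample;
    typedef struct { float u, v; } Vertex;

    /* Interpolate an attribute with barycentric weights.
     * The third weight is implied: b2 = 1 - b0 - b1. */
    static float bary_lerp(float a, float b, float c, float b0, float b1) {
        return b0 * a + b1 * b + (1.0f - b0 - b1) * c;
    }

    /* Resolve one pixel. Only samples that survived depth testing reach
     * this point, so occluded geometry is never textured or lit. */
    void shade_pixel(const VisSample *s, const Vertex *verts,
                     const uint32_t *indices, float out_uv[2]) {
        const Vertex *v0 = &verts[indices[s->tri_id * 3 + 0]];
        const Vertex *v1 = &verts[indices[s->tri_id * 3 + 1]];
        const Vertex *v2 = &verts[indices[s->tri_id * 3 + 2]];
        out_uv[0] = bary_lerp(v0->u, v1->u, v2->u, s->b0, s->b1);
        out_uv[1] = bary_lerp(v0->v, v1->v, v2->v, s->b0, s->b1);
        /* ...sample textures and evaluate lighting with the
           interpolated attributes here... */
    }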
The textureless deferred rendering technique can get away with rendering significantly less and with a tiny 64-bit (8-byte) per-pixel buffer: just 2x 16 bits for barycentric coordinates, and triangle + instance IDs stored in 32 bits -- as a memory pointer or with some kind of packing, like 16 bits for each.
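A rough sketch of how that 8-byte entry could be packed; the 16/16 split between instance and triangle id is one possible choice (my assumption), not a requirement:

    #include <stdint.h>

    /* Pack two barycentrics (quantized to 16 bits each) plus a 16-bit
     * instance id and a 16-bit triangle id into one 64-bit word. */
    static uint64_t pack_vis(float b0, float b1,
                             uint16_t instance_id, uint16_t tri_id) {
        uint64_t q0 = (uint64_t)(b0 * 65535.0f + 0.5f);
        uint64_t q1 = (uint64_t)(b1 * 65535.0f + 0.5f);
        return (q0 << 48) | (q1 << 32) |
               ((uint64_t)instance_id << 16) | tri_id;
    }

    static void unpack_vis(uint64_t p, float *b0, float *b1,
                           uint16_t *instance_id, uint16_t *tri_id) {
        *b0 = (float)((p >> 48) & 0xFFFF) / 65535.0f;
        *b1 = (float)((p >> 32) & 0xFFFF) / 65535.0f;
        *instance_id = (uint16_t)((p >> 16) & 0xFFFF);
        *tri_id      = (uint16_t)(p & 0xFFFF);
    }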
This saves a lot of buffer memory (2-4x), computation (each screen pixel is lit and textured just once) and memory bandwidth.
Traditional deferred rendering is used so that lighting and material calculations (the fragment shader) need to be done just once per rendered screen pixel. It requires a huge buffer, usually 16 or 32 bytes per screen-space pixel -- in other words, 128/256 bits.
Deferred renderers bake texturing in the pre-lighting passes, as well as information about the material, surface normal, etc., which the final lighting pass will use. That's why they have to store so much per-pixel data: just a 16-bit-per-component RGBA color alone is 8 bytes. That's also why they're limited to a relatively small number of materials (or at least parameters for them): the final lighting pass fragment shader must get all of its lighting-related data from the per-pixel buffer.
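For comparison, a "fat" G-buffer pixel might look something like this (an illustrative layout I made up, not from any particular engine); it's easy to see how it lands in the 16-32 byte range:

    #include <stdint.h>

    /* One illustrative G-buffer pixel for classic deferred shading.
     * Everything the lighting pass needs must be stored here. */
    typedef struct {
        uint16_t albedo[4];   /* RGBA16: baked texture color, 8 bytes */
        uint16_t normal[2];   /* encoded surface normal, 4 bytes      */
        uint16_t roughness;   /* material parameter, 2 bytes          */
        uint16_t metalness;   /* material parameter, 2 bytes          */
        float    depth;       /* for position reconstruction, 4 bytes */
    } GBufferPixel;           /* 20 bytes per pixel                   */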
For people with just some basic computer graphics background who don't know what this "deferred rendering" is: when the technique became popular, these slides [1] were popular and inspired many.
It even inspired a WWII plane game to use deferred rendering, which optimizes for having many lights on an object. Even though, for the most part, they had only one light source (the sun)...
Many lights aren't the only advantage, though. There are some deferred decaling techniques that I could see being very useful to a flight simulator for terrain drawing, for example.
I've been looking at the game engine 'Leadwerks' and am impressed with what I see. It apparently uses deferred rendering, which means bigger environments and more lights are possible, from what I read. Good hardware is needed though, as it eats VRAM. Beyond that, as a designer-dev, I suppose I don't need to know too many details about it.
In tiled rendering architectures, like you usually see on mobile GPUs, you can also use pixel local storage [1] so the G-buffer information never leaves on-chip memory.
The Vulkan API also introduces subpasses, which support this implicitly.
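A minimal sketch of the Vulkan side (structure only; attachment descriptions, pipelines, and render pass creation omitted): one render pass with two subpasses, where the lighting subpass reads the G-buffer back as input attachments, and a by-region dependency between them. The by-region flag is what lets a tiler keep the G-buffer in tile memory.

    #include <vulkan/vulkan.h>

    /* Subpass 0 writes two G-buffer attachments; subpass 1 reads them
     * back as input attachments and writes the backbuffer.
     * VK_DEPENDENCY_BY_REGION_BIT tells the driver the read is
     * pixel-local, so a tiler never has to spill the G-buffer to RAM. */
    void fill_subpasses(VkSubpassDescription subpasses[2],
                        VkSubpassDependency *dep) {
        /* static: these must outlive the function, since the subpass
           descriptions keep pointers to them. */
        static const VkAttachmentReference gbufWrite[2] = {
            { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL },
            { 1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL },
        };
        static const VkAttachmentReference gbufRead[2] = {
            { 0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL },
            { 1, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL },
        };
        static const VkAttachmentReference backbuffer =
            { 2, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };

        subpasses[0] = (VkSubpassDescription){
            .pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS,
            .colorAttachmentCount = 2,
            .pColorAttachments    = gbufWrite,
        };
        subpasses[1] = (VkSubpassDescription){
            .pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS,
            .inputAttachmentCount = 2,
            .pInputAttachments    = gbufRead,
            .colorAttachmentCount = 1,
            .pColorAttachments    = &backbuffer,
        };
        *dep = (VkSubpassDependency){
            .srcSubpass      = 0,
            .dstSubpass      = 1,
            .srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
            .dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
            .srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
            .dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT,
            .dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT,
        };
    }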