Fascinating article, especially all the political reasons as to why Microsoft introduced Direct3D in the first place.
"Direct3D was not an alternative to OpenGL [...] it was a designed to create a competitive market for 3D hardware"—and how it ultimately failed, with Direct3D becoming like OpenGL, more confusing, and paving way for the alluring simplicity of developing for a single hardware specimen—the Xbox.
Really interesting post which I enjoyed immensely, and a good point about being able to create custom renderers with DirectCompute/OpenCL and skip all the legacy baggage while harnessing the ultimate powah of modern GPUs. (Reckon we'll see a bit of this sort of stuff in some PS4 games later on in the console generation.)
Not a new idea, but one I'm glad to be reminded of.
I'm totally going to put some time into playing with this. Anyone here already doing something cool in the custom GPU renderer space?
I know multiple devs on PS3 were using the SPUs to rasterize low resolution depth buffers, so that they could then do occlusion culling without hitting the GPU at all. I expect we'll see similar approaches using GPU compute in the future, assuming people can get the latency down low enough so that it still beats native GPU occlusion queries.
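For anyone who hasn't seen it, here's a minimal sketch of that idea in plain C++ - not real SPU code, and all the names are made up for the example. You conservatively rasterize (here just splat) a few big occluders into a tiny depth buffer, then test each object's screen-space bounds against it before issuing the draw call:

    // Minimal software occlusion-culling sketch (illustrative only, not PS3/SPU code).
    // Idea: splat big occluders into a tiny depth buffer, then test each object's
    // screen-space bounding rectangle (at its nearest depth) against that buffer.
    #include <algorithm>
    #include <cfloat>
    #include <vector>

    struct DepthBuffer {
        int w, h;
        std::vector<float> z;                       // larger z = farther away
        DepthBuffer(int w_, int h_) : w(w_), h(h_), z(w_ * h_, FLT_MAX) {}

        // Conservatively splat an occluder's screen rect at its *farthest* depth.
        // (A real implementation would rasterize the occluder's triangles.)
        void addOccluderRect(int x0, int y0, int x1, int y1, float farZ) {
            for (int y = std::max(0, y0); y <= std::min(h - 1, y1); ++y)
                for (int x = std::max(0, x0); x <= std::min(w - 1, x1); ++x)
                    z[y * w + x] = std::min(z[y * w + x], farZ);
        }

        // Visible if any covered pixel's occluder depth is farther than the
        // object's nearest depth; otherwise the object is fully hidden.
        bool isVisible(int x0, int y0, int x1, int y1, float nearZ) const {
            for (int y = std::max(0, y0); y <= std::min(h - 1, y1); ++y)
                for (int x = std::max(0, x0); x <= std::min(w - 1, x1); ++x)
                    if (nearZ < z[y * w + x])
                        return true;                // might be in front -> draw it
            return false;                           // occluded -> skip the draw call
        }
    };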
Several titles probably did it (I only know of one for sure), but that doesn't mean it's a good technique.
You see, on the PS3 GPU there are no pixel shader constant registers. This means you need to patch the shader code itself if you want to change anything in your pixel shader. A naive approach, patching on the CPU when submitting every primitive, is horrendously slow because the CPU sucks at moving large amounts of data. A slightly faster approach, using the GPU DMA, is still unbelievably slow, because even though the GPU moves large amounts of data very quickly, it takes a lot of time to switch from pushing triangles to copying memory ranges and back. Luckily, the PS3 also has SPUs, which are very good at moving large amounts of data and pay no penalty for doing so. Patching shaders on SPUs is so fast you forget about it, and it's very easy to write (rough sketch below).
So the mind boggles when you see people using GPU patching and then, to somehow save performance, doing some complicated occlusion culling scheme via SPU.
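To make that concrete, here's the rough shape of the technique in plain C++ - not real RSX/SPU code, and every name below is invented for the example. Since there are no constant registers, the constants live as immediates inside the shader microcode, so per draw you copy the microcode and overwrite the bytes at known offsets with that draw's values (on the PS3, the SPU was the cheap place to do that copy-and-poke):

    // Sketch of "shader constant patching" - purely illustrative, not real RSX/SPU code.
    // With no fragment-shader constant registers, constants are immediates baked into
    // the shader microcode; to change one per draw you copy the microcode and overwrite
    // the bytes at known offsets, then point the GPU at the patched copy.
    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct ConstantPatch {
        size_t byteOffset;   // where this constant's immediate lives in the microcode
        float  value[4];     // the per-draw value to write there
    };

    // Copy the original microcode into per-draw memory and splat the new constants in.
    // (On PS3 you'd DMA the program into SPU local store, patch, and DMA it back out.)
    std::vector<uint8_t> patchShader(const std::vector<uint8_t>& microcode,
                                     const std::vector<ConstantPatch>& patches) {
        std::vector<uint8_t> patched = microcode;        // bulk copy - the "moving data" part
        for (const ConstantPatch& p : patches)
            std::memcpy(patched.data() + p.byteOffset, p.value, sizeof(p.value));
        return patched;                                  // the GPU consumes this copy
    }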
No version of DX mandates any specific hardware architecture. Otherwise you would not be able to run DX9 games on modern hardware that also does not have constant registers.
As someone who has felt mostly sour grapes toward Microsoft for introducing D3D in the first place instead of adopting OpenGL, it's nice to read a convincing rationalization for why that wasn't done initially, coming from someone who was involved in the process.
The 'caps bits' problem is really the core of it. OpenGL in that era was a nightmare for anything resembling game rendering. The OpenGL we have now is pretty reasonable, but back then, Direct3D was a breath of fresh air and DirectDraw was a much saner way to push pixels around efficiently as well. It was hard for me to ever understand why people preferred OpenGL at the time (other than the obvious benefit of theoretical portability). Vendor-specific shader bytecode, UGH.
"It was hard for me to ever understand why people preferred OpenGL at the time"
OpenGL was elegant and very simple to use, quickly becoming close to invisible. DirectX, in comparison, was layers upon layers of COM book-keeping code.
Of course OpenGL has become like DirectX in more recent iterations, as in the end immediate satisfaction is less important than flexibility.
For me the definition of 'layers' was having to juggle dozens of interacting state flags and mutually exclusive vendor-specific extensions just to draw a dual-textured triangle or render to an offscreen target (roughly the dance sketched below).
COM really wasn't that much of a hassle in comparison. A couple of smart pointer templates and you're off to the races. I can see how a C developer would really resent it, though - nothing but a pain compared to GL's plain-C, 'everything is a void*' API.
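For context on the GL side of that comparison, this is roughly what "just draw a dual-textured triangle" looked like in the late-90s fixed-function era. A sketch, not from the article; it assumes the GL_ARB_multitexture entry points are available (on Windows you'd load them via wglGetProcAddress) and that the two texture objects already exist:

    // Dual-textured triangle via GL_ARB_multitexture (fixed-function era sketch).
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    void drawDualTexturedTriangle(GLuint base, GLuint lightmap) {
        // Texture unit 0: base texture, modulated by vertex color.
        glActiveTextureARB(GL_TEXTURE0_ARB);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, base);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        // Texture unit 1: lightmap, modulated on top of unit 0's result.
        glActiveTextureARB(GL_TEXTURE1_ARB);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightmap);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        // Every vertex now needs a coordinate per texture unit.
        glBegin(GL_TRIANGLES);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.5f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.5f, 1.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();

        // ...and remember to put the state machine back, or the next draw inherits it all.
        glActiveTextureARB(GL_TEXTURE1_ARB);
        glDisable(GL_TEXTURE_2D);
        glActiveTextureARB(GL_TEXTURE0_ARB);
    }

And that's before the vendor-specific combiner extensions the comment alludes to, where the NVIDIA and ATI paths didn't overlap at all.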
It's not a gigantic deal, but it can bubble up and fuck things up in your content pipeline, in shaders, in physics, in other places. It's just an annoying, arbitrary thing you have to remember, and occasionally you run up against it (especially if you're writing engine code instead of just using something off the shelf). It can also screw up math and memory layouts when talking to other libraries - see the little conversion chore sketched below.
At least with endian issues, for example, there was once maybe a compelling reason to do it from a hardware standpoint.
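A hypothetical example of the kind of chore that mismatch creates in a content pipeline - the struct names are invented, but the steps (negate one axis on positions and normals, then flip triangle winding) are the standard conversion:

    // Converting right-handed source data to a left-handed engine convention (or vice
    // versa): negate one axis (here Z) on positions and normals, and flip triangle
    // winding so faces don't end up culled inside out. Illustrative names only.
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Mesh {
        std::vector<Vec3>     positions;
        std::vector<Vec3>     normals;
        std::vector<uint32_t> indices;   // triangle list
    };

    void flipHandedness(Mesh& m) {
        for (Vec3& p : m.positions) p.z = -p.z;
        for (Vec3& n : m.normals)   n.z = -n.z;
        // Mirroring one axis reverses winding, so swap two indices per triangle.
        for (size_t i = 0; i + 2 < m.indices.size(); i += 3)
            std::swap(m.indices[i + 1], m.indices[i + 2]);
    }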
If the handedness decision really was "arbitrary" and OpenGL had set a precedent, why select the exact opposite handedness for a brand new system? Given Microsoft's vicious competitiveness in the 1990s, I can't help but think that they wanted to make code portability much more difficult for software developers. Is it really a surprise that "all other graphics authoring tools adopted the right handed coordinate system standard to OpenGL"?
The article says he chose left-handed out of personal preference. Not everything by Microsoft is a sinister conspiracy, sometimes it's a garden variety screwup. They were probably merely insular enough to ignore the rest of the industry, rather than arrogant enough to deliberately subvert it.
There is some logic to left-handedness; in some ways it's a bit more intuitive for computer graphics. Left-handed means the Z coordinate increases with depth into the screen: the viewer is somewhere near Z = 0 looking towards positive numbers, and projection space has 0.0 at the near plane of the view frustum and 1.0 at the far end. Right-handed means that either your projection-space coordinates go negative or your projection matrix includes a negation of the Z coordinate (both conventions are spelled out in the sketch at the end of this comment).
Left-handed does however have the enormous disadvantage of working against almost all (non-computer screen) representations of 3D space. Draw your X and Y axes on a sheet of paper in their customary orientation. It's much more intuitive to interpret positive Z as altitude above the paper rather than as depth into your desk. That's right-handed, and that's why OpenGL and everyone else chose that.
Answering a different parent, it's more than just one line of code in a library to change coordinate systems. The depth buffer check needs to compare in the opposite direction, for one.
Incidentally, Microsoft has learned from this mistake: XNA uses right-handed coordinates.
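To spell out the projection difference mentioned above, here's a small sketch (my own example, not from the article), assuming row-major 4x4 matrices multiplying column vectors (clip = P * view), with f = cot(fovY / 2):

    // Left-handed (D3D-style) vs right-handed (classic GL-style) perspective projection.
    #include <cmath>

    struct Mat4 { float m[4][4] = {}; };

    // Left-handed: camera looks down +Z, depth lands in [0, 1], no negation anywhere.
    Mat4 perspectiveLH(float fovY, float aspect, float zn, float zf) {
        const float f = 1.0f / std::tan(fovY * 0.5f);
        Mat4 p;
        p.m[0][0] = f / aspect;
        p.m[1][1] = f;
        p.m[2][2] = zf / (zf - zn);          // z maps to [0, 1]...
        p.m[2][3] = -zn * zf / (zf - zn);
        p.m[3][2] = 1.0f;                    // ...and w = +z
        return p;
    }

    // Right-handed: camera looks down -Z, depth lands in [-1, 1].
    Mat4 perspectiveRH(float fovY, float aspect, float zn, float zf) {
        const float f = 1.0f / std::tan(fovY * 0.5f);
        Mat4 p;
        p.m[0][0] = f / aspect;
        p.m[1][1] = f;
        p.m[2][2] = (zf + zn) / (zn - zf);   // z maps to [-1, 1]...
        p.m[2][3] = 2.0f * zf * zn / (zn - zf);
        p.m[3][2] = -1.0f;                   // ...and the matrix negates z to get w
        return p;
    }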
According to Microsoft, Unity is the way forward (BUILD 2013).
XNA and its former incarnation, Managed DirectX, always suffered from the internal political differences between .NET and native tools development groups.
So what I'm missing here is the Fahrenheit project, which I got involved in.
MS and SGI were cooperating on "Fahrenheit", a high-level scene graph SDK/API. It never got released, and eventually both parties bailed.
Fahrenheit was a horrible Microsoft API, more of a scene graph than a low-level API. There were photos (probably still somewhere on reality.sgiweb.org) of it all being burned. Burn, API, burn!
Honestly, between WildTangent and the sheer hubris and sociopathy on display in most of St. John's interviews (and the fact that he just posts hundreds of private emails on the internet), I don't have a very high opinion of him anymore. It's a shame, because these stories are super interesting - the posts about Talisman describe stuff I didn't even know happened. Talking about mocking colleagues during their presentations and actively doing really showy, rude stuff to execs...
"Direct3D was not an alternative to OpenGL [...] it was a designed to create a competitive market for 3D hardware"—and how it ultimately failed, with Direct3D becoming like OpenGL, more confusing, and paving way for the alluring simplicity of developing for a single hardware specimen—the Xbox.