It's definitely helpful seeing everything laid out tutorial-style. As simple and well-written as glfx.js is, there are still some quirks of WebGL and shaders that are not at all obvious to the uninitiated JS hacker.
That is the first time I've seen WebGL: what a mess.
Don't get me wrong, I'm certain there is a segment of this community and the web-dev population at large that is thrilled by its standardization, but yikes.
> That is the first time I've seen WebGL: what a mess.
That's like saying "That is the first time I've seen assembly code: what a mess."
Yes, it's a mess - it's the lowest level. You don't want to write that kind of code, normally speaking. Instead, use a nice high-level WebGL engine like CubicVR.js, three.js, etc.
It's not that messy when you remember that this is the one area left in contemporary programming where you're almost directly talking to hardware which knows only the most basic primitives but can do them blazing fast and where minor coding decisions can have huge performance impacts... remember assembler/C in the olden days for normal software? What a mess. Hey, I'm already glad the shader language is a simplified version of C rather than pure assembly -- in fact, earlier I heard that the very first generations of shaders were indeed assembly-like, glad we advanced since then.
There are many, many frameworks that "take away the pains of core WebGL coding" and give you "as easy as jQuery" higher-level JS APIs on top of WebGL. But you get the most control over your own hardware-accelerated program if you write it directly in WebGL. It's not too complicated, it's just that most of us aren't used to the real concepts of computer graphics.
The issue is that everybody does the minimum modifications needed to translate the C API into their language, rather than designing the API as you would in that language.
The issue is also that, since graphics are a performance-intensive and ergo low-level task, there isn't really a one-size-fits-all way of wrapping it up in a higher-level API.
I can't speak for WebGL, but for OpenGL there are various frameworks and such you can use but at the end of the day most people end up needing specific access to the low level guts and you just end up writing your own code to interface with it.
No, the OpenGL API is a mess. It's not because it's an API for a complicated task, but merely because of its 1990s legacy. Just compare the OpenGL API to the Direct3D API and you should see a clear difference.
The best (or worst) example of this is the OpenGL context, which is basically a global (actually: thread local) variable that tells which GL context and window should be drawn to. This context then stores a bunch of other global state, such as texture bindings.
The API was (probably) designed like this because in the early 1990s it was actually a major optimization to avoid pushing parameters to the stack when calling functions several times a frame. Many graphics APIs of that era did the same thing. Also, there usually was only one screen with one fullscreen "window", so why not make the "display" a global variable? Now 20 years later we are stuck with this shit because it became an industry standard.
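To make the global-state complaint concrete, here's a toy sketch in plain JavaScript (not real GL calls; the function names are made up for illustration) contrasting OpenGL's bind-then-operate style with an explicit object-style API:

```javascript
// Implicit style, like OpenGL: operations act on whatever is currently bound.
let currentTexture = null; // hidden "context" state
function bindTexture(tex) { currentTexture = tex; }
function texParameter(key, value) { currentTexture.params[key] = value; }

// Explicit style, like object-oriented APIs: the target is a parameter.
function setParameter(tex, key, value) { tex.params[key] = value; }

const a = { params: {} }, b = { params: {} };

bindTexture(a);
texParameter("minFilter", "LINEAR");
bindTexture(b);
texParameter("minFilter", "NEAREST"); // easy to lose track of what's bound
setParameter(a, "magFilter", "LINEAR"); // no ambiguity about the target

console.log(a.params); // { minFilter: 'LINEAR', magFilter: 'LINEAR' }
console.log(b.params); // { minFilter: 'NEAREST' }
```

The bind-style calls save passing one argument per call, which mattered in the early 1990s; today the hidden state mostly just makes code harder to reason about.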
OpenGL is a pain in the ass, not only for programmers but also for the implementors of the API. For example, take a look at OpenGL's "texture completeness" rules (textures must have consistent mipmap levels, etc). Direct3D has no equivalent, because the API is designed so that all textures are always complete.
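To give a sense of what implementors are on the hook for, here's a rough sketch (my own simplification, not the actual spec logic) of the mipmap-chain bookkeeping a GL driver has to validate at draw time, a check a D3D-style API sidesteps because its textures can't get into an incomplete state:

```javascript
// A texture is (roughly) mipmap-complete when every level from the base
// down to 1x1 is present and each level halves the previous dimensions
// (floored, clamped to 1).
function isMipmapComplete(levels) {
  // levels: array of {w, h}, base level first
  if (levels.length === 0) return false;
  let { w, h } = levels[0];
  for (let i = 0; i < levels.length; i++) {
    if (levels[i].w !== w || levels[i].h !== h) return false;
    if (w === 1 && h === 1) return i === levels.length - 1; // chain must end here
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
  }
  return false; // chain never reached 1x1
}

console.log(isMipmapComplete([{w:4,h:4},{w:2,h:2},{w:1,h:1}])); // true
console.log(isMipmapComplete([{w:4,h:4},{w:2,h:2}]));           // false, missing 1x1
console.log(isMipmapComplete([{w:4,h:4},{w:3,h:3},{w:1,h:1}])); // false, bad level size
```

The real rules also cover internal formats, filtering modes, cube map faces, and so on; this is just the dimensional part.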
Unfortunately we seem to be stuck with OpenGL for a while. There's not much interest for redesigning the API and as far as I can tell (I'm a Khronos member), no ongoing effort to do so. Direct3D is a much better API but unfortunately it supports a very limited number of platforms (none of which I actually like to use).
There are several examples where something like this has gone horribly wrong. High-level APIs for "common use cases" tend to work only for writing one-page examples for textbooks but are completely useless for anything more serious. One example of this is the CUDA API. There's a low-level C API, which gives you a good, if a little verbose, programming interface (because GPGPU is complicated). Then there's a high-level API where you write the GPU code mixed together with the CPU C++ code. It somewhat worked for the examples in the CUDA manual but was useless for anything else.
OpenGL did have a library called GLU that did some high-level things such as quadrics (for spheres, cylinders, etc) and tessellation of polygons. It was written entirely on the CPU on top of OpenGL. It worked decently for some stuff. Direct3D also has some helpers, e.g. to do some matrix math and load 3D models.
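For illustration, a gluSphere-style quadric helper is really just CPU-side vertex generation, something like this sketch (the parameter names are mine, not GLU's, and real GLU also emits normals and texture coordinates):

```javascript
// Generate points on a sphere's surface on the CPU, in the spirit of
// gluSphere: a (stacks+1) x (slices+1) grid of latitude/longitude samples
// that would then be uploaded to the GPU as triangle strips.
function sphereVertices(radius, slices, stacks) {
  const verts = [];
  for (let i = 0; i <= stacks; i++) {
    const phi = Math.PI * i / stacks;          // latitude: 0 (pole) to pi
    for (let j = 0; j <= slices; j++) {
      const theta = 2 * Math.PI * j / slices;  // longitude: 0 to 2*pi
      verts.push([
        radius * Math.sin(phi) * Math.cos(theta),
        radius * Math.cos(phi),
        radius * Math.sin(phi) * Math.sin(theta),
      ]);
    }
  }
  return verts;
}

const v = sphereVertices(1, 8, 4);
console.log(v.length); // (4 + 1) * (8 + 1) = 45 points, all at distance 1
```

Nothing here touches the GPU at all, which is the point: these "high-level" conveniences were plain CPU code layered on the low-level API.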
A library like OpenGL is probably best left to be a low level API for a low level task. A high level API in this case would be a 3d engine on top of OpenGL and there are plenty of those out there.
People are already making games in OpenGL and C. Doing that in JavaScript was totally impossible before. Redesigning the whole API would delay the process, and the redesign would have to use OpenGL as its own backend anyway, so you'd inherit a lot of the problems.
If you want something that's easier to use, wait until game engines for JS appear. This is supposed to be low level.
With NaCl you might end up being able to use OpenCL to construct your own 3D rendering stack and use it in the browser.
As far as I know, no one has yet shipped a product with a graphics software stack written in a GPGPU API like OpenCL. It's an interesting project and you can do really cool stuff. However, practical it is not.
You also shouldn't forget that the GPU has dedicated hardware for graphics that cannot be accessed through a GPGPU API. The framebuffer can't be manipulated with a GPGPU API. Window compositing is largely done by dedicated hardware. Some GPUs still have dedicated hardware for vertex shading. If you try to write a graphics stack with a GPGPU API, all this hardware is left unused.