This is really cool. As far as boilerplate goes, WebGPU is not that bad (one only needs to look at Vulkan to see how fun it can get), but anything that lowers the barrier to entry is nice.
I am curious about the API overhead here. For example, if I run two separate passes using bare WebGPU, I could use a single command encoder. Would two function calls here result in two command encoders?
Another question: from the example code this doesn't look asynchronous. Does it wait for everything to finish before returning? I could also be misreading the code; I don't know TypeScript very well.
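For reference, this is the bare-WebGPU pattern I have in mind (just a sketch: device, the two pipelines, and the bind groups are assumed to exist elsewhere, and the names are made up):

    // Two passes recorded into a single encoder, one submit for both.
    // device, pipelineA/B and bindGroupA/B are assumed to exist;
    // the names are illustrative only.
    const encoder = device.createCommandEncoder();

    const passA = encoder.beginComputePass();
    passA.setPipeline(pipelineA);
    passA.setBindGroup(0, bindGroupA);
    passA.dispatchWorkgroups(64);
    passA.end();

    const passB = encoder.beginComputePass();
    passB.setPipeline(pipelineB);
    passB.setBindGroup(0, bindGroupB);
    passB.dispatchWorkgroups(64);
    passB.end();

    device.queue.submit([encoder.finish()]);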
Shadeup will try to fit it all into one command encoder but may need to split if the coder:
1. Manually flushes
2. Reads/downloads a buffer on the CPU
Most calls are lazily evaluated, and a small dep graph is built as you reference/draw things.
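To make the second case concrete: in raw WebGPU, reading a buffer back on the CPU means whatever has been recorded so far must be submitted before the staging buffer can be mapped, so the encoder cannot keep growing past that point. Rough sketch (device, storageBuffer, and size are assumed):

    // Copy a GPU buffer into a mappable staging buffer, submit, then map.
    // The submit is the unavoidable "split" point.
    const staging = device.createBuffer({
        size,
        usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
    });

    const encoder = device.createCommandEncoder();
    encoder.copyBufferToBuffer(storageBuffer, 0, staging, 0, size);
    device.queue.submit([encoder.finish()]);

    await staging.mapAsync(GPUMapMode.READ);   // wait for the GPU to finish
    const data = new Float32Array(staging.getMappedRange().slice(0));
    staging.unmap();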
That being said, Shadeup does make some perf sacrifices in the name of simplicity. I'd say it's well suited to prototyping and learning. Of course, once you have a working prototype of something, porting it to native WGSL should be easy.
Have you thought about having an "eject" command? For example, Create React App had an eject command to go from the self-contained version to a fully customisable Webpack app.
Users can already kind of do this by viewing the compiled output, but the JS engine itself is still tucked away. At some point I would probably add a tree-shaken full output for each shadeup.
WebGPU's main pain point is forcing everyone to rewrite their shaders. No wonder most Web-native game engines are going the shading-language-abstraction route, when only Chrome does WebGPU on desktop and it won't even be an option on mobile for years to come.
Thus cross-API WebGL/WebGPU shading middleware is needed.
Safari Technology Preview finally turned it on again. Of course, when Apple will ship it on Mac and/or iOS is unknown, but at least they showed some progress after hiding for 6 or more months.
Don't be fooled by "web" in the name. Just like WebAssembly is/will be a better JVM, WebGPU is already poised to be a better OpenGL/Vulkan. It is a full graphics API unconstrained by browser implementations, and available without a browser runtime.
I am curious: in what ways is WebGPU better than Vulkan or Metal? From my point of view, WebGPU going back to an interpreted shading language, like OpenGL, makes it conceptually closer to JavaScript than to WebAssembly.
HLSL has in practice become the shader language for Vulkan, and Metal has always had a high-level shading language. The GPU market is simply too heterogeneous for a low-level ISA like SPIR-V to be applicable and performant across all architectures. A modern GPU driver effectively has to "decompile" SPIR-V binaries, then recompile them for the actual vendor-specific architecture of the installed GPU, performing both high-level and low-level optimizations along the way. Standardizing on a single high-level shading language, but one purposefully designed to be cross-platform, is actually the simpler architecture here.
I wrote a big visualization application with WebGPU and I'm happy with the choice. It's much cleaner than OpenGL, without as much complexity as Vulkan or Metal. It runs on Mac & Linux, and I got a demo version working in the browser with emscripten fairly easily. It doesn't have the trouble with OpenGL version compatibility that used to make supporting multiple platforms a hassle.
I'm not sure what's 'interpreted' about it. On Mac, the wgpu-native driver transcodes to the Metal shading language and it gets compiled.
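To illustrate what "compiled" means in practice (a minimal sketch, not specific to any backend): the WGSL source is handed to the implementation when the shader module is created, and compilation diagnostics come back before any draw call is recorded. device is assumed here, and the shader is a deliberately trivial placeholder.

    const module = device.createShaderModule({
        code: `
            @compute @workgroup_size(1)
            fn main() { }
        `,
    });

    // Compile errors/warnings surface here, at module creation time,
    // not while commands are executing.
    const info = await module.getCompilationInfo();
    for (const msg of info.messages) {
        console.log(msg.type, msg.message);
    }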
I gave up on Vulkan simply because it's too overengineered and cumbersome. WebGPU, on the other hand, is relatively easy to use, even though it is built on many of the same concepts.
WebGPU IS basically Metal, but cross platform. When the W3C decided to standardize a new graphics API, Apple submitted a proposal that was basically Metal with a name change. That proposal became the basis of WebGPU, though like any committee standard it has undergone multiple revisions since then.
Metal itself was basically Apple looking at where Vulkan-style explicit APIs were heading and saying "Lol, no way in hell. That's a stinking mess of complexity." Then they went off and made a simpler, less boilerplate, easier-to-code but just as performant and powerful API for Apple platforms. Now WebGPU makes all those improvements available everywhere.
And ignore the Web prefix: WebGPU is a lower-level API, like DX12, Vulkan or Metal. It just came out of the W3C, hence the name. Unfortunately the name makes people think that it is a JavaScript browser extension or something, which it is not. You can call it from a browser, yes, and the API is designed to enable sandboxed instances for that purpose, but that's a higher-level integration, just like how WebGL exposes OpenGL ES.
In fact maybe that's a better way to explain it. Until now, if you wanted 3D in a browser you had to use WebGL, which gave you an OpenGL ES-compatible API. Except OpenGL is ancient and GLES doesn't support vendor extensions, so this was a really shitty situation. The W3C decided "Let's make a new low-level graphics API to replace OpenGL ES, and a new JavaScript browser extension to expose that API [replacing WebGL]," and then confusingly used the name "WebGPU," at various times, for both of those things as well as for the whole stack integrated into a browser.
However, most of the excitement and work surrounding WebGPU right now is around Rust and C++ frameworks that use WebGPU as a platform- and device-independent middleware layer. The standard implementations performantly translate WebGPU calls into Metal, Vulkan, or DX12 calls based on what the end user's system supports, making it an excellent middleware target that is really easy to develop against. So really, WebGPU is a replacement for OpenGL in a way that Vulkan was not.
OP is right. WebGPU is targeted towards the lowest common denominator, which is fairly old mobile phones. It therefore doesn't support modern features and is basically an outdated graphics API by design.
Mesh shaders, raytracing, DirectStorage, GPU work graphs, C++ features in shading languages: these are some of the post-2015 features that aren't coming to WebGPU any time soon.
The features you mention are mostly applications of compute shaders, which can absolutely be written in WebGPU: it supports buffer-to-buffer compute shaders whenever the underlying GPU does.
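For anyone following along, a buffer-to-buffer compute pass looks roughly like this in the browser API (a minimal sketch: the doubling kernel, buffer names, and element count are made up for illustration):

    // WGSL kernel: read from one storage buffer, write to another.
    const shader = device.createShaderModule({
        code: `
            @group(0) @binding(0) var<storage, read>       input  : array<f32>;
            @group(0) @binding(1) var<storage, read_write> output : array<f32>;

            @compute @workgroup_size(64)
            fn main(@builtin(global_invocation_id) id : vec3<u32>) {
                if (id.x < arrayLength(&input)) {
                    output[id.x] = input[id.x] * 2.0;
                }
            }
        `,
    });

    const pipeline = device.createComputePipeline({
        layout: "auto",
        compute: { module: shader, entryPoint: "main" },
    });

    // inputBuffer/outputBuffer are assumed STORAGE buffers created elsewhere.
    const bindGroup = device.createBindGroup({
        layout: pipeline.getBindGroupLayout(0),
        entries: [
            { binding: 0, resource: { buffer: inputBuffer } },
            { binding: 1, resource: { buffer: outputBuffer } },
        ],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(Math.ceil(elementCount / 64));
    pass.end();
    device.queue.submit([encoder.finish()]);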
I've personally implemented mesh shaders in my own project, and there are plenty of examples of WebGPU real-time raytracers out there. DirectStorage I had to google, and it looks like a Windows DMA framework for loading assets directly into GPU RAM? That's not even in scope for WebGPU and would be handled by platform libraries. Linux has supported it for ages.
Seriously, I get the impression from your posts that you really don't have any experience with WebGPU at all, and are basing your understanding off of misinformation circulating in other communities. Especially with your continued nonsensical statements about not supporting "post-2015 features." 2015-era chipsets are a minimum supported feature set, not a maximum.
Please just take the L and read up to inform yourself about WebGPU before criticizing it more.
I bet those implementations of yours weren't done in WebGPU actually running in the browser; otherwise I would greatly appreciate being corrected with a URL.
DirectStorage started as a Windows feature, is actually quite common in game consoles, and there is ongoing work to expose similar functionality in Vulkan.
Yes, I do have WebGPU experience and have already contributed to BabylonJS a couple of times.
Maybe I do actually know one or two things about graphics APIs.
WebGPU is not a browser API. It is a lower level API with official, supported implementations in C++ and Rust with zero browser dependencies. See for example:
Apart from the examples given by the other user: 64-bit integers and their atomics, which are the bread and butter of efficient software rasterizers such as Nanite, and which let point clouds be rendered multiple times faster than with the "point-list" primitive; subgroup operations; bindless resources; sparse buffers; printf; timestamps inside shaders; async memcpy and copies without the need for an intermediate copy buffer; and so much more. One of the worst parts is that they're adopting the limitations of every target language, but not the workarounds that exist in them. WebGPU actively prohibits buffer aliasing and mixing atomic and non-atomic access to memory because of Apple's Metal Shading Language, yet Metal itself supports these via workarounds! I mean... seriously? That actually makes WebGPU even worse than the lowest common denominator.
One of the turning points for the worse was the introduction of WGSL. Before WGSL, you could do lots of stuff with SPIR-V shaders because they weren't artificially limited. But with WGSL, they went all in on turning WebGPU into a toy language that only supports whatever last decade's mobile phones support. I was really hopeful for WebGPU because UX-wise it is so much better than anything else, far better than Vulkan or OpenGL. But feature-wise, WebGPU is so limited that I had to go back to desktop OpenGL.
In one way WebGPU really has become a true successor to WebGL though - it is a graphics API that is outdated on arrival.
The elegant lightgl.js is an abstraction layer on top of WebGL that makes it much nicer to work with. Unlike THREE.js, it doesn't make any assumptions about you wanting any concept of a camera or lighting, or that you're working in 3D at all.
Key highlight for anyone unfamiliar with web graphics technologies who is reading this comment and interested in trying one out: as mentioned above, lightgl only abstracts WebGL, which is different from the more flexible WebGPU that Shadeup abstracts. One example of the difference is that the fluid sim in lightgl/WebGL has to be written as pixel-based shaders (meant to do operations on pixels in textures), while Shadeup/WebGPU lets you write it as a pure general-purpose compute shader (more like a generic bit of code that happens to run on a GPU). This is both much more flexible in terms of what you can do and much more efficient/scalable in execution.
The particles effect on this page is a great example of "old web" feel in a page with "new web" looks in that it's just so fun and impractical solely because it can be.
The "browse" button towards the bottom has a few editable examples as well, including one for the effect seen on the main page https://shadeup.dev/zv5twftezv2y
This is interesting. I'm working on something similar, but I'm starting from an existing GPU language and working my way backwards to add more support and allow for efficient CPU-side execution. It's geared more towards being a game-engine language, supporting both the game script (allowing easy parallelization) and the rendering side of things (as it's already a GPU language). Mine is currently more geared towards Vulkan, though.
Looking through the examples on your website does give me some ideas though, I might try and adopt some of them myself.
Your project sounds interesting as well. Do you have a social channel or GitHub for this project? Or is there something I can follow to be updated when you release it?
I'd love to learn more about what you're working on, as I've been wanting to integrate Shadeup with existing game engines like Unreal/Unity but realize it would be a massive undertaking.
I will warn that it's mostly been rewritten at this point, but most changes haven't made it to the repo because they're held up by my current employer's legal team. Most of the code is also poorly written, having largely been produced in a half-asleep/burned-out state. With all the "don't judge me too hard"s out of the way: https://gitlab.com/Cieric/gellang
I don't currently have plans to integrate with an existing game engine, just my own. I have however considered in the past integrating with Godot, so that would be my first target if I do ever attempt it.
I was also assuming no one would use my project professionally, so the idea was to be a superset of GLSL: any GLSL code could be pasted into a script and compiled without any modifications. That would aid in debugging GPU code in an actual debugger without needing to rewrite it.
Once all the paperwork is finished, additional updates should start rolling out again, since it would just mean reviewing the new changes rather than everything again. I hope that keeps expectations in line; I'm not trying to under- or oversell my project. I just wanted to fix my own gripes with the projects of the time.
My startup has been working on WebGPU support for Unreal Engine 5 for the past several years, and we've already achieved multiple demos that will be going live relatively soon.
I learned WebGL three years ago, but before I dove into the underlying concepts I used GPU.js [1] to quickly prototype my project. Eventually the abstraction prevented necessary performance optimizations, so I switched to vanilla GLSL, and those vanilla GLSL "shaders" were initially ejected from GPU.js.
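For anyone curious what that prototyping flow looks like, a GPU.js kernel is roughly this (the standard element-wise example shape, not my actual project code):

    import { GPU } from "gpu.js";

    const gpu = new GPU();

    // GPU.js transpiles this plain JS function into a shader behind the scenes.
    const add = gpu
        .createKernel(function (a, b) {
            return a[this.thread.x] + b[this.thread.x];
        })
        .setOutput([1024]);

    const out = add(new Float32Array(1024), new Float32Array(1024));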
Writing JS code then looking at the generated WebGPU output is a great way to get familiar with WebGPU. Thanks for this.
WebGPU hype is pretty cringe. Even if you want to run in a browser (and most customers don't), most game engines can still bake WebAssembly/WebGPU packages.
I think the main feature that's exciting for me is the GPGPU potential.
Even just the ability to accelerate LLMs in the browser on any device, without an installation, is awesome.
For example: fleetwood.dev has a really cool project that does audio transcription in browser on the GPU: https://whisper-turbo.com/#
Hah, that was me ~12 years ago trying to get WebCL (OpenCL) through the same gatekeepers. Meanwhile, in Python, we are doing multi-node multi-GPU. Maybe OpenAI's and soon Apple's success with LLMs will change the economics for them.
This is why I don't like Khronos APIs, even though they are actually the ones I know relatively well: the way they work ends up being a much worse experience than writing backend-specific plugins ourselves, with much better tooling. The extension spaghetti also ultimately doesn't save us from multiple code paths anyway, given the differences between some of those extensions.
To pick your example, something like PyTorch ends up being a much better developer experience, similar to game engines, than relying on Khronos APIs.