It would be helpful to see a https://vulkan.gpuinfo.org dump for the current version of NVK. Reaching conformance with Vulkan 1.3 core is an important milestone, but Vulkan has an enormous number of optional features which a conformant implementation technically doesn't have to support. In practice, Vulkan applications usually don't bother to target core as a baseline, especially on desktop, where there's a massive gulf between what core requires and what the hardware can actually do.
Those extensions are part of it, but to get the full picture you also need to see the features and limits. E.g. an extension may gate its functionality behind feature flags, so a conformant implementation of the extension may just be a stub that says "feature not supported", or may only support a subset of the extension's functionality.
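To illustrate, here's a rough sketch of my own (it assumes a VkPhysicalDevice has already been picked during instance setup) of how an app checks whether a feature behind an extension is actually usable rather than just advertised:

```c
#include <vulkan/vulkan.h>
#include <stdio.h>

/* Probe whether the dynamic-rendering feature (core in 1.3, also exposed
 * by VK_KHR_dynamic_rendering) is actually usable, not just listed.
 * Assumes `physical_device` was picked during instance setup. */
static void check_dynamic_rendering(VkPhysicalDevice physical_device)
{
    VkPhysicalDeviceDynamicRenderingFeatures dyn = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DYNAMIC_RENDERING_FEATURES,
    };
    VkPhysicalDeviceFeatures2 features2 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
        .pNext = &dyn,
    };
    vkGetPhysicalDeviceFeatures2(physical_device, &features2);

    printf("dynamicRendering supported: %s\n",
           dyn.dynamicRendering ? "yes" : "no");
}
```

The limits story is similar: chain the relevant properties struct into vkGetPhysicalDeviceProperties2 and read the values at runtime.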
That's annoying. Sounds like Vulkan stumbled into exactly the same problem OpenGL had: there was the core, and then a maze of vendor features and extensions with wildly varying support.
The feature maze isn't too bad if you only care about running on modern-ish AMD and Nvidia hardware with their official drivers; most things you'd typically want are implemented in portable extensions supported by both. But god help you if you're writing Vulkan for mobile.
Yeah, it is OpenGL spaghetti all over again. It's hardly an improvement over having a pluggable backend for specific APIs, hence the irony of needing a pluggable backend for Vulkan flavours.
One improvement in Vulkan vs. OpenGL is that it effectively decouples software updates from hw generations.
As of today, you can write Vulkan 1.3 code (which is significantly better and less verbose than 1.0) and run it on 10+ year old hardware on all desktop platforms with GPUs from all vendors as long as you've got up to date drivers.
Such a thing was never possible in the OpenGL days, when the API version was tied to certain hardware features (which you probably didn't use), and you were stuck with a 10+ year old API version if you wanted to run on 10+ year old HW.
It ain't perfect but it's much better than back then.
That's still very much a thing, except that the new HW features are not as fundamental as before. For example, if you want to use ray tracing, you're in the same boat as before: it won't work on 10-year-old HW. Same for using certain non-32-bit variable sizes in shaders, or the newer shader features that help AI.
But you still get the latest API version and you can use all the features available on your hardware (at runtime). You don't need to stick with Vulkan 1.1.
Ray tracing, mesh shaders, etc need hw support. But my potato Intel laptop from 2013 has Vulkan 1.3.
What's the point of "using 1.3" if you can't use the coolest new extensions because they're optional? Yeah you get dynamic rendering and, uh, nothing else new? Saying "it has Vulkan 1.3" is not really saying a lot, since way too much stuff is optional.
2013 Intel iGPUs pretending to have Vulkan support are actually one of the worst targets (on par with bad mobile phone drivers): the hardware doesn't fully support Vulkan, the implementation is buggy and incomplete (and Linux only, though to be fair, buggy and incomplete applies to all Intel GPU drivers, at least for DX12 and Vulkan), and it doesn't pass the CTS (any sane Vulkan app will refuse to run on non-conformant drivers because they're a nightmare).
Fake news: Intel Graphics on Linux submits its drivers for Conformance Testing and gets the official stamp from Khronos for its products: https://www.khronos.org/conformance/adopters/conformant-prod... . The drivers are good and often conformant at launch. I'd argue that Linux is better than the Other OSes if you want Intel Graphics.
...did you even bother to search through that? There aren't any Haswell iGPUs in there, because they're non-conformant.
I'd agree that overall Intel's Linux drivers work better (though afaik they've had a lot of problems with Alchemist, and seem to have prioritized Windows performance over Linux for them).
The Vulkan Roadmap definitions are supposed to address this problem for the 95% use case.
If you want to be at the bleeding edge of technology, you're always going to have to put in some extra work. At least Vulkan makes that possible: vendor extensions often expose features earlier than they appear in D3D12 at all. (Though D3D12 vendor extensions also exist -- if anything, they're messier than the Vulkan counterpart.)
There are higher levels. In the Rust world, there's wgpu, which provides a unified Vulkan-like interface over Linux (Vulkan), Windows (Vulkan or DX12), Apple's Metal, Android, and the Web (WebGPU). The desktop systems all work pretty much the same, but the Android and Web models run into the limited threading model of those platforms and need some special cases. This level is well supported.
At the wgpu level, you're still managing buffers in the application. The next level up is Rend3, which hides the underlying layers. The application creates meshes and 2D textures and feeds them to Rend3, which uploads them to the GPU and takes care of GPU memory allocation and safety. The application also creates materials, which collect all the parameters and textures needed for an object, and objects, which take a material, a mesh, and a transform and put the object on screen. At this level you're in safe Rust and can ignore what platform you're on. It's like old-school OpenGL, but all retained mode. This level needs more people working on it.
If that's not enough abstraction, there are game engines.
Looking at the DirectX 12 Ultimate features still missing, it doesn't really look like they come to Vulkan first.
Then there is the whole mess of shading languages: the stagnation of GLSL, HLSL used as a kind-of alternative but not quite (due to HLSL semantics), and then the explosion of alternatives.
Which DX12 Ultimate features are missing from Vulkan?
They may be missing from Roadmap definitions, but that wasn't my point. Vulkan is often first for innovation being accessible / possible in some form, but DX12 is usually first for "platform standardization", loosely defined.
Your comment about languages even reinforces the point: there isn't really an ecosystem for language innovation on top of D3D12.
Trying to make Vulkan a single API that covers everything from low-end mobile to high-end desktop was probably a mistake; in practice it has split into wildly distinct dialects for low-end and high-end targets anyway. Maybe one day we'll get a Vulkan 2.0 which ratchets up the mandatory baseline hardware features to something reasonably modern and removes all the API cruft needed to accommodate less capable hardware.
In hindsight, adding the RenderPass API in Vulkan 1.0 to support mobile chips was probably a mistake.
Vulkan 1.3 without render passes ("dynamic rendering") is so much easier; it removes hundreds if not thousands of lines from the boilerplate required to draw a triangle. And it gets even better in a practical project, because with the old API you'd need to set up new render passes up front for every new thing you add.
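As a rough sketch of what that looks like (my own helper; it assumes a command buffer that's already recording and a swapchain image view in the color-attachment layout), there's no VkRenderPass or VkFramebuffer in sight:

```c
#include <vulkan/vulkan.h>

/* Begin drawing directly to an image view with Vulkan 1.3 dynamic
 * rendering; no VkRenderPass or VkFramebuffer objects are needed.
 * Assumes `cmd` is recording and `color_view`/`extent` describe the
 * swapchain image being rendered to. */
static void begin_drawing(VkCommandBuffer cmd, VkImageView color_view,
                          VkExtent2D extent)
{
    VkRenderingAttachmentInfo color = {
        .sType = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO,
        .imageView = color_view,
        .imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR,
        .storeOp = VK_ATTACHMENT_STORE_OP_STORE,
        .clearValue = { .color = { .float32 = { 0.0f, 0.0f, 0.0f, 1.0f } } },
    };
    VkRenderingInfo info = {
        .sType = VK_STRUCTURE_TYPE_RENDERING_INFO,
        .renderArea = { .offset = { 0, 0 }, .extent = extent },
        .layerCount = 1,
        .colorAttachmentCount = 1,
        .pColorAttachments = &color,
    };
    vkCmdBeginRendering(cmd, &info);
    /* ... bind pipeline, set dynamic state, draw ... */
    vkCmdEndRendering(cmd);
}
```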
Mobile vendors are catching up and providing ways to take advantage of tiler GPUs without the old verbose render pass API, but it's not there yet. And it'll take a while until the new mobile drivers propagate into the hands of consumers.
What's desperately needed is a way for compute shaders to read on-chip tile memory, like Metal tile shaders on macOS/iOS. That would finally bridge the gap, getting rid of the render pass mess altogether and providing a way to get the best performance on mobile chips without having to write a whole separate path for compositing/deferred lighting etc.
Because currently you get the best performance on desktop with compute shaders, but mobile requires you to use fragment shaders to benefit from tile memory.
That said, Vulkan 1.3 is a massive improvement over 1.0.
I don't see a benefit in making a version 2.0 that's incompatible with 1.x, because it's not like vendors could stop shipping 1.x due to all the content using it out there.
> I don't see a benefit in making a version 2.0 that's incompatible with 1.x, because it's not like vendors could stop shipping 1.x due to all the content using it out there.
If we're going to keep piling features onto Vulkan 1.0 forever, then I think we at least need much better onboarding resources; there's still a ton of introductory Vulkan material out there which dives into the older, more verbose approaches and probably scares most people away immediately. The API is intimidating enough without immediately dumping render passes on beginners.
I totally agree, and so do the people working on it as well as some of the volunteers who write tutorials.
There's an ongoing effort to create beginner-friendly introductory material, which was discussed at the recent Vulkanised conference. And there's an effort to make a better documentation site that's easier to browse than the specification: https://docs.vulkan.org/
On the volunteer front, there's a Vulkan 1.3 -based introductory tutorial (work in progress) over at https://vkguide.dev/
With regards to your comment, I think that "piling features on top of 1.0" is a mischaracterization, because a lot of the new features essentially replace old features, often making things a lot easier. For example, timeline semaphores (new sync primitive) essentially replace binary semaphores, fences and events (old sync primitives). Dynamic rendering replaces render passes. And so on.
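For example, here's a minimal sketch of a timeline semaphore in use (the helper is hypothetical; it assumes `device` is a valid VkDevice and that some queue submission signals the semaphore to `value`). One 64-bit counter takes the place of juggling fences and binary semaphores:

```c
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Create a timeline semaphore and block until its counter reaches
 * the given value. Assumes `device` is a valid VkDevice and that
 * some queue submission signals the semaphore to `value`. */
static VkResult wait_for_timeline(VkDevice device, uint64_t value)
{
    VkSemaphoreTypeCreateInfo type_info = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_TYPE_CREATE_INFO,
        .semaphoreType = VK_SEMAPHORE_TYPE_TIMELINE,
        .initialValue = 0,
    };
    VkSemaphoreCreateInfo create_info = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
        .pNext = &type_info,
    };
    VkSemaphore timeline;
    VkResult res = vkCreateSemaphore(device, &create_info, NULL, &timeline);
    if (res != VK_SUCCESS)
        return res;

    /* ... submit work that signals `timeline` to `value` ... */

    VkSemaphoreWaitInfo wait_info = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_WAIT_INFO,
        .semaphoreCount = 1,
        .pSemaphores = &timeline,
        .pValues = &value,
    };
    res = vkWaitSemaphores(device, &wait_info, UINT64_MAX);
    vkDestroySemaphore(device, timeline, NULL);
    return res;
}
```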
I think there should be a Vulkan tutorial that doesn't start with the boring stuff of initialization and window creation. It's stuff that you write once and forget about, and nothing particularly interesting happens in it.
Looking at my hobby project, excluding the boring stuff (which is reusable), a "hello compute" example is around 100 LOC and a "hello triangle" around 120 LOC. GLSL shader sources included.
Maybe someday I'll get around to writing a "learn Vulkan the hard way" blog post with examples.
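In the meantime, here's a rough sketch of what the non-boring core of a hello-compute example looks like (the helper name and parameters are mine; it assumes the reusable setup of instance, device, descriptor set layout, descriptor set, and a recording command buffer already exists, and it skips error handling and cleanup for brevity):

```c
#include <vulkan/vulkan.h>
#include <stddef.h>
#include <stdint.h>

/* Build a compute pipeline from a SPIR-V blob and record a dispatch.
 * Assumes the boring setup (instance, device, `set_layout`, `set`,
 * and a command buffer `cmd` in the recording state) is already done. */
static VkPipeline record_dispatch(VkDevice device,
                                  VkDescriptorSetLayout set_layout,
                                  VkCommandBuffer cmd,
                                  const uint32_t *spirv, size_t spirv_size,
                                  VkDescriptorSet set)
{
    VkShaderModuleCreateInfo module_info = {
        .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = spirv_size,
        .pCode = spirv,
    };
    VkShaderModule module;
    vkCreateShaderModule(device, &module_info, NULL, &module);

    VkPipelineLayoutCreateInfo layout_info = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
        .setLayoutCount = 1,
        .pSetLayouts = &set_layout,
    };
    VkPipelineLayout layout;
    vkCreatePipelineLayout(device, &layout_info, NULL, &layout);

    VkComputePipelineCreateInfo pipe_info = {
        .sType = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO,
        .stage = {
            .sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
            .stage = VK_SHADER_STAGE_COMPUTE_BIT,
            .module = module,
            .pName = "main",
        },
        .layout = layout,
    };
    VkPipeline pipeline;
    vkCreateComputePipelines(device, VK_NULL_HANDLE, 1, &pipe_info, NULL,
                             &pipeline);

    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, layout,
                            0, 1, &set, 0, NULL);
    vkCmdDispatch(cmd, 64, 1, 1); /* 64 workgroups in X */
    return pipeline; /* caller still owns module/layout/pipeline cleanup */
}
```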
In hindsight the separation of GL and GLES was pointless IMHO (at least up to 2.x).
It led GPU vendors to offer only GLES drivers even when the GPU supported GL (according to both the marketing and https://github.com/ptitSeb/gl4es), and it made porting standard Linux apps to ARM platforms needlessly difficult. Note that even with gl4es you have to port the shaders to GLES manually.
> Trying to make Vulkan a single API that covers everything from low-end mobile to high-end desktop was probably a mistake; in practice it has split into wildly distinct dialects for low-end and high-end targets anyway.
Lots of people are continuing to make this mistake.
Things like "windowing init" libraries, for example, want to abstract over desktop, mobile, and web/WASM.
Sadly, the interaction mechanisms and lifetime management are sufficiently different that either you wind up with huge complications or the least common denominator is really low.
Even the way the profiles got introduced makes me think it was a joke on us.
Instead of dealing with specific API settings, like in proprietary APIs, you're expected to generate JSON configurations, parse them, and generate code that calls the profiles API to configure the desired profile.
At Vulkanised 2024 there is a one-hour-long session on how to use them.
> - how would I switch between the OpenGL and Zink implementations when running an app today?
There are environment variables to control this (with Mesa, setting MESA_LOADER_DRIVER_OVERRIDE=zink should force Zink, if I recall correctly). Or you just install one driver. Zink is usually useful when you don't have a native OpenGL driver for a specific piece of hardware: you write a Vulkan driver, and Zink then provides a GL implementation on top of it.
I'm running it on my optimus laptop now. There have been some funky bugs with mixed GPU systems but things are mostly working these days if you have a new enough kernel.
This is honestly pretty huge. I'm hoping this will seriously improve the Nvidia situation on the Linux desktop in the long run! I'll probably always need the proprietary driver for CUDA, though.
> As of today NVK is now a conformant Vulkan 1.3 implementation on Turing (RTX 2000 and GTX 1600 series), Ampere (RTX 3000 series), and Ada (RTX 4000 series) GPUs