
It can be argued that they already did. AMD and Apple worked with Khronos to build OpenCL as a general competitor. The industry didn't come together to support it though, and eventually major stakeholders abandoned it altogether. Those ~10 wasted years were spent on Nvidia's side refining their software offerings and redesigning their GPU architecture to prioritize AI performance over raster optimization. Meanwhile Apple and AMD were pulling the rope in the opposite direction, trying to optimize raster performance at all costs.

This means that Nvidia is selling a relatively unique architecture with a fully developed SDK, industry buy-in, and relevant market demand. Getting AMD up to the same spot would force them to reevaluate their priorities and would demand a clean-slate architecture to boot.




Maybe because Apple got pissed off at how Khronos took over OpenCL, and AMD and Intel never offered tooling on par with CUDA in terms of IDE integration, graphical debuggers, and library ecosystem.

Khronos also never saw the need to support a polyglot ecosystem with C++, Fortran, and anything else the industry might feel like using on a GPU.

When Khronos finally remembered to at least add C++ support and SPIR, Intel and AMD again failed to deliver, and OpenCL 3.0 is basically OpenCL 1.0 rebranded.

That was followed by the SYCL efforts, which only Intel seems to care about, with their own extensions on top via DPC++, nowadays oneAPI, and only after acquiring Codeplay, which was actually the first company to deliver on SYCL tooling.

However, contrary to AMD, at least Intel gets that unless everyone gets to play with their software stack, no one will bother to actually learn it.


Well, Apple has done nothing to replace the common standard they abandoned. They failed to develop their proprietary alternatives into a competitive position and now can't even use their own TSMC dies (imported at great expense) for training: https://www.eteknix.com/apple-set-to-invest-1-billion-in-nvi...

However you want to paint the picture today, you can't say the industry didn't try to resist CUDA. The stakeholders shot each other in a 4-way Mexican standoff, and Nvidia whistled showtunes all the way to the bank. If OpenCL had been treated with the same importance as Vulkan, we might see a very different market today.


Yes they did, it is called Metal Compute, and everyone using Apple devices has to use it.

Vulkan you say?

It is only relevant on GNU/Linux and Android because Google is pushing it, and even there most folks still keep using OpenGL ES. No one else cares about it, and it has already turned into the same spaghetti mess as OpenGL, to the point that there was a roadmap talk at Vulkanised 2025 on how to sort things out.

NVidia and AMD keep designing their cards with Microsoft for DirectX first, and Vulkan, eventually.


> NVidia and AMD keep designing their cards with Microsoft for DirectX first, and Vulkan, eventually.

Not really. For instance, NVIDIA released day-1 Vulkan extensions for their new raytracing and neural net tech (VK_NV_cluster_acceleration_structure, VK_NV_partitioned_tlas, VK_NV_cooperative_vector), as well as equivalent NVAPI extensions for DirectX 12. Equal support, although DirectX 12 is technically worse off, as you need to go through NVAPI and rely on a prerelease version of DXC, since unlike Vulkan and SPIR-V, DirectX 12 has no mechanism for vendor-specific extensions (for good or bad).
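For what it's worth, opting into one of those vendor extensions on the Vulkan side is just the standard extension query/enable dance, no special toolchain needed. A minimal C++ sketch (everything here beyond the extension name string is my own illustration, not from NVIDIA's samples) that checks whether a device advertises VK_NV_cooperative_vector:

    // Sketch: enumerate devices and check for a vendor extension.
    // Assumes a Vulkan 1.3 SDK; the header macro for this extension may be
    // missing in older SDKs, so the name is spelled out as a string.
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    static bool hasDeviceExtension(VkPhysicalDevice gpu, const char* name) {
        uint32_t count = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
        std::vector<VkExtensionProperties> props(count);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, props.data());
        for (const auto& p : props)
            if (std::strcmp(p.extensionName, name) == 0) return true;
        return false;
    }

    int main() {
        VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
        app.apiVersion = VK_API_VERSION_1_3;
        VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
        ici.pApplicationInfo = &app;
        VkInstance instance;
        if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

        uint32_t n = 0;
        vkEnumeratePhysicalDevices(instance, &n, nullptr);
        std::vector<VkPhysicalDevice> gpus(n);
        vkEnumeratePhysicalDevices(instance, &n, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceProperties dp;
            vkGetPhysicalDeviceProperties(gpu, &dp);
            std::printf("%s: VK_NV_cooperative_vector %s\n", dp.deviceName,
                        hasDeviceExtension(gpu, "VK_NV_cooperative_vector")
                            ? "supported" : "not supported");
        }
        vkDestroyInstance(instance, nullptr);
        return 0;
    }

If it is there, you add the name to ppEnabledExtensionNames at vkCreateDevice time and the rest works with released toolchains, which is the contrast with the DXC prerelease situation above.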

Meanwhile the APIs, both at the surface level and in how the driver implements them under the hood, are basically identical. So identical, in fact, that NVIDIA has the nvrhi project, which provides a thin wrapper over Vulkan/DirectX 12 so that you can run on multiple platforms via one API.


An exception that doesn't change the rule. Where are the Vulkan extensions for DirectX neural shaders and the RTX Kit?

That is just a more recent example; I don't feel like enumerating all of them since the introduction of the DirectX 8 shader model, and the collaboration with NVidia where Cg became the foundation of HLSL.

Exactly: proprietary APIs don't have extension spaghetti like Khronos APIs, which always ends up out of control, hence the Vulkan 2025 roadmap plans.

Khronos got lucky that Google and Samsung decided to embrace Vulkan as the API for Android, and that Valve did for their Steam Deck, plus IoT displays, basically.

Everywhere else it is middleware engines that support all major 3D APIs, with WebGPU also becoming middleware outside of the browser due to the ways of Vulkan.


> An exception that doesn't change the rule. Where are the Vulkan extensions for DirectX neural shaders and the RTX Kit?

DirectX "neural shaders" is literately the VK_NV_cooperative_vector extension I mentioned previously, which is actually easier to use in Vulkan at the moment since you don't need a custom prelease version of DXC. Same for all the RTX kit stuff, e.g. https://github.com/NVIDIA-RTX/RTXGI has both VK and DX12 support.


And how does that prove that NVidia did not design it together with Microsoft first, as a DirectX prototype?

Additionally, Intel and AMD will naturally come up with their own extensions, if ever, followed by a common Khronos one. And that is not counting mobile GPUs in this extension frenzy.

So then we will have the pleasure of choosing between four extensions for a feature, depending on the card's vendor, with possibly incompatible semantics, as has happened so many times.


> it is called Metal Compute, and everyone using Apple devices has to use it.

Sounds like a submarket absolutely teeming with competition. Like, you have Metal Compute, the Apple Accelerate framework, and MLX all sitting there in the same spot! Apple is really outdoing themselves, albeit in a fairly literal sense.

> It is only relevant on GNU/Linux and Android

Hmm... someone ought to remind me of the first stage of grief, I've forgotten it suddenly.



