
Can someone give me a non-graphical use case for learning Vulkan, or GPU-based programming in general? I've heard of hardware acceleration. Is it something like writing your routines against an API like Vulkan and offloading the computation to the GPU?



Massive SIMD works great on the GPU. Compute shaders are already a non-graphical use case for the GPU.

You get high bandwidth but also high latency, so it's best suited to a few large batches rather than lots of small ones.

There is also overhead in uploading data to the GPU and downloading the results back, so you need to save more compute time than you spend on transfers.
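
As a rough illustration of that trade-off, here's a minimal CUDA sketch (kernel and variable names are my own, not from any particular codebase) where the explicit host-to-device and device-to-host copies are exactly the upload/download overhead in question:

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // One thread per element: a classic embarrassingly parallel op.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);

        // Upload: this copy (and the download below) is the transfer
        // overhead described above -- the kernel has to save more time
        // than these copies cost.
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        // Download the results back to the host.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);  // expect 4.0

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }

One large launch amortizes those copies far better than many small ones, which is the "few large batches" point above.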


The use cases for GPU compute are fairly narrow. You need something that is embarrassingly parallel and deals with almost nothing but floats. You also need something isolated enough that you're OK paying the PCI-E bandwidth cost of sending the data to the GPU and receiving the results back.


> You need something that…deals with almost nothing but floats.

This hasn't been true for years. GPU integer compute is quite good these days.


GPUs will handle ints just fine, but it's not what they're best at. They're best at fp32, and depending on the GPU the gap can be substantial. The performance characteristics of integer ops are also kinda weird.


AMD GPUs actually have identical performance for int32 and fp32, except for full 32-bit integer multiplies and divisions. I think that's a big part of why cryptocurrency miners like them so much.


We can settle this by looking at the published instruction throughputs:

http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.h...

And indeed, 32-bit integer and 32-bit float instructions are essentially the same, except for multiplication, where the picture is murkier but still quite fast.

Certainly modern GPUs are fast enough at integer ops that you shouldn't just assume your problem will be slow on the GPU because it's integer-based. Bitcoin mining (as much as I hate to bring it up) is an obvious counterexample, for instance.
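
As a toy illustration of an integer-only workload, here's a hypothetical CUDA kernel (FNV-1a-style mixing, not a real mining kernel) that uses nothing but 32-bit shifts, XORs, and multiplies:

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Pure 32-bit integer work per thread: shifts, XORs, multiplies.
    // (FNV-1a-style hash; illustrative only, not an actual miner.)
    __global__ void hash_ids(const unsigned *in, unsigned *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        unsigned h = 2166136261u;  // FNV offset basis
        unsigned v = in[i];
        for (int b = 0; b < 4; ++b) {
            h ^= (v >> (b * 8)) & 0xffu;
            h *= 16777619u;        // FNV prime: a 32-bit integer multiply
        }
        out[i] = h;
    }

    int main(void) {
        const int n = 1 << 16;
        unsigned *in, *out;  // unified memory keeps the demo short
        cudaMallocManaged(&in, n * sizeof(unsigned));
        cudaMallocManaged(&out, n * sizeof(unsigned));
        for (int i = 0; i < n; ++i) in[i] = (unsigned)i;
        hash_ids<<<(n + 255) / 256, 256>>>(in, out, n);
        cudaDeviceSynchronize();
        printf("out[0] = %u\n", out[0]);
        cudaFree(in); cudaFree(out);
        return 0;
    }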


WebGL 2 now has integer types in shaders as well.


Anything that involves a massive number of independent floating-point computations can easily be offloaded to the GPU. It gets more complicated when each floating-point computation depends on the results of others, because it is not easy to tell the GPU about the computations' relationships with one another. And a massive independent workload like that cannot be done on the CPU without slowdown, which is what makes offloading it worthwhile.
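
To make the contrast concrete, here's a CUDA sketch (names illustrative) of a dependent computation: a block-wise sum reduction. Each round depends on the previous round's partial sums, so the threads have to synchronize at every step instead of running independently:

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Summing an array is a dependent computation: each round halves
    // the number of active threads and must wait for the previous
    // round's partial sums before it can proceed.
    __global__ void block_sum(const float *in, float *out, int n) {
        extern __shared__ float s[];
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        s[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride) s[tid] += s[tid + stride];
            __syncthreads();  // the dependency made explicit
        }
        if (tid == 0) out[blockIdx.x] = s[0];
    }

    int main(void) {
        const int n = 1 << 20, threads = 256;
        int blocks = (n + threads - 1) / threads;
        float *in, *partial;
        cudaMallocManaged(&in, n * sizeof(float));
        cudaMallocManaged(&partial, blocks * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = 1.0f;
        block_sum<<<blocks, threads, threads * sizeof(float)>>>(in, partial, n);
        cudaDeviceSynchronize();
        float total = 0.0f;  // final combine on the host
        for (int b = 0; b < blocks; ++b) total += partial[b];
        printf("sum = %.0f (expected %d)\n", total, n);
        cudaFree(in); cudaFree(partial);
        return 0;
    }

A purely element-wise kernel needs none of those barriers; the __syncthreads() calls are the price of the dependencies.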


This is true, but there are cases where certain very branchy/interdependent problems can be pushed to the GPU (with enough effort). The Bullet physics engine's GPU-based rigid body physics pipeline is a good example of this working out pretty well.


Yep, recent advances certainly make this easier, but if you're supporting older versions of OpenGL your options are limited.


Numerically solving PDEs at large scale.
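
For example, an explicit finite-difference timestep for the 2D heat equation maps one GPU thread to each grid point, and each step reads only the previous buffer, so all the updates within a step are independent. A sketch, with made-up grid size and coefficient:

    #include <cuda_runtime.h>

    // One explicit timestep of u_t = alpha * (u_xx + u_yy) on an
    // N x N grid, one thread per interior point. r = alpha*dt/h^2,
    // which must be <= 0.25 for the explicit scheme to be stable.
    __global__ void heat_step(const float *u, float *u_next, int N, float r) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < 1 || y < 1 || x > N - 2 || y > N - 2) return;  // boundary fixed
        int i = y * N + x;
        u_next[i] = u[i] + r * (u[i - 1] + u[i + 1]
                              + u[i - N] + u[i + N] - 4.0f * u[i]);
    }

    int main(void) {
        const int N = 512;
        float *u, *u_next;
        cudaMallocManaged(&u, N * N * sizeof(float));
        cudaMallocManaged(&u_next, N * N * sizeof(float));
        for (int i = 0; i < N * N; ++i) u[i] = u_next[i] = 0.0f;
        u[(N / 2) * N + N / 2] = 100.0f;  // a hot spot in the middle
        dim3 block(16, 16), grid((N + 15) / 16, (N + 15) / 16);
        for (int step = 0; step < 1000; ++step) {
            heat_step<<<grid, block>>>(u, u_next, N, 0.2f);
            float *tmp = u; u = u_next; u_next = tmp;  // double-buffer swap
        }
        cudaDeviceSynchronize();
        cudaFree(u); cudaFree(u_next);
        return 0;
    }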



