Can someone give me a non-graphical use case for learning Vulkan, or GPU programming in general? I've heard of hardware acceleration. Is it something like writing your routines against an API like Vulkan and offloading the computation to the GPU?
The use cases for GPU compute are fairly narrow. You need something that is embarrassingly parallel and deals with almost nothing but floats. You also need something isolated enough that you're OK with paying the PCI-E bandwidth overhead of sending the data to the GPU and receiving the results back.
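If it helps make that concrete, here's roughly what that offload pattern looks like. I'm sketching it in CUDA rather than Vulkan purely because it's shorter; the kernel, names, and sizes are all made up for illustration:

    // Minimal sketch of the offload pattern: copy data across PCI-E, run an
    // embarrassingly parallel float kernel, copy the results back.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread handles one element independently -- no communication needed.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);

        // These copies (and the one below) are the PCI-E overhead: for a
        // kernel this cheap, they can easily dominate the total runtime.
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);  // expect 3*1 + 2 = 5

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }

The kernel does so little work per byte transferred that this particular example would lose to the CPU; the pattern only pays off when the compute is heavy relative to the transfer.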
GPUs will handle ints just fine, but it's not what they're best at. They're best at fp32, and depending on the GPU the gap can be substantial. The performance characteristics of integer ops are also kinda weird.
AMD GPUs actually have identical performance for int32 and fp32, except for full 32-bit integer multiplies and divisions. I think that's a big part of why cryptocurrency miners like them so much.
And indeed, 32-bit integer and 32-bit float performance is essentially identical, except for multiplication, where it's fuzzier but still quite fast.
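If you want to see the gap (or lack of one) on your own card, a rough microbenchmark sketch looks like this (again CUDA for brevity; the loop counts and launch sizes are arbitrary, and a dependent chain like this is only a crude proxy for throughput):

    // Rough sketch: time a chain of fp32 multiply-adds against a chain of
    // int32 multiply-adds. On hardware with a full-rate int32 multiplier
    // the two come out close; elsewhere the int32 kernel lags.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void mad_fp32(float *out) {
        float v = threadIdx.x + 1.0f;
        for (int i = 0; i < 10000; i++) v = v * 1.000001f + 1.0e-7f;
        out[blockIdx.x * blockDim.x + threadIdx.x] = v;  // keep the result live
    }

    __global__ void mad_int32(int *out) {
        int v = threadIdx.x + 1;
        for (int i = 0; i < 10000; i++) v = v * 1664525 + 1013904223;  // LCG-style
        out[blockIdx.x * blockDim.x + threadIdx.x] = v;
    }

    int main() {
        const int blocks = 1024, threads = 256, n = blocks * threads;
        float *df; int *di; float ms;
        cudaMalloc(&df, n * sizeof(float));
        cudaMalloc(&di, n * sizeof(int));

        cudaEvent_t t0, t1;
        cudaEventCreate(&t0); cudaEventCreate(&t1);

        mad_fp32<<<blocks, threads>>>(df);  // warm-up launch
        cudaEventRecord(t0);
        mad_fp32<<<blocks, threads>>>(df);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        cudaEventElapsedTime(&ms, t0, t1);
        printf("fp32 mad chain: %.3f ms\n", ms);

        mad_int32<<<blocks, threads>>>(di);  // warm-up launch
        cudaEventRecord(t0);
        mad_int32<<<blocks, threads>>>(di);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        cudaEventElapsedTime(&ms, t0, t1);
        printf("int32 mad chain: %.3f ms\n", ms);

        cudaFree(df); cudaFree(di);
        return 0;
    }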
Certainly modern GPUs are fast enough at integer ops that you shouldn't just assume your problem will be slow on the GPU because it's integer-based. Bitcoin mining (as much as I hate to bring it up) is an obvious counterexample.
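For the curious, an integer-only "mining-style" workload looks something like this toy sketch. To be clear, the mix function is a murmur-style stand-in I made up, not real SHA-256, and the target and sizes are arbitrary:

    // Toy sketch: each thread hashes its own nonce with an integer mix
    // (shifts, xors, int32 multiplies -- no floats anywhere) and reports any
    // nonce whose hash clears a difficulty threshold.
    #include <cstdio>
    #include <cuda_runtime.h>

    __device__ unsigned int mix(unsigned int h) {
        h ^= h >> 16; h *= 0x85ebca6bu;
        h ^= h >> 13; h *= 0xc2b2ae35u;
        h ^= h >> 16;
        return h;
    }

    __global__ void search(unsigned int base, unsigned int target,
                           unsigned int *winner) {
        unsigned int nonce = base + blockIdx.x * blockDim.x + threadIdx.x;
        if (mix(nonce) < target)       // "difficulty": hash must be small enough
            atomicMin(winner, nonce);  // remember the lowest qualifying nonce
    }

    int main() {
        unsigned int *d_winner, h_winner = 0xffffffffu;
        cudaMalloc(&d_winner, sizeof(unsigned int));
        cudaMemcpy(d_winner, &h_winner, sizeof(unsigned int),
                   cudaMemcpyHostToDevice);

        search<<<4096, 256>>>(0, 0x00010000u, d_winner);  // scan ~1M nonces

        cudaMemcpy(&h_winner, d_winner, sizeof(unsigned int),
                   cudaMemcpyDeviceToHost);
        if (h_winner != 0xffffffffu) printf("found nonce %u\n", h_winner);
        else printf("no qualifying nonce in range\n");

        cudaFree(d_winner);
        return 0;
    }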
Anything that involves a massive number of independent floating-point computations can easily be offloaded to the GPU, and the CPU can't match it there without a slowdown. It gets more complicated when each computation depends on the results of other computations, because it's not easy to express those relationships to the GPU.
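The big exception is when the dependencies are associative (a sum, a max, a prefix): then you can restructure the serial chain into a log-depth tree that GPUs handle well, whereas a truly serial recurrence like x[i] = f(x[i-1]) has no such rewrite. A minimal block-level sum reduction sketch (CUDA for brevity; shapes and sizes are my own choices):

    // Sketch: tree reduction in shared memory. The chain of dependent adds
    // per block is log2(256) = 8 deep instead of 255 deep.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void block_sum(const float *in, float *out, int n) {
        __shared__ float s[256];
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        s[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // Each step halves the number of active threads.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (threadIdx.x < stride) s[threadIdx.x] += s[threadIdx.x + stride];
            __syncthreads();
        }
        if (threadIdx.x == 0) out[blockIdx.x] = s[0];  // one partial sum per block
    }

    int main() {
        const int n = 1 << 20, threads = 256, blocks = n / threads;
        float *din, *dout, *hin = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; i++) hin[i] = 1.0f;

        cudaMalloc(&din, n * sizeof(float));
        cudaMalloc(&dout, blocks * sizeof(float));
        cudaMemcpy(din, hin, n * sizeof(float), cudaMemcpyHostToDevice);

        block_sum<<<blocks, threads>>>(din, dout, n);

        float *hout = (float *)malloc(blocks * sizeof(float)), total = 0.0f;
        cudaMemcpy(hout, dout, blocks * sizeof(float), cudaMemcpyDeviceToHost);
        for (int b = 0; b < blocks; b++) total += hout[b];  // tiny final pass on CPU
        printf("sum = %f (expect %d)\n", total, n);

        cudaFree(din); cudaFree(dout); free(hin); free(hout);
        return 0;
    }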
This is true, but there are cases where certain very branchy/interdependent problems can be pushed to the GPU (with enough effort). The Bullet physics engine's GPU-based rigid body physics pipeline is a good example of this working out pretty well.