
I'm trying to use 16-bit floats for matrix multiplication on x86-64. I found solutions for ARM and some NVIDIA GPUs, but none for any x86-64 chips. Any pointers in this direction would be helpful.



Here is a little sample I threw together; it shows the whole cycle: converting to half-floats, converting back to floats, and performing a simple multiplication of the values:

https://godbolt.org/z/FYu_rK
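In case the link rots, here is a minimal sketch of the same idea using the F16C intrinsics (`_mm_cvtps_ph` / `_mm_cvtph_ps`), which is the usual way to convert between half and single precision on x86-64; the exact values and structure of the linked snippet may differ. Note that F16C only accelerates the conversion, so the multiply itself still happens in 32-bit floats. Compile with `-mf16c` (or `-mavx2`, which implies it on most targets).

    // Minimal sketch: round-trip float -> FP16 -> float, then multiply.
    // Requires the F16C extension (compile with -mf16c).
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[4] = {1.5f, 2.25f, 3.0f, 4.5f};
        float b[4] = {0.5f, 2.0f, 1.25f, 3.0f};

        /* Convert both inputs to packed 16-bit halves (round to nearest even). */
        __m128i ha = _mm_cvtps_ph(_mm_loadu_ps(a), _MM_FROUND_TO_NEAREST_INT);
        __m128i hb = _mm_cvtps_ph(_mm_loadu_ps(b), _MM_FROUND_TO_NEAREST_INT);

        /* Widen back to 32-bit floats and multiply element-wise. */
        __m128 prod = _mm_mul_ps(_mm_cvtph_ps(ha), _mm_cvtph_ps(hb));

        float out[4];
        _mm_storeu_ps(out, prod);
        for (int i = 0; i < 4; ++i)
            printf("%f\n", out[i]);
        return 0;
    }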

Hope it helps.


Thank you.




