I'm trying to use 16-bit floats for matrix multiplication on x86-64. I found solutions for ARM and some NVIDIA GPUs, but none for any x86-64 chips. Any pointers in this direction would be helpful.
Here is a little sample code I threw together; it shows the whole cycle, from the conversion to half-floats to the conversion back to floats, and performs a simple multiplication of the values:
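Since the original snippet didn't come through, here is a minimal sketch of what that round trip might look like using the F16C conversion intrinsics (`_mm256_cvtps_ph` / `_mm256_cvtph_ps`), which are available on most Intel CPUs since Ivy Bridge and AMD since Piledriver. Note that F16C only converts; the multiplication itself still happens in single precision (the array contents below are arbitrary placeholder values):

```c
#include <stdio.h>
#include <immintrin.h>  /* F16C intrinsics */

int main(void)
{
    /* Eight single-precision inputs (arbitrary example values). */
    float a[8] = {0.5f, 1.5f, 2.5f, 3.5f, 4.5f, 5.5f, 6.5f, 7.5f};
    float b[8] = {8.0f, 7.0f, 6.0f, 5.0f, 4.0f, 3.0f, 2.0f, 1.0f};

    /* Convert float -> half; the halves are packed into a __m128i. */
    __m128i ha = _mm256_cvtps_ph(_mm256_loadu_ps(a), _MM_FROUND_TO_NEAREST_INT);
    __m128i hb = _mm256_cvtps_ph(_mm256_loadu_ps(b), _MM_FROUND_TO_NEAREST_INT);

    /* F16C has no half-precision arithmetic, so convert back to
       float and do the multiply in single precision. */
    __m256 fa = _mm256_cvtph_ps(ha);
    __m256 fb = _mm256_cvtph_ps(hb);
    __m256 prod = _mm256_mul_ps(fa, fb);

    float out[8];
    _mm256_storeu_ps(out, prod);
    for (int i = 0; i < 8; ++i)
        printf("%f * %f = %f\n", a[i], b[i], out[i]);
    return 0;
}
```

Compile with something like `gcc -mf16c example.c`. This only saves memory bandwidth and storage; for native half-precision arithmetic on x86-64 you would need AVX-512 FP16 (Sapphire Rapids and later).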
"Based on your organization's access policies, this web site ( http://drona.csa.iisc.ernet.in/~chiru/datascience/iisclectur... ) has been blocked because it has been determined by Web Reputation Filters to be a security threat to your computer or the organization's network. This web site has been associated with malware/spyware."