
I've long held the assumption that neurons in networks are just logic functions: if the input activations are binary, you can write out a neuron's truth table by enumerating all combinations of its inputs and design a logic network that matches it 100%. Thus 1-bit 'quantization' should be enough to perfectly recreate any neural network for inference.
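A minimal sketch of the idea, assuming binary ({0, 1}) inputs and a step activation (the weights and bias here are arbitrary placeholders): enumerate every input combination, record the neuron's output, and the resulting truth table is itself a logic function that matches the neuron exactly.

```python
# Sketch: a single neuron with binary inputs computes a Boolean
# function, so its full truth table can be enumerated and replayed
# exactly by a lookup table (the simplest "logic network").
from itertools import product

weights = [0.7, -1.2, 0.4]   # arbitrary example weights
bias = -0.1

def neuron(x):
    # Step activation: fires iff the weighted sum plus bias exceeds 0.
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

# Enumerate the full truth table over all 2^3 binary inputs.
truth_table = {x: neuron(x) for x in product([0, 1], repeat=3)}

# The lookup table agrees with the neuron on every possible input,
# so for inference it is a perfect 1-bit replacement.
assert all(truth_table[x] == neuron(x) for x in product([0, 1], repeat=3))
```

Note this only holds exactly when the activations feeding the neuron are already binary; with continuous activations the truth table doesn't exist without first discretizing them.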
1-bit 'quantization' is enough to create ANY function you'd like...

See also: Hadamard transform, Walsh functions.
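To make the Walsh-function connection concrete, here's a small sketch (the majority function is just an illustrative choice): every Boolean function on n bits, written in the ±1 convention, decomposes exactly into Walsh functions (parities over input subsets), with coefficients given by its Hadamard transform.

```python
# Sketch: expand a Boolean function in the Walsh basis and verify
# the expansion reconstructs it exactly on every input.
from itertools import product

n = 3

def f(x):
    # Example Boolean function: majority of 3 bits, in +/-1 convention.
    return 1 if sum(x) >= 2 else -1

inputs = list(product([0, 1], repeat=n))

def coeff(S):
    # Walsh coefficient for index subset S: E_x[f(x) * chi_S(x)],
    # where chi_S(x) = (-1)^(sum of x_i for i in S).
    return sum(f(x) * (-1) ** sum(x[i] for i in S) for x in inputs) / len(inputs)

# Index subsets encoded as 0/1 indicator tuples.
subsets = list(product([0, 1], repeat=n))
coeffs = {S: coeff([i for i in range(n) if S[i]]) for S in subsets}

def f_hat(x):
    # Inverse transform: sum of coefficient times parity for each subset.
    return sum(c * (-1) ** sum(x[i] for i in range(n) if S[i])
               for S, c in coeffs.items())

# The Walsh basis is complete, so the reconstruction is exact.
assert all(abs(f_hat(x) - f(x)) < 1e-9 for x in inputs)
```

The same completeness argument is what makes the parent's claim work: any truth table, however it was produced, lives in the span of these ±1 basis functions.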
