I've long held the assumption that neurons in neural networks are just logic functions: if you binarize the activations, you can write out a neuron's truth table by enumerating every combination of its input activations and design a logic network that matches it exactly. Thus 1-bit 'quantization' should, in principle, be enough to perfectly recreate any neural network for inference.
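A minimal sketch of that intuition, under two assumptions not stated above (inputs already binarized to {0, 1}, and a hard step activation): such a neuron is fully described by a truth table over its 2^n input combinations, i.e. it is just a Boolean function.

```python
from itertools import product

def neuron_truth_table(weights, bias, threshold=0.0):
    """Enumerate a binarized neuron as a Boolean function.

    With inputs restricted to {0, 1} and a step activation,
    the neuron's behavior is completely captured by a truth
    table over all 2**n input combinations.
    """
    table = {}
    for bits in product((0, 1), repeat=len(weights)):
        # Weighted sum plus bias, then a hard threshold.
        pre_activation = sum(w * x for w, x in zip(weights, bits)) + bias
        table[bits] = int(pre_activation > threshold)
    return table

# Hypothetical 2-input neuron whose weights happen to realize AND:
# it fires only when both inputs are active.
table = neuron_truth_table(weights=[1.0, 1.0], bias=-1.5)
for bits, out in sorted(table.items()):
    print(bits, "->", out)
```

Whether real networks survive this treatment is another question: hidden activations are continuous, so the binarization step loses information unless the network was trained with it in mind.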