Relying on laws leaves a lot of wiggle room for bad actors, slippery slopes, and political opinion shifting over time. Laws rest on trust in institutions (do you _really_ trust large governments?).

Laws are probabilistic, whereas math and source code are deterministic. You can verify that computer code does what it says it does. Laws depend on enforcement and on complicated, human-run judicial systems to interpret and apply them, which means they can effectively change over time; the goalposts are never stationary.
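One concrete version of that verifiability is a reproducible build: if a binary you compile yourself from audited source is bit-for-bit identical to the one the vendor shipped, there was no room to slip anything in. A minimal sketch of the final comparison in Python (the file paths are hypothetical, and making the build itself deterministic is the hard part):

    import hashlib

    def sha256_of(path: str) -> str:
        # Stream the file so large binaries don't have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths: the binary we built locally from audited
    # source, and the binary the vendor actually shipped.
    local = sha256_of("build/app")
    shipped = sha256_of("dist/app")
    print("match" if local == shipped else "MISMATCH")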




I agree that laws are not enough; independent verification must be possible. But your right to use secure software, and to audit it without risking life in prison or death, is itself guaranteed by laws.

This is why moving the goalposts and further normalizing surveillance is extremely dangerous. The rights you enjoy today are not universal, and they can be eradicated in less than a generation.


Agreed! Thankfully, source code is protected by the First Amendment in the US[1]. If not for that, I don't know where we'd be.

[1]: https://www.eff.org/deeplinks/2015/04/remembering-case-estab...


There is no deterministic, technological solution to the problem that all technological solutions can be banned and their users threatened with draconian punishment.

There is no mathematical escape hatch from society. All we have is a messy assortment of technological mitigations that change the cost of surveillance.

These mitigations work best in combination with constitutional rights that limit what the government of the day, spurred by the latest outrage in the news, can do.


It isn't either-or. You can have reasonable laws that protect people's rights and privacy, and still compile and check the source yourself if you wish to do so.


Yes, and we should strive to use all tools for a defense in depth of our rights. Laws are the first line of defense, then politics/media, then technology from software to hardware, and finally trust in our fellow humans and ourselves. That way if one (temporarily) fails, we can fall back to the others while repairing the breach.


Auditing a large codebase is also probabilistic. Oversights happen, and there are ways to write code that looks like it implements the intended behavior while also doing a second, nefarious thing. See https://www.ioccc.org
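A toy version of that kind of trick, sketched in Python (the names are invented for illustration): a strcmp-style helper returns 0 on a match, so a condition that reads like "if the passwords compare" is actually true only on a mismatch.

    def compare(a: str, b: str) -> int:
        # strcmp convention: 0 when equal, nonzero otherwise.
        return (a > b) - (a < b)

    def login(supplied: str, stored: str) -> None:
        # Reads naturally as "grant access if the passwords compare",
        # but 0 means equal, so any *wrong* password is truthy here.
        if compare(supplied, stored):
            print("access granted")
        else:
            print("access denied")

    login("not-the-password", "hunter2")  # access granted
    login("hunter2", "hunter2")           # access denied

A reviewer who skims past the return convention sees nothing wrong, which is exactly the point about audits being probabilistic.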


The backdoor could be in the hardware. The logical conclusion of your position is that we should all fabricate our own computers from scratch so that we can be sure they're secure.

This is clearly a straw man; no one wants to do this, or is suggesting that we do it. But at some point, even the most hardened OSS advocate has to trust someone (usually the hardware manufacturer). You cannot verify that the device you're on doesn't spy on you; you have to rely on the manufacturer's word that it doesn't. And on the manufacturer's suppliers, of course, because the manufacturer is trusting them.

Somewhere along the stack, we all have to draw a line and say "beyond this point, I trust that I am not being spied on". You choose to draw that line at the hardware point. Others choose to draw the line at the software point.



