The Chinese experimented with strict enforcement in the Qin Dynasty, following the philosophy of Han Feizi (Legalism). It sounds like it was a big failure. I'm sure there have been other such experiments. The Law is not a computer program.
The Law is full of legacy cruft, and is based on philosophies that largely predate empiricism. Has there ever been an attempt to design a legal system based on modern engineering principles? Laws are as much about systems as about people. When you want to understand people you might rely on a politician, but to understand systems you should rely on engineers.
The engineers who designed the RBMK reactor used at Chernobyl, the ones who designed the USS Thresher, which was lost with all hands after a combined failure of its emergency ballast tank blow and steam propulsion systems, or the ones who designed the space shuttles Challenger and Columbia?
This is not to say that we can't improve the legal system. But if engineering has taught me anything, it's that it will always be prudent to design a relief/safety valve into the system.
Chernobyl, Challenger, and Columbia are examples of systems where engineers were constrained by politicians.
I hadn't heard of the USS Thresher before today, but I did just read a bit of the Wikipedia article.
When a disaster occurs in engineering, the lessons learned are applied to the next design. When a disaster occurs in politics, the other party gets its turn to cause the next disaster.
Engineering isn't engineering without constraints... it's just spending an unbounded amount of time and money on a thing (or science, in some terms). Engineers are obligated not to work on a project which cannot be safely completed within guidelines. You can't push that obligation off onto someone else: if you design a building and that building falls over, you need to justify why. You can't say they asked you to make it cheaper; you either find a way to make it safe and cheaper, or you don't do it.
This isn't to say that nobody makes mistakes, but it's worth remembering that engineering is applied science. It's not practiced in a vacuum; it takes economic and political influences into account and balances them against public well-being. The buck stops with engineers.
Of course everything has constraints, and a social system will have political constraints.
Yet those systems had "accidents" only after politicians overrode the engineers and acted against their recommendations. Using them as evidence that engineers create systems as flawed as the ones created by politicians is wrong.
Anyway, it seems that we still can't engineer social systems. It may be because the political interference inherent in them makes it impossible, or because we just don't have the right technology... Or maybe we can, and PRISM is the kind of tool that makes it possible. Engineers create all kinds of systems, for all kinds of reasons, well intentioned or not.
My point is that engineers can't fall back on complaining about being 'overridden'. Politicians don't act; they don't build space shuttles, they don't design nuclear reactors. They give a high-level set of orders which engineers either fulfill safely, or not at all. As an example:
In the case of Chernobyl, the problem was actually inexperienced plant operators running a disaster-recovery experiment on a reactor with a positive void coefficient and poorly designed control rods. The positive void coefficient was a limitation of our nuclear design capabilities at the time; the control rods were simply poorly designed by some engineer. Fixing either of those factors could have prevented the disaster, and neither of them can be attributed to any politician.
>When a disaster occurs in engineering, the lessons learned are applied to the next design.
And how does that work for software engineering thus far?
What we know of it is that errors tend to repeat themselves, and little is learned from design to design, especially across projects and teams.
Software engineering is much more like law than civil engineering.
So it's quite invalid to compare law (which deals with people, perceptions, norms, ever-shifting societies and situations, intentions, and other delicate matters) with some well-known ways to build things that have very tangible, hard physical constraints and behavior.
>And how does that work for software engineering thus far?
Quite well, in the long run. Ten years ago we had mountains of buffer overflow-ridden web software written in C++ being replaced by SQL-injection-vulnerable Perl and PHP. Twenty-five years ago the Morris worm easily found its way onto thousands of systems.
Now, tools like Valgrind help detect buffer overflows and other memory access problems, ORMs help prevent SQL injection, stringent standards are used for development of safety-critical systems (e.g. MISRA-C), and automated testing is much more widespread. Naive software developers still make naive mistakes, but as a whole, the industry is much better protected against known mistakes.
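To make that concrete, here's a minimal sketch (my own illustration, not anything from the posts above) in Python, using the standard-library sqlite3 module, of the difference between splicing user input into SQL text, the classic Perl/PHP-era mistake, and the parameterized queries that ORMs and modern database APIs push you toward:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "nobody' OR '1'='1"  # attacker-controlled string

    # Vulnerable: the input is spliced into the SQL text, so the attacker's
    # quote characters change the structure of the query itself.
    vulnerable = "SELECT * FROM users WHERE name = '%s'" % user_input
    print(conn.execute(vulnerable).fetchall())  # returns every row

    # Safer: a parameterized query keeps data separate from SQL structure,
    # which is roughly what an ORM does for you under the hood.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing

The lesson got encoded into the tools, which is exactly the "applied to the next design" pattern being argued about above.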
This. I've been thinking a lot about this (not particularly hard, admittedly) lately. I suppose the main problem would be that those in power, i.e. politicians, would mostly be out of a job if such a task were undertaken.