I agree, but when you do that, you need to actually make an accounting of the costs and benefits. If you look among programmers and security specialists on, say, HN, there is not even a debate or discussion about this, but rather an absolutist position: that this is good, and that the only reason to think otherwise is that you are a totalitarian government wanting to oppress your people.
I think you're conflating two different positions, which admittedly co-occur in many people: 1. The technical position, that any such access mechanism is necessarily a backdoor, with all that implies, and is thus to be eschewed on "fundamental principles of good security" grounds; and 2. The moral position, that any such backdoor is a crime against humanity, or whatever, because some of the people with the technical capability will leverage it to oppress, and all of them will use it to act contrary to the user's interests.
Who do our tools serve? Is it just that they should be made to serve someone else, against us? Where, exactly, is the line on one side of which it's justified, but on the other it's abuse? How do you build a system that prevents abusive uses, but allows appropriate ones?
Decrying absolutist positions is all well and good, but it is a nigh-on tautology-level truth that a system with a flaw or backdoor will be exploited — usually in multiple ways, and well beyond any originally intended use.
Yeah yeah, but this all ultimately comes down to whether you think government interception of private messages is better than the messages staying private. People here obviously don't feel the same way you do about that.
Myself, I'm not quite so convinced as you seem to be of the harm of "absolutist" positions. Maybe the truth isn't always somewhere in the middle.