> there's absolutely no reason why consumer devices should be burdened with these backdoors.
Beyond the obvious economy-of-scale argument, why don’t consumers want that? I thought things like Apple’s remote lock/wipe feature were selling points for anyone concerned with theft.
(As an aside, it doesn’t feel accurate to term this a backdoor without evidence that it’s being used without the owner’s consent)
The article mentions they've already found a remotely exploitable security hole. Backdoor or not, nobody wants that.
We would all be fine with the IME if we had access to it, could define the keys, and could enable or disable its features. Locking users out of their own hardware is not acceptable. I'm the only one who should be able to remote-lock or remote-wipe my phone; it should not be possible for Apple, and they shouldn't have the keys. It's my phone. Even if I did trust Apple, I don't trust the government.
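To make the "owner holds the keys" model concrete, here's a rough sketch using the third-party `cryptography` package. The command format and the `authorize_wipe` helper are made up for illustration; the point is that only the holder of the private key can produce a command the device will accept:

```python
# Sketch: the owner holds the private key; the vendor never does,
# so the vendor cannot issue a valid wipe command on its own.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generated once by the owner; the private half never leaves their control.
owner_private = Ed25519PrivateKey.generate()
owner_public = owner_private.public_key()  # provisioned onto the device

def authorize_wipe(public_key, command: bytes, signature: bytes) -> bool:
    """Device-side check: act only on commands signed by the owner's key."""
    try:
        public_key.verify(signature, command)
        return True
    except InvalidSignature:
        return False

# Hypothetical command format; a real protocol would also need
# replay protection (hence the nonce) and secure key provisioning.
command = b"WIPE:device-1234:nonce-8f3a"
signature = owner_private.sign(command)
assert authorize_wipe(owner_public, command, signature)
assert not authorize_wipe(owner_public, b"WIPE:device-1234:nonce-9999", signature)
```

Of course, this is exactly the key-management burden the reply below points at: lose the private key and you lose the feature.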
Look, I’d also like this to be open and documented, but hyperbole isn’t helping that. If it’s undocumented, say that rather than pretending anyone is locked out of their own hardware; nobody has lost the use of their device due to the IME. Similarly, calling it a backdoor and implying that the NSA is behind it without any proof is only going to reflect badly on the person making those claims.
The other thing to remember for effective advocacy is to think about what normal people experience. If you say Apple shouldn’t have the keys, you’re also saying the average person has to be good at key management; most people are happy to outsource that. Conflating the issue of mass surveillance with the issue of control over your own hardware is great if your goal is confusion, but I don’t see it producing results.
Whether it is actually being used is irrelevant. Security is about confidence that unauthorized access is impossible, not about whether unauthorized access is happening right now. An unlocked door is not secure against unauthorized entry merely because no one has opened it yet; what makes it insecure is the fact that anyone who wants to can walk through it whenever they want.
It is not even relevant whether the door was left unlocked intentionally. An unlocked door is a security problem whether it was left that way intentionally or through negligence.
That’s not what we’re talking about, though, is it? Unless the claim is that vendors are shipping the IME enabled with secret credentials, it’s not a backdoor.
A backdoor does not necessarily have to be in the form of a door; a wall made from cardboard will do just fine as a backdoor.
Putting a complex piece of software that you cannot disable, that is potentially reachable from the network, and that won't be updated into every machine is the software equivalent of building a safe with a cardboard wall: even if it's not intended as a backdoor, it still has to be treated as a de facto backdoor for security-planning purposes, and whoever uses cardboard to construct a safe's wall is to blame.
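To make "potentially reachable from the network" concrete: Intel AMT, the remotely manageable part of the ME, listens on well-known TCP ports, including 16992 (HTTP) and 16993 (HTTPS). Here's a rough sketch of checking whether a host exposes them; the host address is a placeholder, and an open port only means the interface is reachable, not that it is exploitable:

```python
# Sketch: probe a host for Intel AMT's well-known management ports.
# An open port here only means the management interface is reachable
# over the network, not that it is vulnerable.
import socket

AMT_PORTS = {16992: "AMT HTTP", 16993: "AMT HTTPS"}

def probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "192.0.2.1"  # placeholder (TEST-NET address), not a real target
for port, name in AMT_PORTS.items():
    state = "open" if probe(host, port) else "closed/filtered"
    print(f"{host}:{port} ({name}): {state}")
```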
Also, you do not judge security based on what has already happened, nor on what you can prove to be insecure. The default assumption is that things are insecure unless you can demonstrate good reasons to believe they are not, just as everywhere else in reliability engineering. A bridge is not assumed to be safe to use until it collapses or is demonstrated to be unsafe; a bridge is assumed to be unsafe to use until it is demonstrated that, to the best of our current understanding of how to build reliable bridges, there is no reason to expect failure. A bridge whose builder keeps its construction secret is never considered safe.
Imagine your PC vendor shipped Windows with Remote Desktop enabled, you couldn't disable it, and you couldn't see which local accounts were in the Remote Desktop Users group or whether they were administrators. Now imagine it being far worse than even that, because the computer doesn't have to be powered on or connected to a LAN. Imagine if Dell shipped LogMeIn plus Wake-on-LAN, but permanently. How are these not security vulnerabilities?
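For contrast, on an ordinary Windows machine you can at least enumerate that group yourself; the ME offers no equivalent visibility. A rough sketch, assuming a Windows host where the built-in `net` command is available:

```python
# Sketch: list members of the "Remote Desktop Users" group on Windows.
# Contrast with the ME, which offers no equivalent visibility.
import subprocess

result = subprocess.run(
    ["net", "localgroup", "Remote Desktop Users"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print(result.stdout)  # group description plus member list
else:
    print("Could not query group:", result.stderr.strip())
```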
They're backdoors because they allow for remote access[1] and we (as in the users who ostensibly "own" our own devices) can't turn them off without great difficulty.
Please read the piece linked above on Computrace. I am not very familiar with the inner workings of Find My Mac and other Apple ID-associated features, but I'm going to go out on a limb and say that those features (1) don't involve injecting code into system processes in the manner of a BIOS-level rootkit, and (2) aren't pre-activated without the owner's knowledge or consent, so the attack surface they create is likely much smaller than that of something like Computrace.
Also, if you like, please elaborate on why the economy of scale argument is "obvious".
Edit: Management coprocessors are also rightly called backdoors because they operate below ring 0, meaning that if we cannot trust them (and we can't, because they are largely black boxes to us), then we cannot trust our systems as a whole.