(disclosure: I am a Mozilla employee but not commenting in any official capacity)
"Give me control over what code I run on my computer" (meaning "provide a switch to disable the requirement that extensions be signed") keeps coming up over and over. And perhaps it hasn't been clearly stated, but the problem is this: if there's a switch that a user can flip, the browser has to record the state of that switch somewhere (presumably on disk). If such a switch becomes available, we'll quickly be flooded with malware that flips it without users' consent. At that point, there's no way to tell the difference between savvy users making an informed choice to enable unsigned extensions and malware doing it behind their backs. The browser can do various things to obscure the way that setting is stored, but ultimately any method the browser uses to read and write the state of that switch is something that other software can easily mimic.
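To make the on-disk problem concrete: Firefox stores preferences as plain `user_pref(...)` lines in a profile's `prefs.js`, and any program running with the user's file permissions can rewrite that file. The following is an illustrative sketch only (the scratch filename and the helper function are invented for the example; a real attack would locate the user's actual profile directory), but it shows why nothing distinguishes the browser writing such a switch from any other process doing so:

```python
from pathlib import Path

def flip_pref(prefs_path: Path, name: str, value: str) -> None:
    """Rewrite a Firefox-style prefs.js, forcing one pref to a value.

    Nothing here requires being Firefox: any process with the user's
    file permissions can do this, which is why an on-disk switch
    cannot prove genuine user intent.
    """
    line = f'user_pref("{name}", {value});\n'
    lines = []
    if prefs_path.exists():
        # Drop any existing line for this pref, then append ours.
        lines = [l for l in prefs_path.read_text().splitlines(keepends=True)
                 if f'"{name}"' not in l]
    lines.append(line)
    prefs_path.write_text("".join(lines))

# Demonstration against a scratch file in the current directory:
scratch = Path("prefs.js")
scratch.write_text('user_pref("xpinstall.signatures.required", true);\n')
flip_pref(scratch, "xpinstall.signatures.required", "false")
print(scratch.read_text())
```

The point of the sketch is not the specific pref name but the mechanism: whatever encoding or obfuscation the browser applies when writing the switch can be replicated by any other software on the machine.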
This is not a theoretical concern: a modern web browser is an irresistible target for all sorts of get-rich-quick scammers. If you don't experience this day-to-day, it's due in no small part to the fact that browser vendors, among others, are constantly working to keep the bad guys at bay. But make no mistake: the bad guys are out there, and they quickly find and exploit any opportunity available to them.
As for how to let users disable signing while ensuring they have made a conscious decision to do so, there is a stark tradeoff here: giving the most savvy users that switch necessarily makes other users less safe. The solution Firefox has opted for is to handle this tradeoff differently on different channels. The release channel (aka the stable channel, the thing you get by default when you download Firefox) is intended for a very wide audience, so it favors safety for all users regardless of their level of technical knowledge. The Developer Edition and Nightly channels are intended for more technically savvy users and handle the tradeoff differently: they do provide a switch for disabling extension signing.
If there are other (practical and effective) ways to solve this problem of determining true user intent, I (and I'm sure many, many others) would be very interested in hearing about them. In the meantime, using the mass-market versus developer-focused channels as a signal for users' preferences on the risk-configurability continuum seems like a reasonable way to handle this.
a), b) and c) are the exact opposite of what open source software is meant to stand for. Firefox is slowly losing its unique position as an amazing open source browser in exchange for what seems to me a negligible increase in user security. To my mind, Mozilla is wasting time micromanaging user risk instead of actually innovating.
To put it another way: every time I go out biking, I can get hit by a car. It is a known and well-understood risk, one that I have to consider whenever I make a turn. However, riding a bike also gives me the chance to go faster, meet new people, and so on. Should Firefox aim to reduce my risk of being hit by a car? No, because I get to choose the level of risk in my life, not Mozilla.
Imho, making it available via an about:config switch would be entirely sufficient. There are dozens of settings that already affect security: SSL handling, Safe Browsing, first-party isolation, tracking protection.
But where is the evidence that malware has ever switched off Safe Browsing, for example?
Your entire case for extension signing and AMO store moderation rests on the premise that this actually helps keep extensions safe, but then you say nothing is safe.
There is only one gateway for malware to change about:config settings in the first place, and that is through your signed extension process.
How safe should things be?
Edit: Maybe you could allow disabling the signing requirement via enterprise policies, on the condition that the about:config settings are locked, which in my understanding would make it basically impossible for extensions to change anything. Would that help make it more secure?
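As a rough sketch of what that proposal might look like: Firefox's enterprise policy engine reads a `policies.json` file from the installation directory, and its `Preferences` policy can set a pref and mark it `"Status": "locked"`. Whether the policy engine would actually honor a signing-related pref this way is exactly the hypothetical being proposed here, so treat the pref name below as illustrative rather than as a supported configuration:

```json
{
  "policies": {
    "Preferences": {
      "xpinstall.signatures.required": {
        "Value": false,
        "Status": "locked"
      }
    }
  }
}
```

Because policy files typically sit in a location writable only by an administrator, this would at least raise the bar compared with a switch stored in the user-writable profile, though software running with admin rights could still modify it.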