> they're trying to make it hard for malware overall to integrate with Chrome
That's a reasonable argument, and you're probably right about their motivations. But I'm not convinced that's a realistic goal, because the definition of malware/spyware changes depending on the context/user.
The big reason moderation doesn't scale is that you're forced to balance everybody's needs at the same time -- you can't optimize for any particular user. If the end consequence of an exclusive web store is that it's much harder for the Chrome team to ban shifty apps without everyone on Twitter demanding a bullet-pointed list explaining why, then the Chrome team isn't really making the world that much safer.
In general, I would advocate that it's better to try and build safe spaces rather than safe worlds. That's kind of a pragmatic philosophy: I'm having a hard time thinking of an existing safe world that I think runs well. All of the major app stores (including Apple's) have malware problems to at least a certain degree. Most giant social networks are not doing a good job of moderating content. Package managers for languages like Node and Ruby are running into the same issues.
Maybe the web itself? But the web doesn't get its safety from moderation, it gets its safety from sandboxing.
If I'm thinking purely as a consumer, what I really want is an extension store where I know 100% that everything on it is fine. I don't want to have to think or read reviews or look up the author before I install an extension. I want it to be clear when I'm being safe and when I'm doing something dangerous. I suspect that's what a lot of consumers want, and I just don't see any realistic path for Chrome to provide that with their current strategy.
I get that "somebody might choose to leave the safe space and install malware anyway" feels bad, but if the consequence of avoiding that is, "everybody gets kind of substandard protection all the time", maybe it's worth questioning whether Chrome's malware goals are worth pursuing in the first place.