
As an extension developer who recently got one of these offers, your analysis is a good verbalization of my gut feeling on the matter. I turned it down, of course.

While users may feel bad about extension developers not being compensated, I feel like the larger story here is that:

- Companies are buying extensions for nefarious purposes, which presents a huge security risk.

- Evidently, app stores are sufficiently bad at detecting this that it remains profitable.




Yes, agreed. Detecting extensions that have changed hands to someone who wants to "monetize" them, and preventing those updates from reaching users, definitely seems like something the browser manufacturers should be doing (even if, and perhaps especially if, it lowers the apparent market value of extensions). The onus is on browser manufacturers and extension store operators to do so.


I'm not sure how one would automatically detect a browser extension changing hands. If you require signing extensions to make a release, there's nothing stopping a developer from selling their keys. And even without a change of hands, the threat to users remains the same if the company just goes from "we'll give you $10k for your extension" to "we'll give you $10k to link this library into your extension and not ask what it does." (One might counter that an ethical developer could accept the former while rejecting the latter, but it's hard to believe any developer willing to sell their extension in the first place isn't fully aware of what that means for the future of their users.)


As others mentioned, I'm primarily interested in noticing the extension gaining malicious code, such as often occurs in the wake of a transfer, not noticing the transfer itself. (Which would also capture the case of extension not actually changing hands but nonetheless shipping malicious code - including both your example as well as a targeted attack on an ethical developer.)

Also, I think it's reasonable for browsers to require complete auditable source (even if some obfuscation happens before it reaches users), which would probably have a deterrent effect on weakly-ethical developers: it's harder to ship code you can plainly see is malicious than to just sell the extension and wash your hands of it. (There are enough stories of founders who care about their companies selling startups to acquirers who don't that I think there is something in human nature that makes it easier to hand off your creation to someone who will do bad things with it than to do the same bad things yourself.)


It would not be about detecting the change of hands but about detecting the (malicious) monetization. Depending on the type of extension this might still be hard to do, but I could well imagine heuristics triggering a manual check from the extension store providers.


With many extensions having hundreds of thousands of lines of minified code, a manual check is unlikely to find anything nefarious if it's been well hidden.


IMO, extension authors should not be allowed to submit minified code alone to the extension stores for review. (Options include having standard minifiers that the extension stores run themselves, allowing extension authors to provide a .travis.yml or something where the sources to the build pipeline are themselves auditable, etc.) I can see an argument for withholding source from end users for things like paid extensions or clients to proprietary services (to be clear, I wouldn't agree with such arguments but I can see them), but I don't see the argument for withholding source from the browser manufacturers themselves.

Put another way, if the browser manufacturers run an extension store, they endorse (or at least ought to endorse) the extensions in that store as reasonable to install. I don't see how they can do that without source. I think they could sort of make that endorsement without proactive auditing if they can remove extensions they discover are malicious, but if they can't even human-audit the source in response to a report of problems, I don't think there's any way they can responsibly offer extensions to their users.


IIRC Mozilla requires that you submit the unminified code alongside the minified code, plus instructions for how you compiled it, so they can reproduce the build during review. And they do human reviews using volunteers.

Mozilla does a lot to protect users from malicious extensions, thousands of times more than Google.
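That reproducibility requirement boils down to: the reviewer rebuilds the bundle from the submitted sources and checks that it is byte-for-byte identical to what the developer uploaded. A minimal Python sketch of the comparison step (the function names are mine, not from any store's actual tooling):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(submitted_bundle: str, rebuilt_bundle: str) -> bool:
    """True if the developer-submitted minified bundle is byte-identical
    to the bundle the reviewer rebuilt from the submitted sources."""
    return sha256_of(submitted_bundle) == sha256_of(rebuilt_bundle)
```

Any difference, even one byte, means the submitted sources do not actually produce the shipped bundle and the review should fail.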


Minified code is easy to beautify. What matters more is what side effects the extension has.


Just never allow such a transfer: force the buyer to create a new name. Ensure this is part of the terms of service. Prosecute violators, that is to say both buyer and developer, under the CFAA, and ensure a hefty fine is levied along with prison time. Nefarious parties will find few sellers thereafter.


You forgot to mention a small detail: force every other country in the world to enforce your ridiculous laws.


Thanks to the movie industry, this is mostly a solved problem.


Show whether extensions are certified by countries with similarly ridiculous laws, and wait for that to become a requirement after the first 100 million people get pwned.


As long as an extension can be owned by a company, you can bypass restrictions on transferring the extension by transferring ownership of the company.

Unless you think plugins for HTTPS Everywhere, Google Translate, and LastPass should be signed by an individual developer - in which case you'll have to solve the problem of what to do when that developer moves between jobs :)


The point is not whether it's possible to transfer ownership of the code; it's whether it's possible to transfer it in a way that causes a user who agreed to trust Bob or Bobco Inc to automatically end up trusting Crook or Crookco Inc.

A buyer, whether a company or an individual, ought to have to create a different name and convince users to install and trust their extension when ownership is transferred, instead of merely taking control of the existing name and having the next automatic update install malware.

Extension systems and language-specific package managers on the whole have garbage security and are going to be a much bigger problem in the future. They are low-hanging fruit.


Right, but if a user has trusted an extension by Bobco Inc, and Bobco becomes a wholly-owned subsidiary of Crookco Inc, then Crookco can put what they like into the trusted extension.

I suppose you could argue that when Facebook buys Instagram the Instagram app should be uninstalled from users' phones and all their accounts and posts deleted, because they didn't agree to trust Facebook or consent to the data being shared with Facebook. There's a certain logic to that, but it would be a big change to how the current tech ecosystem works.


This should trigger a requirement for a new install to a new name the same as selling the extension to Crookco.


Indeed.

It's a superficially attractive answer, but the reality is that it's unenforceable.

Nation states get unstuck here on taxation, and bluntly, it is unlikely any browser vendor will ever invest as heavily in enforcement as nation states do in collecting revenue.


If you can't enforce the rules on an extension/library developer because they are based in a non-compliant nation, just kick them off the market.

To be blunt, most software currently in use in the US/Europe already comes from the US/Europe.


\s

Right, that solved pirating, torrenting, and hacking.

None of the above happens on the internet any more since the CFAA was passed.

Shady people will do shady stuff, no matter what the law says. And you will never catch all, or even most, of them.

So if it's all the same, it would be better if we came up with a solution that prevents abuse from happening in the first place.


The point is not to stop everyone on earth from behaving in a shady fashion; it's to make it systematically hard for the normal behavior of non-malicious people to enable malicious people.

If you can buy or take over responsibility for extension foo and automatically exploit thousands of people who get updated to foo 1.1, now with 1000% more malware, then it will happen.

If I have me/foo and you want to take over maintaining it, you should have to create you/foo and convince people to remove the working me/foo and switch to your you/foo. This is a small burden for legitimate users but a much higher bar for scummy companies, whose accounts will doubtless have either a bad history or, far more likely, no history at all to lean on.

Further, the time required to convince users to switch is much longer, leaving more time for something malicious to be discovered before users are affected.

Best of all, this whole process makes exploiting users much less valuable, so you get less of it to start with.


You held the développer legally liable of abuses with thé extension if he did not notify mozilla or Google that he sold it


The developer is the company, not the individual.

Holding an individual responsible for the company is simply not plausible in most jurisdictions for anything short of actual criminal actions.


Legally, companies are persons too, and the same sorts of responsibility and accountability sanctions can be applied to both.


Whether or not it has changed hands is irrelevant: there are valid use cases for ownership change, and there are many ways to "monetize" via a malicious partner without selling the extension itself. What should be monitored are changes in the extension codebase that inject third-party scripts and/or modify the list of external endpoints the extension exchanges data with.
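The endpoint-monitoring idea can be sketched cheaply: extract the set of domains appearing in http(s) URLs in each version's source and flag anything new between releases. A toy Python illustration (regex-based, so it misses dynamically constructed URLs; all names here are hypothetical, not any store's real pipeline):

```python
import re

# Captures the host portion of literal http(s) URLs in source text.
URL_RE = re.compile(r'https?://([A-Za-z0-9.-]+)')

def external_domains(source: str) -> set:
    """Collect every domain that appears in a literal http(s) URL."""
    return set(URL_RE.findall(source))

def new_endpoints(old_source: str, new_source: str) -> set:
    """Domains contacted by the new version but not the old one --
    a cheap signal that an update started talking to new servers."""
    return external_domains(new_source) - external_domains(old_source)
```

A store could run this on every update and route any version that grows new domains to a human reviewer, rather than trying to detect the ownership transfer itself.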


App stores are toxic wastelands. There are on the order of 1,000 useful, legitimate apps, yet the store is packed with 2,000,000 (i.e. 99.9% spyware).


This should not apply to AMO, at least not to the same extent per https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/AMO....


> Evidently, app stores are sufficiently bad at detecting this that it remains profitable.

What percent of extensions connect/download from the internet? That could be an easy way to identify a much smaller group of vulnerable extensions. Make extensions request permission to connect, then you can see the smaller group and watch any new extensions that request the permission. Connections could also be domain restricted to a list defined before distribution (and IP connections banned.)

Also, Mozilla could force extensions to make requests through proxies it controls... although that would be a whole other issue.
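The domain-restriction idea might look like this as a store- or browser-side check: a request is allowed only if its host matches a domain declared before distribution, and raw-IP hosts are refused outright. A rough Python sketch (my own naming, not any real browser API):

```python
import re
from urllib.parse import urlparse

def allowed(url: str, allowlist: set) -> bool:
    """Permit a request only if its host is on the extension's
    pre-declared domain list (or a subdomain of one); raw IPv4
    hosts are rejected outright, per the 'IP connections banned' idea."""
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"[\d.]+", host):  # crude literal-IPv4 check
        return False
    return host in allowlist or any(host.endswith("." + d) for d in allowlist)
```

Anything outside the declared list would then be blocked at runtime, making "phone home to a new server" updates loud instead of silent.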


The relevant extensions here (adblockers like uBlock vs. uBlock Origin, the author's extensions like adding a search-by-image button and removing +1s from GitHub) all have the ability to modify the web page that they're running on. If you have that ability, you therefore have the indirect ability to perform network access as that web page. Blocking the extension's own network access won't identify malicious behavior, and blocking indirect network access will get in the way of productive extensions.

Yes, you can imagine a way where extensions provide some declarative input to the browser saying how to block certain elements, and don't have any connection to the web page. That's more or less what iOS ad blockers do, as well as Chrome's new proposed ad blocker approach, and extension authors generally dislike it. And that only helps you remove elements, not add them - things like night mode extensions generally want to add either elements or at least CSS to the browser, and if you have that much access, you can definitely add ads or inject referral links into shopping sites.
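For reference, a declarative blocking rule of the kind described above looks roughly like this under Chrome's proposed declarativeNetRequest API (the domain is illustrative):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

The browser evaluates the rule itself; the extension never sees page content or network traffic, which is exactly why this model can only remove things, not add them.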


This is a fundamental problem with all self-maintaining code.



