pnpm: Fast, disk space efficient package manager for JavaScript (pnpm.io)
206 points by modinfo on April 5, 2022 | 168 comments



I'm pretty immune to most JS ecosystem churn, but on package managers I'm feeling it.

All I want is a package manager that is correct, causes very few day to day issues and is going to work the same way for many years. Yet everyone seems to be optimizing disk usage and speed without even getting the basics (make it work, make it right) fully covered.

I don't understand why people are optimizing for disk space at all tbh. Like, have you ever edited a video project, used docker, installed Xcode? I cannot imagine what you must be doing for all node_modules combined to take up more than maybe 100 GB on disk.

pnpm seems to be the lightest of the bunch, which is nice but why even mess with symlinks and confuse other tools? Just put all the files in there, nest them and duplicate them and everything. I'll happily live with my 10 GB node_modules folder that never breaks and sometimes gives me a nice coffee break.

Possibly I'm actually just salty that Metro doesn't support symlinks and would otherwise be on the pnpm love-train.


NPM just has too much institutional inertia to avoid. The moment you make the decision to use something else, you are simply trading one set of warts for another. I can't even tell you how many projects I have seen waste countless hours of dev time on Yarn/NPM discrepancies. If you are working on anything with more than two people, you really need to just use the standard tooling that everyone is familiar with and that the entire ecosystem is based around. Anything else is yak shaving.


I have custom bash scripts named npm and yarn; they invoke pnpm for installing and uninstalling packages, and fall back to the real commands for everything else (e.g. audit).

This works well with other tools (e.g. I can force create-react-app to install packages with pnpm).
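
Roughly like this (a minimal sketch, not the exact script; the command mappings and the real npm path are assumptions):

  #!/usr/bin/env bash
  # Saved as "npm" (and similarly "yarn") earlier in PATH than the real binary.
  cmd="${1:-}"
  [ "$#" -gt 0 ] && shift
  case "$cmd" in
    install|i)
      # bare install: use the lockfile; with args: add the packages
      if [ "$#" -eq 0 ]; then exec pnpm install; else exec pnpm add "$@"; fi
      ;;
    uninstall|remove|rm|un)
      exec pnpm remove "$@"
      ;;
    *)
      # fall back to the real npm for everything else (e.g. audit)
      exec /usr/bin/npm "$cmd" "$@"
      ;;
  esac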


There's an npm package "narn" that uses commands akin to yarn's but will automatically use whatever package manager the current folder(/parent) is using. I almost never type npm or yarn or pnpm, just narn everywhere. Really handy.


I added the script to PATH, so other tools that are hard-coded to use yarn or npm (e.g. create-react-app) are hijacked to use pnpm when installing packages


I went back to composer for a bit recently (PHP), and I was baffled when my install command did nothing other than install exactly the packages I’d specified. When I ran update, it didn’t modify any of my files, but went to the latest version matching the restrictions I’d specified in composer.json.

Such a breath of fresh air…


very much the opposite of my experience. i'd gladly have 30 copies of left-pad living in my project rent free if it meant i never had to see "Your requirements could not be resolved to an installable set of packages" ever again.


Until it breaks on a fresh install? Reproducibility is a must.


PNPM’s primary feature, even if it gets lost in all the optimization, is make it work, make it right. With an emphasis on the former serving the latter as the goal. FS links are an implementation detail. The thing it gets right is resolving only the dependencies you specify, unless you very clearly say otherwise. The way it does so just also happens to be efficient. If it’s confusing any other tools, they’re doing something wrong too. Links aren’t some wild new concept.


I'm not sure if this is the _main_ reason, but one thing that makes node_modules size more than an aesthetic concern is serverless.

Booting a serverless function with hundreds of MBs of modules takes appreciable time, to the point where some folks webpack-bundle their serverless code
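
A minimal config for that kind of server-side bundling might look like this (a sketch; the entry path is made up):

  // webpack.config.js
  // target: 'node' keeps Node built-ins (fs, path, ...) external and
  // emits a single file, so the deployed artifact needs no node_modules.
  module.exports = {
    entry: './src/handler.js',
    target: 'node',
    mode: 'production',
    output: { filename: 'handler.js' },
  };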


Are people actually downloading dependencies on the fly like this? IMO a bundler is absolutely essential. What if npm is down? What if the version of a dependency has been deleted for some reason? These are surprises you absolutely do not want when a function is starting.


You don’t download the modules when your function starts. You include 500 MB of dependencies with your 50 KB lambda function.

The difference in boot time between 50 KB and 500 MB is quite significant.


Or left-pad, or that other one. The JS ecosystem seems dangerous if you don't pin versions, pack, and maybe self-host the needed, known-good modules


Is this a thing? A buddy started at a new place and he was noticing webpack bundling for the backend.

Makes sense when you think about it I guess. Only used bundlers for front end stuff.


If backend services are written in TS (instead of JS) then you'll need to compile to run it. Be that TSC or webpack with ts-loader, you'll need something to perform that step. I don't think anyone is running ts-node outside of local dev environments.


If you want typescript on the BE then you at least need a build step.


I'm just now finding that webpack is mostly a foreign concept in backend Node.js tech, but it's mind-boggling to me that the norm was to ship code with the node_modules folder instead of bundling and minifying. Seriously, why? It's so wasteful... :'(


I just bundle everything except binary dependencies. Faster deployments


You can use pnpm without symlinks by setting node-linker=hoisted

https://pnpm.io/npmrc#node-linker
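
i.e. in the project's .npmrc:

  # .npmrc - opt out of the symlinked layout; pnpm then creates a
  # traditional flat node_modules like npm's
  node-linker=hoisted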


That's great news! pnpm might be my answer after all.


100% agreed here. If your package manager confuses established tooling and libraries, it's garbage. I recently started working at a company that uses this, and, horrifyingly, uses a monorepo, and their serverless lambdas are all at least 70+ MB. I can't even fix it by using the webpack plugin because pnpm breaks webpack. Also worth noting that 90% of the "problems" that yarn and pnpm try to address were addressed by later versions of npm. Node, being very much tied to npm, doesn't need more package managers, it needs consensus and collaboration to improve on that consensus, and without breaking libs.


Because symlinks solve all the problems of nesting and duplication while causing none that I can think of.

I've been using pnpm and rush in my projects, and going back to npm at my employer's every day is such a chore.


yarn1 used to have a global package cache (enabled by default), but now it's disabled, because many packages don't work if they are symlinked.


Other than the biggest OS pretty much choking on the node_modules black hole whenever you do a global operation on it (try deleting a js project on Windows)


Some years back, when npm itself didn't do deduplication of modules, it was impossible to work with some projects on Windows due to the OS path length limit.


rimraf is your friend


Summarizing the 3 major JS package management approaches:

* Classic node_modules: Dependencies of dependencies that can't be satisfied by a shared hoisted version are nested as true copies (OSes may apply copy-on-write semantics on top of this, but from FS perspective, these are real files). Uses standard Node.js node_modules resolution [1].

* pnpm: ~1 real copy of each dependency version, and packages use symlinks in node_modules to point to their deps (see the layout sketch after this list). Also uses standard resolution. Requires some compatibility work for packages that wrongly refer to transitive dependencies or to peer dependencies.

* pnp[2]: 1 real copy of each dependency version, but it's a zip file with Node.js and related ecosystem packages patched to read from zips and traverse dependency edges using a sort of import map. In addition to the compatibility work required from pnpm, this further requires compatibility work around the zip "filesystem" indirection.
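
For the pnpm case, the resulting on-disk layout looks roughly like this (a sketch with made-up packages foo and bar, where foo depends on bar):

  node_modules/
  ├── foo -> .pnpm/foo@1.0.0/node_modules/foo   (symlink; only direct deps appear here)
  └── .pnpm/
      ├── foo@1.0.0/
      │   └── node_modules/
      │       ├── foo/                          (hard links into the global store)
      │       └── bar -> ../../bar@2.0.0/node_modules/bar
      └── bar@2.0.0/
          └── node_modules/
              └── bar/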

In our real-world codebase, where we've done a modest but not exhaustive amount of package deduplication, pnpm confers around a 30% disk utilization savings, and pnp around an 80% savings.

Interestingly, the innovations on top of classic node_modules are so compelling that the package managers that originally implemented pnpm and pnp (pnpm and Yarn, respectively) have implemented each others' linking strategies as optional configs [3][4]. If macOS had better FUSE ergonomics, I'd be counting down the days for another linking strategy based on that too.

[1] - https://nodejs.org/api/modules.html#loading-from-node_module...

[2] - https://yarnpkg.com/features/pnp

[3] - https://github.com/pnpm/pnpm/issues/2902

[4] - https://github.com/yarnpkg/berry/pull/3338


> 1 real copy of each dependency version...

npm showed me that I lack creativity, for I could not imagine anything worse than maven.

The ~/organization/project/release dir structure is the ONE detail maven got right. (This is the norm, the Obviously Correct Answer[tm], right?)

And npm just did whatever. Duplicate copies of dependencies. Because reasons.


Node is doing the right thing: if two dependencies in maven have conflicting dependencies, maven just picks an arbitrary one as _the_ version, which results in running with an untested version of your dependency (the dependency is actually depending on a version the developers of that dependency didn’t specify). Because node allows the same dependency to be included multiple times, npm and friends can make sure that every dependency has the right version of its dependencies.


> Node is doing the right thing

Node does a different thing. It can coalesce two different versions into one if the two things are within a certain semver range, but there's nothing that enforces whether things within a semver range are actually compatible. The most prominent example is Typescript, which famously does not follow semver. Another notable example of how NPM itself does things wrong is that it considers anything in the `^0.x` range as compatible, whereas semver distinctly says the 0.x range is "anything goes".


Incompatible libs, you say? Try this one on: once upon a time, a handful of years ago, a package-lock.json I worked on drifted so far from package.json that you could not remove package-lock.json and rebuild purely from package.json. The versions specified in the package.json were incompatible with each other, but the package-lock.json had somehow locked itself to a certain permutation of versions that just worked.

I always shudder to think that different versions of packages live in node_modules and one library produces an object that somehow makes it to the other version of the library and... I'd rather not think of all these implications or I would go crazy.


I agree about the 0.x thing. The rest is basically a result of people refusing to use the versioning system the way it's designed to be used, which is a problem with a package, not with the specified behavior of npm here: violating the rules of semver is UB


I would definitely put part of the blame on the design of the system. It allows anyone to write stuff like `"lodash": "*"`, which is a perfectly valid range as far as semver goes. And then there's things like yarn resolutions, where a consumer can completely disregard what a library specifies as its dependencies and override that version with whatever version they want. And there's aliases (`"lodash": "npm:anotherpackage@whatever"`) and github shorthands and all sorts of other wonky obscure features. And we haven't even touched on supply chain vulns...


>> maven just picks an arbitrary one as _the_ version

No that’s never been the case. If you have conflicting versions of a dependency in your dependency graph, maven chooses the “nearest neighbour” version - it selects the version specified least far away from your project in the transitive dependencies graph.

Pinning a particular choice is easy too - you just declare the dependency and specify the version you want instead of relying on transitive deps.


This is what I mean by an arbitrary version: it’s not determined by the dependency but by some characteristic of the dependency tree. And, this is only necessary because the JVM can’t load two versions of the same dependency (ignoring tricks like the maven-shade-plugin)


The JVM can load the same class any number of times through different class loaders -> only the (class, classloader) tuple has to be unique.

I guess the reason they didn't go the duplicative direction is that Java has safe class-loading semantics at runtime, and that they valued storage/memory capacity (which was frankly a sane choice; like, 10x bigger Java projects compile faster than a JS project that pretty much just copies shit from one place to another?)


That's kind of incredible that yarn pnp outperforms pnpm. If that's generally true across most projects then I'm really glad that turborepo decided to use it for project subslicing.


The practical disk usage difference between pnp and pnpm appears to be almost entirely accounted for by the fact that pnp zips almost all packages. Both store ~1 version of each package-version on disk; it's just that one's zipped and one's not. The mapping entries for package edges in both cases (.pnp.cjs for pnp and the symlink farms for pnpm) are very small in comparison.


Disk utilization is only one metric. The trade-off for Yarn PNP is that it incurs runtime startup cost. For us (~1000+ package monorepo), that can be a few seconds, which can be a problem if you're doing things like CLI tools.

Also, realistically, with a large enough repo, you will need to unplug things that mess w/ file watching or node-gyp/c++ packages, so there is some amount of duplication and fiddling required.


Problems long solved before, but problems that don't matter to the JavaScript crowd... I think they actually love that things take so long. It makes them think they're doing important work... "We're compiling and initting"


If your file system supports compression (e.g. ZFS and Btrfs), then the actual disk usage of pnp and pnpm should be similar?


We recently started sponsoring pnpm[1] as well as adding zero-config support for deployments and caching. I think pnpm is an incredible tool. Excited to see it grow further.

[1]: https://vercel.com/changelog/projects-using-pnpm-can-now-be-...


Thank you!


As a person who uses npm just for some hobby coding projects, it's quite frustrating that there are new partly incompatible package managers for the javascript ecosystem: npm, pnpm, yarn, yarn 2.

Some packages need one, some another, so I tried to switch to yarn (or yarn 2) for a package that I wanted to try out, but then other packages stopped working.

If there are clearly better algorithms, why not refactor npm, add them behind experimental flags, and then set them as defaults as they mature (with safe switching from one data structure to another)?


Generally I've found sticking with npm to be best. It's not the super-slow thing that it was before, and I can't remember the last time a package didn't install because it wasn't compatible with npm.

I tried pnpm and it didn't just work, so I gave up. I would revisit it, but npm works.

These days I don't really see a reason to use yarn (but would like to hear them).


Yarn's workspaces provide some cool benefits to monorepos which AFAIK npm hasn't matched. You can get there with Lerna + npm, though.


I have so far had a very good npm workspace experience. Just define the "workspaces" property in package.json and you're off. https://docs.npmjs.com/cli/v8/using-npm/workspaces
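
e.g. (a minimal sketch, names made up):

  {
    "name": "my-monorepo",
    "private": true,
    "workspaces": ["packages/*"]
  }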

Right now the only pain point with npm is that "npm link" can't be forced to install peer dependencies, so I'm unable to easily test TypeScript-built libraries within other projects.


For what it's worth Yarn 3 implements essentially all modes. It can do standard node_modules, its own Plug'n'Play, as well as pnpm-style hardlinking to a global cache.

Edit: I just learned from another comment that PNPM also supports Plug'N'Play :) Thanks steven-xu!


pnpm as well supports all three modes. But I think it is better to use Yarn for the PnP mode and pnpm for the symlinked mode.

Here is a feature comparison: https://pnpm.io/feature-comparison


So it should just be backported to npm to show that the authors are serious about backwards compatibility.


Both pnpm and Yarn are independent projects maintained by the community. I personally think that these are better projects than npm CLI because they can make their own decisions. Not decisions dictated by business needs of a company.

I was OK with merging pnpm into npm in the past. They never suggested this opportunity to me. Instead, they decided to re-implement pnpm's algorithm in npm and call it "isolated mode".


I see, I tried it now, it looks great.

Most of my problems were created by the material UI libraries (I wanted to use them with SvelteKit), but I just got rid of them, as those libraries were making development harder instead of helping.

I still wish there were a nice UI library for Svelte, but I guess that's the disadvantage of not going with the mainstream frontend toolkit.


> ...material UI libraries...making development harder instead of helping.

I'm glad I'm not the only one who has had that experience.


Material UI (MUI for React) doesn't just impact the DX, it also bloats the runtime, impacting the UX


That's literally xkcd #927: let's make a standard that encompasses all previous standards, what can go wrong?

Now you have to maintain three different code paths, two of which depend on the behaviour of external projects, so you're always playing catch up.

That's such a bad idea on so many levels.


Given what a dumpster fire the npm ecosystem is security-wise, it's best to run the whole build chain in a container anyway, at least for frontend apps. This way you also don't care about the chosen package manager or Node.js version - you can just set it as you wish in the Dockerfile. It does take more disk space though, but to me it's a nice compromise.


Containers don't provide much protection from malware, unless you're running it rootless under an unprivileged user (no sudo access, no ssh keys or anything else interesting in the home directory, etc; and even then it's limited because the attack surface is enormous).


I mean, of course? Especially, why would I put ssh keys and similar in the container?

This still doesn't mean that one can install just any package, but it does make it much more difficult for it to do much harm. Breaking out of a container is not as trivial as it once was. That said, it is not a perfect solution, so I'd be happy to hear of better ones. Any suggestions?


No ssh keys or anything else interesting available to the user you're running the container engine under (and containers themselves). Not the user _inside_ the container, but on the main system.


gVisor, VMs


> If there are clearly better algorithms, why not refactor npm and add them in experimental flags to npm

While node_modules has many flaws, in the current ecosystem all modes have their own pros and cons, and there isn't a "clearly better" algorithm: node_modules has less friction, PnP is sounder, and pnpm's symlinks attempt to be kind of an in-between, offering half the benefits at half the "cost".

Like in many computer science things, it's tradeoffs all the way. Part of why Yarn implements all three.


pnpm also implements all three: https://pnpm.io/feature-comparison

But I think it is best to use Yarn for PnP and pnpm for the symlinked node_modules structure.


Because npm maintainers do not, and apparently have never known, how a good package manager is supposed to work.

I have zero trust in NPM.


I recently migrated a fairly large monorepo (20+ packages) that used Lerna and npm to pnpm, and the improvement in developer experience was pretty massive.

Dependency install times went down by a huge amount, and all the strange issues we had with lerna and npm sometimes erroring out, requiring us to remove all node_modules folders and re-install everything, are just gone.

We even noticed that the size of some of our production bundles went down. Before, some dependencies used in multiple packages were being needlessly duplicated in our webpack bundles, but the way pnpm symlinks dependencies instead of duplicating them fixed that as well.

The non-flat node_modules structure did break some things as well, since in some places we had imports pointing to packages that were transitive dependencies and not defined in package.json. I see this as a positive though, since all those cases were just bugs waiting to happen.
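
A typical example of the kind of bug this surfaces (package names made up): code imports a package that is only a transitive dependency, so nothing declares it:

  // index.js
  // "lodash" is absent from our package.json and only reaches
  // node_modules as a transitive dependency of something we do declare.
  // npm's flat hoisting lets this require succeed anyway; under pnpm it
  // throws MODULE_NOT_FOUND, forcing the dependency to be declared.
  const _ = require('lodash');
  console.log(_.chunk([1, 2, 3, 4], 2)); // [[1, 2], [3, 4]]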


I experienced the same. The overall dev experience / responsiveness of package management makes it very unlikely I would want to go back to npm ever.


Why not use rush as well?


Probably because Lerna is already working for them. We're just using straight Yarn workspaces ourselves. Rush is great, but for our project it's overkill (lerna was as well).

Though we're using git submodules + yarn workspaces right now. The submodules will likely go away eventually.


I migrated from yarn to pnpm two days ago and I could tell the difference the first time I hit install. I am working in a workspace, so I have multiple packages with nested dependencies. For this case I thought yarn (the classic version) was the ideal solution; that was until I discovered pnpm. Thanks to its non-flat algorithm the packages have clean dependencies.

Previously in yarn, if you installed `foo` in any package you could reuse it later anywhere in the workspace, even if it wasn't listed in the package's dependencies. With pnpm that's not the case, which means a clean dependency tree, serving the purpose of what a workspace is meant for. If you want to share a dependency you can install it in the root, which makes sense to me.

Another big advantage is the recursive/parallel command, something I couldn't do without Lerna. And it's fast: install once and it's there on disk, so if you manage multiple projects, dependency installation is not something you wait for, it's just there.


It works very fast in CI, its cache is smaller, and it builds node_modules much faster. I don't feel comfortable caching the node_modules folder itself, because I've gotten side effects before, even ones causing incidents; not sure if it's supposed to work that way. The speed difference is 15s vs 35s for our use case, which is pretty significant.


This is my default one; _much_ faster, and disk space is reduced dramatically when I have lots of node_modules in use. Can't recommend it enough.


Were you coming from yarn or npm?

I guess their benchmarks cover both [0]. But I'm also curious about independent figures.

[0] https://pnpm.io/benchmarks


used to use npm, less often with yarn, now it's all pnpm


Gotcha. Yeah yarn has always been way faster to me than npm. And indeed in these benchmarks pnpm is much faster than npm and only slightly faster than yarn.


I recently migrated a large project from yarn to pnpm and the speed difference is insane. Everything related to dependencies runs much faster during local development AND CI. The only tricky thing is that we had some issues with some dependencies that could not work properly. But using the `--shamefully-hoist` flag did the trick. Everything works.


You can potentially solve this more narrowly with hoist-pattern[1] so you’ll still benefit from the stricter structure overall. Or possibly even overrides[2] depending on the issue.

1: https://pnpm.io/npmrc#hoist-pattern

2: https://pnpm.io/package_json#pnpmoverrides


Slightly unrelated, but it always amazes me how big the dependency tree in js world can get. Hundreds of megabytes for SPAs. I don't understand js development or how anyone can live with that, but since I do need to edit it here and there I stumbled upon this handy tool.

https://github.com/voidcosmos/npkill


It is unrelated because you don't have this issue with pnpm. pnpm uses a central content-addressable store and each unique file is written only once on disk. It doesn't matter in how many projects you install the same dependency.
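
You can inspect the store yourself (assuming a reasonably recent pnpm):

  # print the location of the shared content-addressable store
  pnpm store path

  # remove store entries no longer referenced by any project
  pnpm store prune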


I think GP was amazed by the sheer number of dependencies, not by how many of them are duplicated. It also amazes me. People above are talking about having tens of GBs of dependencies. Crazy to me.


This amazes me as well. At work we have a simple website which has 1 GB of node_modules and the actual source code is well under a megabyte.


I'm very curious to know how in the world a node_modules directory can grow above 1GB! Which dependencies are the worst offenders? Is it dominated by a few huge deps?


pnpm is awesome except when it does not work - which happens very rarely but it is a nightmare to debug. For all the other times, it is way faster and lightweight than npm.


I was unable to use pnpm with a project that used Electron (~2 years ago), IIRC because some spawned process was incompatible with symlinks. It's the only time it caused me trouble though, it's indeed much faster than npm. I'd love to use it at work too.


In any case where pnpm doesn't work, you can set the node-linker=hoisted option and it will work:

https://pnpm.io/npmrc#node-linker

With node-linker=hoisted, pnpm creates a traditional hoisted (aka flat) node_modules, without using symlinks.


Good to know, thanks!


We migrated from pnpm to yarn3 with the node_modules linker. pnpm focuses on the wrong things; speed and disk efficiency are less important than the stability, reproducibility, and developer experience of yarn3.


We did exactly the same, tried to uneject from CRA recently and just could not get it to work in PNPM, switched to Yarn and it just worked.


If you have a pnpm monorepo, you need to know about https://pnpm.io/filtering#--filter-since which allows you to run your test/lint/etc. on only the packages that have been impacted by changes from master.
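
e.g. (assuming a "test" script in each package and origin/master as the base branch):

  # run tests only in packages changed since origin/master,
  # plus every package that depends on them ("..." selects dependents)
  pnpm --filter "...[origin/master]" test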


I didn't know about this, thanks for posting! We have been using a custom Bash script to retrieve the list of sub-project directories that had been changed since the last commit by recursively scanning its sub-paths and running Git commands to find the last update on each, but this looks much, much better.


I use pnpm only for monorepos, for which I find it works quite well. Although at times there have been issues, mainly with symlinks getting messed up or linking to the wrong dependencies. Well, mainly having multiple versions of the same packages.

But all in all I'm glad they are moving the JS/TS ecosystem forward, and other managers are catching up to their innovations and likewise. Great monorepo support, I feel, is a big necessity for a package manager, as is well-working symlinking to prevent the insane node_modules sizes.


Is there any gotcha if I switch from npm to pnpm?

And can I use pnpm when working in a team where other devs use npm?


pnpm uses its own lock file (pnpm-lock.yaml), while npm uses package-lock.json

You definitely should use the same manager (npm, yarn, pnpm, whatever) as your teammates' or you're going to run into problems, either with them or with your CI workflow.


thanks!


note: You can use `pnpm import` for converting a package-lock.json to a pnpm-lock.yaml

https://pnpm.io/cli/import

(This is not an endorsement of or recommendation of pnpm which I personally don't use daily)


I ran into glitches with the pnpm scripts that wrap binaries. I had to change VS Code launch settings to invoke the package bin with node directly.

Some create scripts invoked via `pnpx create...` exit the terminal without showing prompts. No issues running those scripts when using npm or yarn. IIRC, that was happening on Windows + Windows Terminal.

The name is too close to npm. I'll inadvertently type `npm` and that starts downloading all dependencies again. Of course I press `Ctrl+C` which has led to corruption of the node_modules folder on a few occasions.

I would not recommend mixing pnpm and npm. pnpm uses its own lock file. There's no guarantee your app/library will be using the exact package versions everyone else is using. That leads to it-works-on-my-machine kind of bugs.


Pnpm doesn't auto-install peer dependencies, which is annoying and forces you to unnecessarily add them to package.json. Npm@7 (and newer), with auto-install of peer deps, is much easier to use in this regard.


Isn't that exactly how it's supposed to be? If you're installing a package that has peer dependencies, you should already depend on them. Otherwise they are not "peer dependencies" anymore, just a normal dependency with a version range.

The whole peer dependencies story is another clusterfuck. Everyone simply ignored the invalid peer dependency warnings, and now npm itself will just install all of them in 'whatever works' version. To get real peer dependency resolution you need a --strict-peer-deps flag. Not exactly a "feature" in my book.


I want to specify only those deps that I use directly. Imagine a `foobar` package that has a `babel` peer dependency because it does some transpiling or whatever. For me as a user of `foobar`, that Babel requirement could have been a regular dependency instead of a peer one. I don't care, I don't use Babel. In other words - if a package manager has all the information necessary to install all dependencies, why should I add another, redundant piece of information to my package.json?


As the post you shared yourself explains, peer dependencies were meant for plugins/extensions. It would make no sense for you to depend on “foobar” directly without having babel as a dependency already.

If foobar can be used standalone, then babel is a standard dependency, not a peer dep.


Technically true, but not how it is actually done. People very often specify packages like Babel, React, TypeScript, GraphQL, etc. as peer dependencies even when they shouldn't.

Anyway, in any case, auto-installing peer deps solves both situations. There really is no reason not to auto-install them.


There really is - not everybody is content with having 600MB of unused dependencies being pulled in for no reason. This approach slows things down, including CI, increases surface area for security risks and makes your dependency tree inscrutable. All to account for obviously wrong use of the package manager.


By "autoinstalling peer deps" I don't mean "installing unnecessary deps" - those peer dependencies are required, you still have to install them, I just don't want to manually add them to my package.json.


Well, it's hard to argue with that; we simply have very different expectations. You want NPM to automatically fix what is clearly user error so that installing random plugins "just works", and don't care that 3rd+ level deps might end up pulling a hundred extra packages you never asked for; I want it to follow its own dependency management rules to the letter and not have anything installed by surprise.

Clearly there is an audience for the former.


Nothing is installed by surprise. Peer dependencies are not optional (you have to specify them as such). There is no user error and there is nothing for npm to fix.

I have some app:

  {
    "name": "some-app",
    "dependencies": {
       "foo": "^1.0.0"
    }
  }
foo specifies some peer dep:

  {
    "name": "foo",
    "peerDependencies": {
       "bar": "^1.0.0"
    }
  }
  
Now some-app doesn't directly use bar, so I didn't add it to package.json. Npm@7 and newer will install everything: foo and bar. If I used a package manager without auto-install of peer dependencies, I would have to manually update my package.json:

  {
    "name": "some-app",
    "dependencies": {
       "foo": "^1.0.0",
       "bar": "^1.0.0"
    }
  }
  
But in both cases node_modules will contain foo and bar. There are no "extra packages you never asked for". Adding bar as dependency of some-app is completely redundant information.

Now, it's possible that there are packages that don't really require some peer dependency installed, and therefore they are installed needlessly. But that's a problem of those poorly developed packages, not mine. Why should I waste time manually specifying what should and should not be installed?


That implies babel was a dependency not a peer one.

Peer dependencies are for extensions and plugins to an existing stack, as material-ui has a peer dependency on react, or io-ts on fp-ts


I am so confused about the purpose of a "peer dependency" in the first place then.


You can read here why the peer deps were introduced: https://nodejs.org/es/blog/npm/peer-dependencies/

Imagine this structure of packages:

  your-app/
  ├── dep-a/
  │   └── dep-c
  ├── dep-b  
  └── peer-dep
Very simplified: `dep-c` is a dependency of `dep-a`, so it is installed in its node_modules, but `peer-dep` is a peer dependency of `dep-a`, so it goes in the node_modules of `your-app`. `dep-b` could also define `peer-dep` as its peer dependency, so it is installed only once. When npm switched to a flat node_modules structure, peer deps became somewhat redundant, but not quite. Pnpm, which uses symlinks to achieve a proper node_modules structure while avoiding long filenames, combined with auto-install of peer deps, would be the ideal package manager.


Yeah - this peer dependency thing always causes issues when I try to upgrade a big Ionic project...

Peer dependencies added more problems than they solved in my eyes... Most peer dependency warnings come about because some maintainer forgot to update the package.json, not really because two packages don't work with each other...


The point of peer dependencies is to allow users more flexibility in choosing the specific version of the dependency. It’s not intended to specify any kind of conflict between the packages, it’s effectively BYO sub-dependency.


regular traditional dependencies can already be specified as a range, yes? Any range the specifier wants (and can legitimately work with), yes?

What is it about peer dependencies that gives the host more flexibility in choosing the specific version?

Real question, not a challenge! I really don't understand this stuff, I've always found javascript dependency management very confusing.


It allows packages to specify a wider version range for downstream users than a pinned version used in `devDependencies`. This is a common pattern if (say) your tests depend on certain less stable APIs but your published package only uses a smaller more stable subset. It’s especially useful if you want to support older semver-major versions, or even newer ones if you’re confident that the APIs you use will remain stable.

The example used in this post[1] on the Node blog is plugin systems, which is a very common expression of this pattern.

> Real question, not a challenge! I really don't understand this stuff, I've always found javascript dependency management very confusing.

I appreciate the clarification but it was clear to me the question was sincere FWIW! And yeah, a lot of this was hard for me to get a solid grasp on for quite a while. I think this tends to show up more in the JS ecosystem because semver was so eagerly adopted there, but I suspect it's of similar benefit wherever semver is used and the dev/prod dependency distinction is prescribed generally.

1: https://nodejs.org/es/blog/npm/peer-dependencies/


I could be wrong but I think it is used to alert you when you have incompatible dependencies.

For example if you have a dependency with a peerDependency `something: 2.x.x` and you currently have `something: 1.0.0` installed as a direct dependency, it will fail rather than allowing multiple versions to be run.


We switched our monorepo from yarn classic to pnpm and the install speeds have been night and day. Sometimes it feels like I ran the wrong command because its so fast.

pnpm also helped us solve issues with packages in our monorepo importing dependencies declared in other packages because of node_modules hoisting behaviour with yarn. With yarn, all of your packages' direct and indirect dependencies can be resolved from any package in your monorepo. This makes it hard to isolate packages from each other. For example, this was causing issues when we wanted to generate docker images for some of our packages, since we only wanted to copy specific directories into the image rather than bring the entire monorepo along. pnpm also allows us to use features like git sparse-checkout, because we can be confident there are no implicit dependencies between packages.

pnpm makes this possible by exposing only direct dependencies in node_modules. In monorepos that don't use pnpm, you can remove a dependency and your monorepo can break in unexpected places, because other packages in the repo implicitly depend on it. This is more common than you think, because people tend to rely on IDE auto-completion when writing imports, and IDEs tend to read node_modules instead of package.json, so it's very easy to take on an implicit dependency when all directs/indirects are hoisted into a root node_modules.

Yarn supposedly fixes this issue in their newer releases with Plug'n'Play, but from what I understand it basically monkey-patches Node's require statements with its own behaviour. This just didn't feel good to me from a tooling compatibility perspective, which is why I went with pnpm, and I'm glad I did. I'm honestly puzzled at Yarn's popularity and why pnpm isn't as popular as it should be.


pnpm has been my goto JS monorepo package manager + script runner for a couple of years now. IMO it has almost zero downsides and huge upsides. I'd be surprised if npm doesn't end up adopting the pnpm node_modules directory hierarchy in the next 5 years or so.



pnpm is great. I love `pnpm store prune`. Also it's great for monorepos, check out https://github.com/panoply/mithril-demo for an example.


For us peasants stuck in angular land: https://github.com/pnpm/pnpm/issues/3410

Seems this might not work with angular (yet)?


As long as you are on Angular 13, and all of your dependencies are updated and use the Ivy view engine, you should be good to go. You'll get really weird, hard to debug issues if any of your dependencies are using the older view engine.


pnpm is really good for monorepos and there are many big open source projects that use pnpm: https://pnpm.io/workspaces#usage-examples


we're part way through switching all our monorepos from lerna to pnpm & simply could not be happier & more excited.

lerna has soo many issues, is slow & cumbersome. #2142, no way to update dependencies in monorepo subprojects? how is there a monorepo tool where projects can't update their deps? everything on pnpm's built-in monorepo support just works, nice & easy & fast.


I’m using PNPM in a side project to try it out. Overall, I like it. Yes, it does feel faster than Yarn (Yarn 1; I gave up on Yarn 2 pretty quickly when it wouldn’t let me install a TypeScript beta even with PnP disabled). I’ve found its commands a little fussier to use than Yarn, however.

If I were to pick for a new project today, I’d probably go back to Yarn. But I’m glad there’s PNPM as an option.


Love that this continues to be shared. One of the most exciting projects in the Node ecosystem.


When I tried pnpm last, I was on a react native project (react 14 maybe), and something about the symlinks from pnpm caused me to not be able to build the project (and I went back to yarn). Anyone know if this is still an issue?


with its linking strategy, pnpm allows for multiple versions of the same package.

I wish there was something like that in PHP's Composer, where I have repeatedly hit situations where different packages have a dependency that was pinned to a different version (such as one package using Guzzle 6 and another Guzzle 7), and therefore my composer.json was un-buildable.

(I have less experience with NPM so I don't know if they have a different solution for this.)


Problem is that it can’t be done in PHP without on-the-fly rewriting of source files.

In JavaScript the caller decides how a module should be used by importing it to a symbol; thus different versions of the same library can exist simultaneously.

In PHP it is the callee that decides how to be imported because of namespaces.

One and only one class with that exact name and namespace can exist at a given time, thus different versions of the same library can’t be loaded simultaneously.

That is why modules are far superior to namespaces.


Does this use symlinks to a single copy of each dependency?


Not symlinks. Hard links. Or copy-on-write on systems that support them.
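
You can see the hard links for yourself (a sketch; the lodash path is made up and will vary by version):

  # The link count (second column of ls -l) is greater than 1 because the
  # same file also exists, under a content hash, in pnpm's global store.
  ls -l node_modules/.pnpm/lodash@4.17.21/node_modules/lodash/package.json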


[EDIT: (thanks ttybird2! b^)] We might need to change the following diagram?

https://pnpm.io/motivation

It appears to show several symlinks? Thanks for pnpm!


You are actually replying to the pnpm maintainer :p


A single copy of each required version of each dependency.


So is that a yes? Or does it still pull a copy for each project's local node_modules?


As you can see at the URL in sibling, pnpm maintains a private area where everything is stored. Everything in a local (or "global", really) node_modules tree is a link to the private store.


The way that key JS ecosystem programs get rewritten tells me nothing is made well. Most ecosystems don’t have this much constant change.


I see many mentioning using pnpm for monorepos, why not use rushjs by microsoft which is its natural companion for that use case?


Please note that pnpm is currently blocking all traffic from Russia and Belarus https://twitter.com/pnpmjs/status/1498306992577957890


It seems the main maintainer is from Ukraine so I can see how he got there. https://github.com/zkochan


"We will unblock it when you stop the war and de-occupy all the Ukrainian territory."

There's what, maybe a handful of people who can make that happen?


That was the deal a few weeks ago. After the atrocities that their army has committed in my country, I do not think I will ever unblock traffic from Russian Federation.


If that's how it works, there's a lot of people on the planet who have good reason to block traffic from the USA.

Fortunately for me as a developer in the USA, I guess most of them, in the Global south, aren't developers, or know they can't be successful as developers blocking traffic from the USA, no matter how many atrocities the US military or intelligence have committed in their countries. :(. I guess if they wanted to try, it's an interesting question if that would be some kind of effective action in changing US atrocious behavior. Probably not. :(

That said, this is basically a form of boycott. There does seem to have been some significant change of opinion around the value and ethics of the tool, from when the main example we had was the BDS movement called by Palestinian civil society against Israel. (Which, by the way, does not in fact call for doing things like blocking all network access to those in Israel; what people are doing against Russia is way broader and less targeted than what the... more controversial(?) Palestinian-led BDS has done or called for against Israel. Which is interesting.)


In the USA there are different people. Good and bad. There are states that are very conservative and there is California. There are some basic values that everyone agrees upon. I think nobody can claim that everyone is bad or good in the US. Same goes for any other democratic society. Same goes for Ukraine. There are a lot of people that I don't like in Ukraine.

In Russia there are some good people but they are in extreme minority. Extreme. Even the liberals in Russia are supporting the annexation of Crimea. Maybe the dictatorship is the reason, maybe the propaganda. I don't know and I don't care. This is how it is and I do what I can to exclude them from my life.


There's different people in Russia too. No one is blocking US citizens because they dropped bombs killing 300 people shopping in markets, or 200k civilians in Iraq, or for the many abuses like Abu Ghraib.

Actually, America has been by far the biggest aggressor on the planet in the last decades, and it has killed the most civilians outside Africa.

That's my problem with this massive anti-Russian hysteria in software.

You only appear careless, racist and naive. It's better to avoid politics in software and business, because if you don't, you have to basically ban everyone.


You have the audacity to call it anti Russian hysteria, when I should hide from air raid attacks sometimes several times a day?

Please don't use pnpm. Use Yarn or npm, those are better package managers for you.


I am sorry for that situation but my opinion does not change.

OSS and politics should not be mixed. Your actions do nothing but say that you will "weaponize" an open source library to match one goal, "punishing Russians", but not punishing other perpetrators.

If every software maintainer started to apply their own political views and ban some users, the entire ecosystem would implode.

Should Muslim developers ban all Chinese users for the Uighur concentration camps, and Myanmaris for their treatment of the Rohingya?

Should I, a Polish developer, ban Ukrainians because they hail as a hero, and name streets after, a criminal like Bandera, who killed my own Polish family in eastern Galicia during World War Two?

Where and when would this hysteria end?

I have sympathy for your situation, but not for such solutions. They damage everyone, including OSS.


"Even the liberals in Russia are supporting the annexation of Crimea"

- Supporting an annexation that has already happened is not the same as supporting a war.

- That being said, even the liberals in the US supported the various invasions and occupations. Don't forget that Libya happened under Obama.

- Even the liberals in Turkey are supporting the annexation of northern Cyprus, even the liberals in China support the annexation of Tibet, etc. It is not something unique to Russia.

"In Russia there are some good people but they are in extreme minority"

Bush's approval rating was the highest when he declared the invasion of Afghanistan (90%) and when he declared the invasion of Iraq (70%).

In addition to that, this is typical racist rhetoric; I have heard exactly the same thing about Mexicans, Blacks, Albanians, Jews, Roma, etc.


Racist rhetoric? Don't be ridiculous. There is no Russian race. I don't block Russians if they leave Russia. I have many Russian friends in Ukraine. Only Russians that live in Russia are blocked. Russians that pay tax to a government that is waging war against my country.


Maybe "racist" was not the best choice. Feel free to replace it with "discriminatory" instead.

Anyway, this does not explain your plan to keep the restriction permanently. After all, they will not be paying taxes that fund the war effort after the war is over.


As someone who lives in the USA and is a USA citizen, I'm frequently surprised by how much people don't hold US voters responsible for the actions of the government.

The US has done, and continues to do, some pretty awful things around the world. Turning parts of Pakistan into hellscapes where an invisible drone could kill you from the sky at any time, and regularly does kill women and children with no warning, would be one example.

The vast majority of citizens in the US either pay no attention to this at all, or think it's a good thing. There is a minority who think it's awful. (I suspect these proportions are pretty similar in Russia).

Since the US is theoretically a democracy, and we citizens could, one would think, easily stop these things by voting for different people, you'd think people would hold us more accountable and be really mad at us. But somehow they don't; they're like, eh, most people in the US are good people, I don't blame them for their government!

And to be fair, I wish I knew how to get my government to do something different -- or how to get more people in the country to pay attention to the really catastrophically criminal things my government does; I don't feel like I have much control over it either. (Although surely more than Russians do, in the really-not-a-democracy of Russia?) I do what I can, I am politically involved where I can find the energy to be. It doesn't feel like enough. I'm not sure I or my country-mates deserve the dispensation to not be held responsible.

I think most governments do awful -- really horrendous, murderous -- things. I think most people are fundamentally good people, but many citizens of powerful countries doing bad things have these days been hoodwinked into ignoring them or supporting them.

I wish I knew what to do about it. One thing I am personally sure of is that it starts from not judging people by their ethnicity or nationality -- that kind of thinking is what helps governments convince their residents that the violent things they are doing to someone else are ok. That's the problem not the solution.

But I'm not opposed to boycotting as a tactic. I do support BDS against Israel. The BDS organizers have been very careful (and learning from experience) at trying to figure out how to do it in a way that is ethical and maximizes effectiveness. (I think the BDS campaign has been effective, relative to anything else done to try to support Palestinians, although not nearly as effective as one would like, which goes without saying as Israel continues its decades-long occupation). But reading what they have to say on the topic of boycotts against invaders, Israel, and Russia, is in my opinion worthwhile: https://bdsmovement.net/Hypocrisy


> it starts from not judging people by their ethnicity or nationality

Russians that leave Russia are not blocked.

I am Hungarian by ethnicity. I will probably also be judged for the terrible policies of Orban. I feel deep shame for them. So I understand (a little) what situation those "good" Russians are in. And I know that those Russians support my decision and understand it.


[flagged]


> Iraq, Afghanistan, Libya, Syria, all within the very recent history, plus countless more US-backed color revolutions - including the one in your very own country, which realistically is the very reason for the current state of affairs in there.

Oh God! You and your conspiracy theories are annoying. If the U.S. is so powerful, why hasn't it organized a color revolution in Russia or at least in Serbia, maybe in pro-Russian Hungary? I was at both revolutions in Ukraine and I know why they arose, they were supported by the Ukrainian people, self-organized. And it's been like that all through our history. You don't know shit about the political situation in Ukraine and its history, but you think you have it all figured out. I'm sure you don't even know your own history. So don't talk bullshit!


yeah, things like McCain hanging out with the Euromaidan crowd in 2013 is not enough of a smoking gun. he just did it as a private citizen, same as he did with Syrian rebels. he was just quirky like that, right?

>If the U.S. is so powerful, why hasn't it organized a color revolution in Russia or at least in Serbia, maybe in pro-Russian Hungary?

you conveniently left out Belarus and Kazakhstan, which had attempts to do that literally last year. both crushed with Russian help, which is probably the reason why Lukashenko is so friendly with Putin now

as for why they don't do that in Russia: why, they did manage to do that before. they even bragged about getting Yeltsin reelected on the cover of Time magazine. the guy who oversaw a default, a defeat against Chechnya, and the selling away of Soviet wealth to the emerging oligarch class for pennies on the dollar. not sure the Russians are thankful for that help though

>I was at both revolutions in Ukraine and I know why they arose, they were supported by the Ukrainian people, self-organized. And it's been like that all through our history. You don't know shit about the political situation in Ukraine and its history, but you think you have it all figured out.

I know enough. you've had an equivalent of Jan 6, funded and armed from across the Atlantic. we just don't call that an insurrection when it's convenient to us, just like we don't call the guys with swastika tattoos and flags "nazis" when they serve our interests


> US-backed color revolutions - including the one in your very own country,

I see

I hope you won't use pnpm and we won't speak again.


As much as you disagree with it, posting it on Twitter to harass and get your followers to dunk on that person is not acceptable. https://twitter.com/ZoltanKochan/status/1511638290608308228

Also please do not misunderstand, people mention the US acts in order to point out the hypocrisy shown, not to justify the invasion.


ah, I was wondering why this comment chain got flagged so long after the fact


I don't intend to, and yes, hopefully we won't


> Trump, the only president in recent history who did not start any wars

All of this "Trump is the ambassador of peace" rhetoric may almost convince you to forget that 90% of his electorate are gun-crazed maniacs.


yeah sure, 90% of 50% of eligible voters in the US are gun-crazed maniacs. and still, his record is minus one war, unlike every other president all the way back to... shit, I can't even tell without looking it up. I wasn't even around the last time there was one.


Reagan I think. That being said this is likely because he served only one term. Obama bombed Libya on his second term.


Oh, I thought that you were Hungarian (after reading https://github.com/pnpm/pnpm/issues/1080#issuecomment-373872...), but I guess you are both? Your reaction makes more sense in that case. Although I do hope that you will reconsider the idea of a permanent block, I believe that open source (and the world in general) would be much worse and divides between nations would be greater if Greek software blocked Turkey, Israeli software blocked Germany, middle eastern software blocked the US, etc.


> After the atrocities that their army has committed in my country, I do not think I will ever unblock traffic from Russian Federation.

Are you gonna ban Israelis for their apartheid, Chinese for the Uighur concentration camps, Myanmaris for the ethnic cleansing of the Rohingya? What about Americans and the 200k civilians killed in Iraq in an illegal and unprovoked aggression?

See what's the point of doing lame politics like that? You end up declaring to the world that 800 villages burned and 50k civilians thrown in fire matter none to you because they aren't white.

I feel disgust at these double standards, at showing to the world that there are tier 1 and tier 2 victims.

Or do you just care a


I'm sure Putin is reeling that he can't use an obscure package manager for his web projects


the idea is to anger the citizens so that they then turn their anger against Putin. it doesn't matter that Putin is not directly impacted.

I fundamentally disagree with the blockade. this move makes this project incompatible with open source licensing.

if anyone is interested in this sub thread, it diverges from the discussion around the tool itself, but note that all the projects standing for Ukraine in this manner are:

- breaking one of the fundamental principles of open source

- pretty risky to use (what if your locality also becomes block-listed?)

- demonstrating poor judgment from their authors, who more often than not are reluctant to debate/reconsider their position

Reactions to extreme circumstances can understandably be emotional; that doesn't imply that logical criticisms are necessarily insensitive.


Russian TV is wall to wall coverage about their highly successful campaign to eradicate Ukrainian Nazis. An action like this might at least be a small hint to everyday Russians that things aren’t as they appear.


“error 1020” is not a very descriptive thing and I bet that a regular person would just think the site is broken


And then they suggest that Russian users use a VPN to get through. I don't understand this line of thinking at all.


If Russians get used to using VPNs, they might have more opportunity to check independent news sources and see how the war is going, and what people in other countries think.

Ironically, blocking Russian IP addresses could be seen as a form of non-violent protest against Russian web censorship.

https://en.wikipedia.org/wiki/List_of_websites_blocked_in_Ru...


People in Russia have access to millions of independent sources.


Слава Україні! (Glory to Ukraine!)


Героям Слава! (Glory to the heroes!)


"Fast": compared to what? What is "fast"?

"Disk space efficient": again, compared to what?

Javascript: ...oh



