I see this as an absolute win. The state of micro-dependencies in JS was a nightmare that only happened because a lot of undereducated developers flooded the market to get that sweet FAANG money.
Now that both have dried up, I hope we can close the vault door on JS and have people learn how to code again.
Oh god, without tree shaking, lodash is such a blight.
I've seen so many tiny packages pull in all of lodash for some little utility method. 400 bytes of source code becomes 70kb in an instant, all because someone doesn't know how to filter items in an array. And I've seen plenty of projects which somehow include multiple copies of lodash in their dependency tree.
It's such a common junior move. Ugh.
Experienced engineers know how to pull in just what they need from lodash. But ... most experienced engineers I know & work with don't bother with it. JavaScript includes almost everything you need these days anyway. And when it doesn't, the kinds of helper functions lodash provides are usually about 4 lines of code to write yourself. Much better to do that than pull in some 70kb dependency.
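For what it's worth, here's roughly what I mean (a sketch, not lodash's actual implementation, and the exact helpers you need will vary):

```
// Hand-rolled stand-ins for two helpers people commonly pull lodash in for.
// Plain TypeScript; no dependency required.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

function uniq<T>(items: T[]): T[] {
  return [...new Set(items)];
}
```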
> 400 bytes of source code becomes 70kb in an instant,
This only shows how limited and/or impractical the dependency management story is. The whole idea behind semver is that at the public interface level patch version does not matter at all and minor versions can be upped without breaking changes, therefore a release build should be safe to only include major versions referenced (or on the safe side, the highest version referenced).
> It's such a common junior move. Ugh.
I can see this happening if a version is pinned at an exact patch version, which is good for reproducibility, but that's what lockfiles are for. The actual junior moves are pinning a package at an exact patch version and breaking the backwards-compatibility promises semver makes.
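Concretely (a made-up package.json fragment, just to illustrate the two styles):

```
{
  "dependencies": {
    "lodash": "^4.17.0",
    "left-pad": "1.3.0"
  }
}
```

The caret range on lodash accepts any 4.x release at or above 4.17.0, which is semver's compatibility promise at work; the exact pin on left-pad buys reproducibility, but a lockfile already gives you that without freezing the declared range.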
> Experienced engineers know how to pull in just what they need from lodash. But ...
IMO partial imports are an antipattern. I don't see much value in having the exact imported members listed out in the preamble; however, default import syntax pollutes the namespace, which outweighs any potential benefit you get from listing members in the preamble. Any decent compiler should be able to shake dead code in source dependencies anyway, therefore there should not be any functional difference between importing specific members and importing the whole package.
I have heard an argument that partial imports allow one to see which exact `sort` is used, but IMO that's moot, because you still have to perform static code analysis to check if there are no sorts used from other imported packages.
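For concreteness, the import styles in question look something like this (a sketch; lodash-es is the ESM build, which is what bundlers can actually tree-shake, while the classic lodash package is CommonJS):

```
// Named ("partial") import from the ESM build: only sortBy is bound here.
import { sortBy } from "lodash-es";

// Default import of the classic CommonJS build: the whole library comes along,
// and most bundlers can't tree-shake it.
import _ from "lodash";

// Per-method path import: pulls in a single file from the CommonJS build,
// which is how people kept bundles small before tree shaking.
import sortBy from "lodash/sortBy";
```

With a bundler that tree-shakes the ESM build properly, the named import and a whole-package import should indeed produce the same output; the argument is over whether you want to rely on that.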
> Any decent compiler should be able to shake dead code in source dependencies anyway, therefore there should not be any functional difference between importing specific members and importing the whole package.
Part of the problem is that a javascript module is (or at least used to be) just a normal function body that gets executed. In javascript you can write any code you want at the global scope - including code with side effects. This makes dead code elimination in the compiler waay more complicated.
Modules need to opt in to even allowing tree shaking by adding sideEffects: false in package.json - which is something most people don't know to do.
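For anyone who hasn't seen it, the opt-in is a single field in the library's package.json (a minimal sketch; the field is a bundler convention used by webpack and friends, not something npm itself interprets):

```
{
  "name": "my-utils",
  "version": "1.0.0",
  "sideEffects": false
}
```

You can also give an array of file globs instead of false if a few files genuinely do have side effects (CSS imports are the classic example).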
> I don't see much value in having the exact imported members listed out in the preamble
The benefit to having exact members explicitly imported is that you don't need to rely on a "sufficiently advanced compiler". As you say, if it's done correctly, the result is indistinguishable anyway.
In my mind, anything that helps stop all of lodash being pulled in unnecessarily is a win. A lot of javascript projects need all the help they can get.
> The whole idea behind semver is that at the public interface level patch version does not matter at all and minor versions can be upped without breaking changes, therefore a release build should be safe to only include major versions referenced (or on the safe side, the highest version referenced).
... Sorry, what does that have to do with tree shaking?
I agree the JS standard library includes most of the stuff you need these days, rendering jQuery and half of lodash irrelevant now. But there are still a lot of useful utilities in lodash, and potentially a new project could curate a collection of new, still-relevant utilities.
It was more useful before, when browsers didn't support `Array.prototype.map` and `Object.fromEntries`. That's the origin of all these libraries, but browsers caught up. Things like `keyBy`, `groupBy`, `debounce`, `uniqueId`, and some others are still useful.
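A quick illustration of how much the built-ins now cover, plus the kind of helper that's still only a few lines if you'd rather skip the dependency (a sketch; `Object.groupBy` is ES2024, so it needs a recent runtime or a polyfill):

```
const users = [
  { id: 1, role: "admin" },
  { id: 2, role: "user" },
  { id: 3, role: "user" },
];

// lodash groupBy / keyBy equivalents using only the standard library.
const byRole = Object.groupBy(users, (u) => u.role);
const byId = Object.fromEntries(users.map((u) => [u.id, u]));

// A minimal hand-rolled debounce, roughly what lodash's covers for simple cases.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```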
The problem with helper functions is that they're often very easy to write, but very hard to figure out the types for.
Take a generic function that recursively converts snake_case object keys to pascalCase. That's about 10 lines of JavaScript; you can write it in 2 minutes if you're a competent dev. Figuring out the types for it can be done, but you really need a lot of TS expertise to pull it off.
Not really familiar with TS, but what would be so weird with the typing? Wouldn't it be generic over `T -> U`, with T the type with snake_case fields and U the type with pascalCase fields?
Turns out in TypeScript you can model the conversion of the keys themselves from snake_case to pascalCase within the type system[0]. I assume they meant that this was the difficult part.
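For the curious, the key-renaming half of that looks roughly like this (a sketch using camelCase as the target casing, not the code from [0]):

```
// Rewrite a snake_case string literal type to camelCase, segment by segment.
type CamelCase<S extends string> =
  S extends `${infer Head}_${infer Tail}`
    ? `${Head}${Capitalize<CamelCase<Tail>>}`
    : S;

// Apply the rename to every key of an object type, recursing into nested objects.
// (Arrays, tuples, unions etc. are deliberately left out; handling those is
// where most of the real typing effort goes.)
type CamelCaseKeys<T> = T extends Record<string, unknown>
  ? { [K in keyof T as K extends string ? CamelCase<K> : K]: CamelCaseKeys<T[K]> }
  : T;

// { user_name: string; home_address: { zip_code: number } }
//   -> { userName: string; homeAddress: { zipCode: number } }
type Example = CamelCaseKeys<{ user_name: string; home_address: { zip_code: number } }>;
```

The runtime function really is the easy part; convincing the compiler that the keys it returns match this type is where the expertise goes.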
Unless you're part of the demoscene or your webpage is being loaded by Voyager 2, why is 70kb of source code a problem?
Not wanting to use well constructed, well tested, well distributed libraries to make code simpler and more robust is not motivated by any practical engineering concern. It's just nostalgia and fetishism.
Because javascript isn't compiled. It's distributed as source. And that means the browser needs to actually parse all that code before it can be executed. Parsing javascript is surprisingly slow.
70kb isn't much on its own these days, but it adds up fast. Add react (200kb), a couple of copies of momentjs (with bundled timezone databases, of course; 250kb or something each) and some actual application code, and it's easy to end up with ~1mb of minified javascript. Load that on a creaky old android phone and your website will chug.
For curiosity's sake, I just took a look at reddit in dev tools. Reddit loads 9.4mb of javascript (compressed to 3mb). Fully 25% of the CPU cycles loading reddit (in firefox on my mac) were spent in EvaluateModule.
This is one of the reasons wasm is great. Wasm modules are often bigger than JS modules, but wasm is packed in an efficient binary format. Browsers parse wasm many times faster than they parse javascript.
It isn't, but then everyone does it, and everyone does it recursively, and 70kb becomes 300MB, and then it matters. Not to mention that "well constructed, well tested, well distributed" libraries are often actually overengineered and poorly maintained.
Your first sentence cheers that we're moving from NPM micropackages to LLM-generated code, and then you say this will result in people having to learn to code again.
I don't see how the conclusion follows from this.
There will be many LLM-generated functions purporting to do the same thing, and when a bug in one of them gets fixed, only one project gets fixed instead of every project that uses the NPM package as a dependency.
There will be many LLM-generated functions doing the same thing, in the same project, unless the human pays attention.
I've been playing a lot recently with various models, lately with the expensive Claude models (via the API, for the large context windows), and in every attempt things are really impressive at the beginning and start going south once the codebase reaches about 10k to 15k lines of code. Even with tools split out into a separate library with separate documentation, at that point it has a tendency to generate tool functions again in the module it's currently working on instead of using the one already defined in the helper library.
I don't quite understand your argument. Wasn't the post about how users might replace transparent dependencies with transparent LLM drop-ins? I don't see how having an LLM do the same job would enable someone to learn more. They're probably the kind of person who will ask the LLM to perform a refactor when problems arise, so they won't learn that much through osmosis.
Yeah, you're sort of swapping one issue for another. I agree that micro dependencies need to be tackled, but perhaps not using LLMs.
My fear is that rather than keep pushing for a JavaScript standard library, which would encompass all these smaller functions, we now just get more or less the same defective implementations generated by LLMs, but hidden in thousands of repos where tools won't find security issues. At least with NPM we can pull in updated versions when NPM tells us we're running an outdated one. Who is going to traverse your proprietary code base and let you know that the vibe-coded left-pad Claude put in three years ago is buggy?