Hacker News

> Of course you can. I can not understand the details of something like the webextensions surface area, but still can look at the polyfill and understand that it's well written, that the tests aren't useless, that the popularity isn't faked, and that the maintainer is worthy of trust.

No you can't, at least not in the general case. You can trust Mozilla because it's Mozilla, but ultimately that's an "argument from authority".

> Looking at the chances of me getting a malicious developer or a very broken update that I didn't test against or see is much smaller than the chances that I'll write a bug and not test it, or I'll not fully understand a domain and will add subtle bugs into it, and won't even know how to test it correctly.

Perhaps, but unless the code you are writing is "safety-critical", those "low chances" are outweighed by the fact that a malicious actor can do far more damage than some bug you introduced.

Also, you have to multiply those low chances by the large number of micro-dependencies that you might bring in through this attitude.

> You can't 100% vet all 3rd party code, and you can't run entirely on first party code.

Of course you can, unless you count the runtime environment as third-party code, which I don't.

> But we do! That's what dependency managers are for!

Yeah, well. Believe that if you must.

> Just like how you using a browser to post this comment which relies on a compiler which uses tools written in python which relies on a python interpreter which...

This is a slippery slope argument. I think there's a distinction to be drawn here.

CPython doesn't have a lot of dependencies, certainly no micro-dependencies. The number of contributors is rather small and changes are extensively vetted. The standards for browsers are similarly high.

Your average NodeJS project, on the other hand, pulls in a thousand packages from a thousand authors, supposedly vetted by the community - but not really.

> don't just throw out statements like "a little copying is better than a little dependency". That's how you get dogmatism...

It's a proverb, I didn't make it up. I put in the "Often" just to make it sound less dogmatic. You took it out again, presumably so that your long diatribe doesn't look so misplaced.

Your mileage may vary.




>No you can't, at least not in the general case. You can trust Mozilla because it's Mozilla, but ultimately that's an "argument from authority".

I'm not saying absolute trust, I'm saying I trust them to write a better polyfill for extensions than I will. An argument from authority is okay when the alternative is an "argument from ignorance". I know I don't know the details, and I'm trusting someone who literally writes the runtime to know it better than me.

>Perhaps, but unless the code you are writing is "safety-critical", those "low chances" are outweighed by the fact that a malicious actor can do far more damage than some bug you introduced.

>Also, you have to multiply those low chances with the large amount of micro-dependencies that you might bring in through this attitude.

I'm genuinely curious about this, because from my point of view the number of times I've encountered a malicious actor is extremely small, and those cases are usually dealt with within days if not hours. Encountering my own vulnerable code is much more common, and the same goes for non-malicious bugs. It could just be that I find my own bugs more easily, but I really think there is something to relying on and trusting the community to do something better, faster, and more correctly than I will be able to. Add in the npm audit system, which constantly scans for and reports known vulnerable dependencies, and I'll be able to find and fix bugs and vulnerabilities much faster than if I had written even most of the code myself.
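To make the audit point concrete, here is a toy sketch (not npm's actual implementation) of what that scanning amounts to: comparing installed versions from a lockfile against a list of published advisories. Both the installed set and the advisory data below are invented for illustration; the real `npm audit` queries the registry's advisory database.

```javascript
// Toy version of what an audit does: match installed packages
// against known-vulnerable version ranges. All data here is invented.
const installed = { "left-pad": "1.2.0", "is-odd": "3.0.1" };
const advisories = [
  { name: "left-pad", below: "1.3.0", severity: "moderate" },
];

// Naive semver comparison, fine for plain dotted numeric versions.
const lt = (a, b) => {
  const pa = a.split(".").map(Number), pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] < pb[i];
  }
  return false;
};

// A finding is any installed package whose version falls inside
// an advisory's vulnerable range.
const findings = advisories.filter(
  (adv) => installed[adv.name] && lt(installed[adv.name], adv.below)
);
console.log(findings); // → one finding: the left-pad advisory
```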

I'd love to see some actual studies that aren't just relying on anecdotes to see how things really shake out, but I'm not sure how that would even work: there are so many confounding factors that it is really hard to make a hard rule about these things. There is a lot of garbage code on the internet and across most package managers, so any study would need to separate the "nobody should ever use this" code from the "looks good at first glance" code, and at that point you're just building an automated code reviewer...

>This is a slippery slope argument. I think there's a distinction to be drawn here.

I completely agree, but drawing those distinctions is often arbitrary. You may draw the line at the runtime; I tend not to, because a runtime (or the tools that build the runtime, etc...) often has much more ability to cause problems than any runtime library does. Counting the "number of dependencies" is extremely hard because, like I said, CPython does have quite a lot of dependencies; it just depends on where you draw the line. CPython's build depends on quite a lot of smaller tools, and the compilers it supports and uses depend on even more. As you get further down the stack things slow down and are often vetted much more, but that by itself doesn't mean much, as the "payoff" for a malicious actor who gets in is often correspondingly greater.

And even if you do draw the line at the runtime or compiler, does that mean the JS dependencies in my package.json that are dedicated to compilation and building don't count toward your numbers? And if you don't count the transitive dependencies for something like CPython, then it seems disingenuous to include them in your statements about how a Node package pulls in thousands of dependencies. And if you draw the line at some dependencies being "okay" and others not, based on the number of authors, the number of dependencies, the amount of vetting, and more, then you're doing exactly what I was advocating for above!
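For what it's worth, npm itself already encodes that build-time vs. run-time distinction. A hypothetical package.json (package names and versions invented here) separates the code that ships from the code that only exists to compile and bundle it:

```json
{
  "name": "my-app",
  "dependencies": {
    "webextension-polyfill": "^0.10.0"
  },
  "devDependencies": {
    "typescript": "^5.4.0",
    "esbuild": "^0.21.0"
  }
}
```

Whether the devDependencies half "counts" is exactly the line-drawing problem: `npm install --omit=dev` won't even install it in a production deployment.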

>I put in the "Often" just to make it sound less dogmatic.

That is fair, and I did remove it from the quote. But I started this whole thing because you didn't describe the reasoning behind why you felt that way, and without the why it is literally just dogmatism.

And to be honest I still don't feel you've given a great answer for it, beyond the idea that as the number of dependencies goes up, the risk of a malicious actor goes up. Which I don't necessarily disagree with! I guess I just draw the line at a different spot. For me, the risk of a malicious actor is minuscule compared to the risk of not completing the project at all because I had to write so much code to avoid dependencies, of writing bugs in domains I don't understand, or of copying code into my codebase that I don't fully understand, breaking the dependency link that automated scanning relies on. Not in every project, but in most.


> I'm not saying absolute trust, I'm saying I trust them to write a better polyfill for extensions than I will.

But that's not even the scenario here. You have the choice between bringing in a couple of lines of code you wrote yourself, or another dependency. Sure, in this case the vendor is entirely trustworthy and presumably competent. That's not the general case.

Let's say it was the general case: is it still worth increasing the complexity of your program for that little piece of functionality? Can you judge the runtime cost at all? What about bundle size? (Not applicable in this case, but still.) Does integrating the polyfill cause more work than a simpler solution would?

Those are rhetorical questions, I don't want to keep on with the walls of text. The point is, the argument doesn't stop there.

> Counting "number of dependencies" is extremely hard, because like I said cpython does have quite a lot of dependencies, it just depends on where you draw the line.

CPython itself only has a handful of dependencies (libc, libffi, openssl, zlib, maybe a couple more) and almost all of them are optional. I'm drawing the line at actual dependencies of the program, not the operating system or the compiler (though neither GCC nor LLVM have a lot of dependencies either) or the basic build tools that ship with basically any UNIX-like system.

However, even if we added them all up I doubt we would have more "units" (programs, libraries) than in your average NodeJS project.
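That "units" comparison can actually be made mechanical: npm v7+ writes every installed package, transitive ones included, into the "packages" map of package-lock.json, so counting them is a few lines of Node. The lock data below is a tiny invented stand-in; for a real project you would `JSON.parse(fs.readFileSync("package-lock.json", "utf8"))` instead.

```javascript
// Count the distinct packages a Node project actually pulls in, using
// the "packages" map npm v7+ writes to package-lock.json.
// This lock object is a hypothetical stand-in for a real lockfile.
const lock = {
  packages: {
    "": { name: "my-app" },                                 // the root project itself
    "node_modules/left-pad": { version: "1.3.0" },
    "node_modules/is-odd": { version: "3.0.1" },
    "node_modules/is-odd/node_modules/is-number": { version: "6.0.0" }, // transitive
  },
};

// Every key except the root entry ("") is an installed dependency.
const installed = Object.keys(lock.packages).filter((k) => k !== "");
console.log(installed.length); // 3
```

(`npm ls --all` prints the same tree interactively; on a typical Node app the count lands in the hundreds or thousands, which is the comparison being made here.)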

> And even if you do draw the line at the runtime or compiler, does that mean that the JS dependencies in my package.json that are dedicated to compilation and building don't count toward your numbers?

It doesn't matter, you're probably fucked either way.



