
I haven't looked at Rust, and this sentiment is exactly what has kept me away. I watched Perl, Java, Python, Ruby, Node, and C++ (Boost) fall into the trap of "we know better than the end-users/developers/sysadmins/OS vendor, so let's reinvent dpkg poorly".

Why should cargo be any different? It is solving a problem I don't have (Debian, Ubuntu, OpenBSD, and freaking illumos all have acceptable package management), and creating a massive new problem (there is a whole thread below this one talking about rust dll hell between nightly and stable, and the thread links to other HN articles!). From my perspective all this work is wasted just because some developers somewhere use an OS that doesn't support apt or ports.

Sorry this is so ranty, but I really want to know if anyone has had luck using rust with their native package manager.




TL;DR: I think language-centric package managers do a better job at versioning packages per-project. Here's an anecdote to explain what I mean.

-----

Let's say I want to build a piece of software that depends on some software library written in C at version 1.0.1. It's distributed through my system package manager, so I sudo apt-get install libfoo.

~~ some time later ~~

Now let's say I want to build a different piece of software that also depends on foo, but at version 1.2.4. I notice that libfoo is already installed on my system, but the build fails. After a quick sudo apt-get install --only-upgrade libfoo, this piece of software now builds.

~~ Even later ~~

When I revisit the first project to rebuild it, the build fails, because this project hasn't been updated to use the newer version yet.

I'm fairly inexperienced with system package managers, but this is the wall I always hit. How should I proceed?
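The per-project pinning that language package managers do can be sketched in a few lines. This is a toy model with invented names, not any real tool's behavior: a single global store (apt-style) admits one version of libfoo at a time, while per-project lockfiles (cargo-style) let both builds keep working.

```python
# apt-style: one global version per package.
global_store = {"libfoo": "1.0.1"}

def build_global(requirement):
    """Build succeeds only if the globally installed version matches."""
    name, wanted = requirement
    return global_store.get(name) == wanted

# cargo-style: each project pins its own dependency versions.
lockfiles = {
    "project_a": {"libfoo": "1.0.1"},
    "project_b": {"libfoo": "1.2.4"},
}

def build_per_project(project):
    """Each project resolves against its own pinned versions."""
    return all(v is not None for v in lockfiles[project].values())

assert build_global(("libfoo", "1.0.1"))      # project A builds at first
global_store["libfoo"] = "1.2.4"              # upgrade libfoo for project B
assert not build_global(("libfoo", "1.0.1"))  # ...and project A now breaks
assert build_global(("libfoo", "1.2.4"))      # project B builds
# With per-project pins, both keep building.
assert build_per_project("project_a") and build_per_project("project_b")
```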


I'm arguing that you should just have one package manager, so in my world the only way for this to happen is if both the packages you're installing are tarballs that no one has bothered to port to your system. If the language-specific package manager did not exist, then there would be a better chance the package would exist for your OS already.

Anyway, Debian/Ubuntu has multiple fallbacks for this situation:

a. PPAs

b. parallel versions for libraries that break API compatibility (libfoo-1.0...deb, libfoo-1.2...deb that can coexist).

c. install non-current libfoo to ~/lib, and point one package at it (not really debian-specific)

d. debootstrap (install a chroot as a last resort -- this is better than "versioning packages per-project" from an upgrade / security update point of view, but worse from a usability perspective -- you need to manage chroots / dockers / etc).

I suspect the per-project versioning system is doing b or d under the hood. b is clearly preferable, but hard to get right, so you get stuff like python virtual environments, which do d, and are a reliability nightmare (I have 10 machines. The network is down. All but one of my scripts run on all but one of the machines...)
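Option b can be sketched as a store keyed by (name, API version), so incompatible versions coexist rather than replace each other. A toy illustration with invented names, roughly what Debian's parallel libfoo-1.0/libfoo-1.2 packaging achieves:

```python
# Packages keyed by (name, API version): installing 1.2 does not
# replace 1.0, so projects written against either keep linking.
installed = {}

def install(name, api_version):
    installed[(name, api_version)] = True

def link_against(name, api_version):
    """A build links only against the API version it was written for."""
    return installed.get((name, api_version), False)

install("libfoo", "1.0")
install("libfoo", "1.2")
assert link_against("libfoo", "1.0")  # old project still builds
assert link_against("libfoo", "1.2")  # new project builds too
```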

A long time ago, I decided that I don't have time for either of the following two things:

- libraries that frequently break API compatibility

- application developers that choose to use libraries with unstable APIs that also choose not to keep their stuff up to date.

This has saved me immeasurable time, as long as I stick to languages with strong system package manager support.

Usually, when I hit issues like the one you describe, it is in my own software, so I just break the dependency on libfoo the third time it happens.

When I absolutely have to deal with one package that conflicts with current (== shipped by the OS vendor), I usually do the ~/lib thing. autotools supports something like ./configure --with-foo=~/lib. So does cmake, and so does every hand-written makefile I've seen.
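The ~/lib trick boils down to a search order that prefers an explicitly supplied prefix over the system paths, which is what --with-foo effectively configures. A hypothetical sketch (the function name and file layout are invented):

```python
import os

def find_libfoo(search_prefixes):
    """Return the first prefix that actually contains libfoo.

    Putting a private prefix (e.g. ~/lib) ahead of /usr/lib is the moral
    equivalent of ./configure --with-foo=$HOME/lib: one package gets the
    non-current version while the rest of the system stays untouched.
    """
    for prefix in search_prefixes:
        if os.path.exists(os.path.join(prefix, "libfoo.so")):
            return prefix
    return None
```

A real autotools build does the same thing at configure time rather than at runtime, baking the chosen prefix into the link flags.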

[edit: whitespace]


> talking about rust dll hell

No, what that thread is talking about is that somebody wrote a library to exercise unstable features in the nightly branch of the Rust compiler, which inspired somebody else to write a sky-is-falling blog post claiming that nightly Rust was out of control, presenting a dozen incorrect facts in support of that claim. So now we have to bother refuting the idea that nightly Rust is somehow a threat to the language.

As for the package manager criticism, the overlooked point is that OS package managers serve a different audience than language package managers. The former are optimized for end-users, and the latter are optimized for developers. The idea that they can be unified successfully is as yet unproven, and making a programming language is already a hard enough task that attempting to solve that problem too is just a distraction.


From the thread, I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree--people are talking about when packages will land in stable, but I'd expect all of that to be automated by test infrastructure, or at least trivial enough for developers to work around that it wouldn't warrant a forum thread.

Anyway, it sounds like I stepped on a FUD landmine. Sorry.

It sounds like you work in this space. From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s, and it supports many languages, some of which don't seem to have popular language-specific package managers.

What's missing? Is it just cross-platform support? I can't imagine anything I'd want beyond apt-get build-dep and apt-get source.


> Anyway, it sounds like I stepped on a FUD landmine.

That's the problem with FUD, it gets everywhere and takes forever to clean up. :)

> I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree

Let's be clear: stable is a strict subset of nightly. And I mean strict. All stable code runs on nightly, and if it didn't, that would mean that we broke backwards compatibility somehow. And even if you're on the nightly compiler, you have to be very explicit if you want to use unstable features (they're all opt-in).

Furthermore, there's no ironclad reason that any given library must be on nightly, in that boring old every-language-is-Turing-complete way; people use unstable features because they either make their code faster or because they make their APIs nicer to use. You can "backport" them by removing those optimizations or changing your API, and though that seems harsh, note that people tend to clamor for stable solutions to their problems, so if you don't want to do it then somebody else will fork your library, do it, and steal your users. There are strong incentives to being on stable: since stable code works on both nightly and stable releases, stable libraries have strictly larger markets and therefore mindshare/userbase; and since stable code doesn't break, library maintainers have much less work to do.

At the same time, the Rust developers actively monitor the community to find the places where relatively large numbers of people are biting the bullet and accepting a nightly lib for speed or ergonomics, and they then actively prioritize those unstable features (hence why deriving will be stable in the February release, which will get Serde and Diesel exclusively on stable--together they represent the clear plurality of reasons-to-be-on-nightly in the wild).

> What's missing?

I've already typed enough, but yes, cross-platform support is a colossal reason for developers favoring language-specific package managers. Another is rapid iteration: it's way, way easier to push a new version of a lib to Rubygems.org than it is to upstream it into Debian. Another is recency: if you want to use the most recent version of a given package rather than whatever Debian's got in stock, then you have to throw away a lot of the niceties of the system package manager anyway. But these are all things users don't want; they don't want to be bleeding-edge, they don't want code that hasn't seen any vetting, and they really don't care if the code they're running isn't portable to other operating systems.


> From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s

I think a more accurate assessment would be that both Red Hat and Debian extended their repositories to cover enough packages that developers often take the easy route and use distribution packages instead of language-provided ones, because it's easy and there are some additional benefits if you are mainly targeting the same platform (and, to some degree, distribution, if that applies) that you are developing on.

Unfortunately, you then have to deal with the fact that some modules or libraries invariably get used by parts of the distribution itself, making their upgrade problematic (APIs change, behavior changes, etc.). This becomes especially painful when using or targeting a platform or distribution with long-term support, where you could conceivably have to deal with 5+ year old libraries and modules still in use. That sometimes necessitates packaging multiple versions of a module or library, but that's a pain for package maintainers, so they tend to do it only for very popular items.

For a real, concrete example of how bad this can get, consider Perl. Perl 5.10 was included in RHEL/CentOS 5, released in early 2007. CentOS 5 doesn't go end of life until March 2017 (10 years, and that's prior to extended support). Perl is used by some distribution tools, so upgrading it for the system in general is problematic and needs to be handled specially if all provided packages are expected to work (a lot of things include small Perl scripts, since just about every distro includes Perl).

This creates a situation where new Perl language features can't be used on these systems, because the older Perl doesn't support them. That means module authors avoid the new features if they hope to have their module usable on these production systems. Authoring modules is a pain because you have to program as if your language hasn't changed in the last decade if you want to actually reach all your users. Some subset of module authors decide they don't care; they'll just modernize and ignore those older systems. The package maintainers notice that newer versions of these modules don't work on the older systems, so core package refreshes (and third-party repositories that package the modules) don't include the updates--possibly not even the security fixes, if it's a third-party repository without the resources to backport a fix. If the module you need isn't super popular, you might be SOL for a prepackaged solution.
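The bind described above isn't Perl-specific; any language with decade-old system installs forces the same dance. As a hedged illustration, here is the equivalent pattern in Python, where a module author feature-detects at runtime rather than assume a modern interpreter:

```python
import sys

if sys.version_info >= (3, 7):
    # Use the newer stdlib feature when the interpreter has it.
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: int
        y: int
else:
    # Fall back to a hand-written class on older interpreters,
    # just as Perl authors avoided post-5.10 features for CentOS 5.
    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

p = Point(1, 2)
assert (p.x, p.y) == (1, 2)
```

Either branch behaves the same to callers; the cost is that the author is effectively writing for the oldest interpreter they still support.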

You know the solution enterprise clients take for this? They either create their own local package repository, packaging their own modules and adding the repo to their system package manager, or they deploy every application with all dependencies included so it's guaranteed to be self-sufficient. The former makes rolling out changes and system management easier, but the latter provides a more stable application and developer experience. Neither is perfect.

Being bundled with the system is good for exposure, but it can be fairly detrimental to keeping your user base up to date. It's much less of a problem for a compiled language, but it still shows up, to a lesser degree, as library API changes.

Which is all just a really long-winded way of saying the problem was never really solved, and definitely not in the '90s. What happened instead is that the problem was largely reduced by the increasing irrelevancy of Perl (which, I believe, was greatly accelerated by this). Besides Python, none of the other dynamic languages (which of course are more susceptible to this) have ever reached the ubiquity Perl did in core distributions. Python learned somewhat from Perl in this regard (while suffering the same problem at the same time), but it also has its own situation (2->3), which largely overshadows this, so it mostly goes unnoticed.

I'm of the opinion that the problem can't really be solved without very close interaction between the project and the distribution, such as .NET and Microsoft. But that comes to the detriment of other distributions, and still isn't the easiest to pull off. In the end, we'll always have a pull between what's easiest for the sysadmins/users and what's easiest for the "I want to deploy this thing elsewhere" developers.


Cargo isn't competing against nor replacing distribution package managers. Cargo is a build tool, not a package manager. You're free to package Rust software the same way you do non-Rust software for specific distributions. They are entirely different unrelated things with no overlap. Cargo solves a lot of problems that we've been facing for a long time. We have the Internet now, so let's use it to speed up development.


apt & ports don't follow you into other OSes. Language-specific package managers do, without requiring entanglement with the OS or admin permissions. All you need is sockets and a filesystem.

I think the language-specific ones will win for developer-oriented library management for platform-agnostic language environments.


Apt and/or ports follow you to every OS kernel I can think of (Linux, macOS, Windows, *BSD, the Solarises), though the packaging doesn't always (for instance, the OpenBSD guys have a high bar, but that's a feature).

My theory is that each language community thinks it will save them time to have one package manager to rule them all instead of just packaging everything up for 4-5 different targets.

The bad thing about this is that it transfers the burden of dealing with yet another package manager to the (hopefully) tens or hundreds of thousands of developers who consume the libraries, so now we've wasted developer-centuries reading docs and learning the new package manager.

Next, the whole platform agnostic thing falls apart the second someone tries to write a GUI or interface with low-level OS APIs (like async I/O), and the package manager ends up growing a bunch of OS-specific warts/bugs so you end up losing on both ends.

Finally, most package manager developers don't seem to realize they need to handle dependencies and anti-dependencies (which leads you into NP-complete land fast), or that they're building mission-critical infrastructure that needs bulletproof security. This gets back to that "reinvent dpkg poorly" comment I made above.
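The dependency/anti-dependency point deserves emphasis: once packages can conflict as well as depend, finding a consistent install set is a satisfiability problem, which is why naive resolvers resort to exhaustive search. A toy sketch with invented package names:

```python
from itertools import product

def resolve(versions, requires, conflicts):
    """Brute-force search over one version per package.

    Exponential in the number of packages, which reflects the
    NP-completeness of the general problem; real resolvers (apt, SAT
    solvers like libsolv) apply heuristics on top of the same search.
    """
    names = list(versions)
    for combo in product(*(versions[n] for n in names)):
        chosen = dict(zip(names, combo))
        # Every requirement must pin an allowed version...
        if any(chosen.get(pkg) not in allowed for pkg, allowed in requires):
            continue
        # ...and no conflicting pair may be co-installed.
        if any(chosen.get(a) == av and chosen.get(b) == bv
               for (a, av), (b, bv) in conflicts):
            continue
        return chosen
    return None

versions  = {"libfoo": ["1.0", "2.0"], "libbar": ["1.0", "2.0"]}
requires  = [("libfoo", {"2.0"})]                      # app needs libfoo 2.0
conflicts = [(("libfoo", "2.0"), ("libbar", "1.0"))]   # these can't coexist

assert resolve(versions, requires, conflicts) == {"libfoo": "2.0",
                                                  "libbar": "2.0"}
```

Add a few more conflict clauses and the search space blows up, which is exactly the "NP-complete land" above.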

In my own work I do my best to minimize dependencies. When that doesn't work out, I just pick a target LTS release of an OS, and either use that on bare metal or in a VM.

Also, I wait for languages to be baked enough to have reasonable OS-level package manager support. (I'm typing this on a devuan box, so maybe I'm an outlier.)



