ZeroVer: 0-Based Versioning (0ver.org)
102 points by dmitryminkovsky on July 8, 2023 | 88 comments



People just love LTS and backwards compatibility too much. I'm one of them. But it slows the industry, when you can't do API refactors and have to keep bad decisions forever.

I think library authors should be more relentless and break compatibility every few years. We just need some conventions to not do so very often. Like new major version every year, deprecate API on the next major version, remove deprecated API on the following major version. So you have 1 year to rewrite your app if necessary.

And supporting old versions for those enterprises who would rather pay than upgrade might be a good source of income.


> So you have 1 year to rewrite your app if necessary

Multiply this by the thousands of dependencies modern apps have and the only thing you will ever do is rewrites.


Most applications (even large ones) do not have thousands of direct dependencies.


Good point. It's even better when a dependency multiple levels down gets a breaking change and the direct one is unmaintained.


Who keeps unmaintained, direct dependencies in their projects? Seems like basic hygiene to replace them.


*Node.js & Python developers enter the chat*


> People just love LTS and backwards compatibility too much. I'm one of them. But it slows the industry, when you can't do API refactors and have to keep bad decisions forever.

Hard disagree. API churn is (one of) the real costs of using libraries / external dependencies, so people would rather just reimplement things themselves or copy the library code directly into their project.


> I think library authors should be more relentless and break compatibility every few years. We just need some conventions to not do so very often.

I indeed did this years ago---I'm the original author of Chrono [1]---and it wasn't well received [2] [3] [4]. To be fair, I knew it was a clear violation of semantic versioning, but I didn't see any point in strictly obeying it before we reached 1.0, so I went ahead. People complained a lot and I had to yank the release in question. By then I realized that enough people religiously expect semantic versioning (for good reasons, though) and that it's wiser to avoid useless conflict.

[1] https://github.com/chronotope/chrono

[2] https://github.com/chronotope/chrono/issues/146#issuecomment...

[3] https://github.com/chronotope/chrono/issues/156

[4] https://github.com/chronotope/chrono/blob/main/CHANGELOG.md#...


I understand from the author's perspective that everything below 1.0 is subject to change, but from the hobby user's perspective I see 0.3 go to 0.3.1 and think "oh, bug fix, that means I won't read it", without expecting semver.


All those things happened when the Rust crate ecosystem was still very much in flux (back in 2017), and I had some good reasons:

- Serde had made a very slight but breaking change in 1.0, and at that time I think it was impossible to support both Serde 0.9 and 1.0 in a single crate without a hacky workaround (which I only learned about much later). So if I had to pick only one version to support, it ought to be 1.0, as the change was trivial to resolve.

- Cargo's use of semantic versioning, while documented, is not strictly conforming: 0.x.y is considered compatible with 0.x.z where x > 0 and y < z [1].

- People complained a lot when Chrono went from 0.2.x to 0.3.0 as well. This is IMO the biggest reason to issue a breaking change; if people were going to complain either way, I wanted to make the choice that benefits the whole Rust ecosystem more.

If this happened today I would agree that I shouldn't have done it, but I think it was not that clear-cut at the time.

[1] https://semver.org/#spec-item-4


OTOH you, as a library supplier, may feel somewhat hamstrung, unable to improve things you think ought to be improved; but consider the productivity hit to your downstream consumers if you constantly break things for them. Stepping back to consider all parties, for even moderately popular projects the balance is obviously tipped in favor of the consumers.

There are libraries out there (such as FFmpeg, iirc) that will do a yearly major version with breaking changes. This is a good approach imo. FFmpeg consumers know what to expect and when to expect it.


> But it slows the industry, when you can't do API refactors and have to keep bad decisions forever.

It slows the industry when you're spending all your time rewriting code that already works.

The question is, who is slowed down: API creators or API users? If you make regular breaking changes to APIs, it's API users who get slowed down; if you don't, it's API creators who get slowed down.

Given the entire point of things that have APIs (libraries, frameworks, centralized services, etc.) is that there are many users and few creators, it's pretty clear which slows down more people.

Additionally, with good API design, you can often maintain namespaced APIs in tandem with very little additional cost. I've got a /v1/blah API and a /v2/blah API on one of my clients' websites--the v1 directory hasn't been touched in 7 years, because all the bugs anyone cares about have been fixed. It still has users (at least officially, I haven't looked at the reporting to see how often they're actually hitting those APIs). The users simply don't care about the new features in the new API, and it's not our place to force them to care.

You can do similar things with libraries (think sqlite vs. sqlite3) but this is obviously harder with frameworks (which is one of the reasons to not like frameworks). It doesn't work everywhere but it works often.
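A minimal sketch of the namespaced-API approach, assuming a Python web service (the framework choice and handlers are illustrative, not the commenter's actual stack): both versions are mounted side by side, so v1 can sit untouched for years while v2 evolves.

    from fastapi import APIRouter, FastAPI

    app = FastAPI()
    v1 = APIRouter()
    v2 = APIRouter()

    @v1.get("/blah")
    def blah_v1():
        # Frozen long ago; only bug fixes land here.
        return {"items": ["a", "b"]}

    @v2.get("/blah")
    def blah_v2():
        # New response shape; v1 clients are unaffected.
        return {"items": [{"id": "a"}, {"id": "b"}], "count": 2}

    app.include_router(v1, prefix="/v1")  # serves /v1/blah
    app.include_router(v2, prefix="/v2")  # serves /v2/blah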


It takes 3 years minimum, often 5, for non-LTS Debian to cycle through a library revision.

IMO, if you think distros are a thing of the past, or that 3 years of support for the biggest base distro is slowing things down, you're living in a bubble.

I know you prepended this with a statement saying you love LTS too, but to many, LTS is a decade or more.

And really, I have no interest in running 'new shiny'. That is the absolute opposite of stable. That is where horrible, life-altering mistakes live. If you want to increase your workload 100x, run bleeding edge.

And bleeding edge is anything that has any code change, outside of bug fixes and security fixes.

I know my position is not popular, but that doesn't make it wrong.


Same with software that requires some time to master. I can't count all the times I invested real effort into doing things properly, only to have it all become useless and invalidated because upstream decided to throw it all away.

With an attitude like yours, why should I even bother reading your documentation? Give it 2 years and it will all be obsolete. Waste of time.

Fuck innovation. I want tools that exist long enough that I can master them.


> But it slows the industry, when you can't do API refactors and have to keep bad decisions forever.

That's not true. You can simply put any breaking changes into separate namespaces. Now you have limitless backwards compatibility and yet users can selectively upgrade whenever they want the new features.

Maintain a non-trivial node project for a while and you will see why some people like stability.
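To make the separate-namespaces idea above concrete at the library level, a sketch (the package layout and function names are hypothetical): ship the redesigned API as a sibling module and keep the old one importable indefinitely.

    # mylib/v1.py -- the original API, frozen except for bug fixes
    def parse(text):
        # Original behavior, warts and all.
        return text.split(",")

    # mylib/v2.py -- the breaking redesign lives in its own namespace
    def parse(text, strip=True):
        parts = text.split(",")
        return [p.strip() for p in parts] if strip else parts

    # Callers opt in explicitly, so nothing breaks on upgrade:
    #   from mylib import v1   # old code keeps working
    #   from mylib import v2   # new code gets the new behavior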


Libraries are not really a problem, as you can have multiple major versions installed side by side: I can have gtk2, gtk3 and gtk4 alongside each other, each offering its API to the applications that use it, while the actively developed code (in the library) does not need to handle deprecated API.

The bigger problem is daemons with APIs, where it is usually not possible to run multiple versions side by side (as they would compete for the same resources or data), so one codebase has to offer multiple API versions internally.


> I think library authors should be more relentless and break compatibility every few years.

...isn't it just how most LTSs work? LTS is long-term support, not life-time support.


I don't think you should be afraid of breaking backwards compatibility. Look at WordPress, which has maintained backwards compatibility for too long: it will happily run plugins that were abandoned ten years ago.

When you break compatibility, you force out abandoned crap. I agree you don't want to do it too often; but not doing it at all is (IMHO) worse.


While this is probably satire, I sort of agree with it!

I always thought you just need two numbers, a.b

You increment b when you change something in a backwards compatible way.

You increment a when you make a breaking change.

If you are used to semver, it is like ditching the minor version and calling it a patch.

a.b is of course isomorphic to the 0.a.b system mentioned here.

The disadvantage is that a patch-only downgrade in semver may now be a breaking change in twover, but that is a rare edge case IMO.
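The isomorphism is literal: prepending a zero turns a twover version into a valid SemVer one, and vice versa. A toy sketch:

    def twover_to_semver(a, b):
        # a.b -> 0.a.b: breaking changes bump the SemVer minor,
        # compatible changes bump the SemVer patch.
        return (0, a, b)

    def semver_to_twover(major, minor, patch):
        assert major == 0, "only the 0.a.b subset maps back cleanly"
        return (minor, patch)

    assert twover_to_semver(3, 7) == (0, 3, 7)
    assert semver_to_twover(0, 3, 7) == (3, 7)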


> You increment b when you change something in a backwards compatible way.

Problem is that 'backwards compatible' is not a black-and-white criterion. Most non-trivial development could lead to changes in behavior (or at least in performance) that, while not part of the API contract, could still be relevant for users.

For that reason, it makes sense to have a.b.c scheme, where 'b' is for regular backwards-compatible development, while 'c' is for targeted bugfix releases, which are hopefully devoid of such behavioral changes.


I think you have changed my mind, thanks!


It depends on the library. I personally like the minor vs patch distinction. If I see a patch version I might update immediately because I don't want a known bug in my application, but if I see a new minor feature version I might wait a bit.


Same, but note a minor version encompasses all patches since the last release as well. You could have a single feature and countless bug fixes, but it will only show up as a meagre minor bump, with a suspicious zero in the patch field.


In theory the patch version should be painless and always a given upgrade. The minor version might have at least the same number of important bug fixes; it just comes with the additional cost of larger changes as well.

Some fixes only come with larger changes. Like the recently posted rust regex 1.9 release. Only a rewrite of the library fixed some long-standing issues.


I prefer negative versioning for prerelease software. It's like a countdown to when your project will be viable and ready to show to the world.

Currently on version -2.-0.-61 of my social media network for dogs. It's getting there!


How do you know when it'll be finished, before the fact?


That’s not entirely unlike TeX versions approaching pi.


Maybe it is time to consider INvers, irrational number versioning, as used in e.g. TeX https://en.m.wikipedia.org/wiki/TeX

In TeX the version approaches pi: every new version adds one more decimal digit. Elegant, and it will hold forever!

TeX, now at version 3.141592653, is 45 years old. Its companion Metafont has version number 2.71828182; you can see where this is going.


Donald Knuth intentionally chose this versioning scheme to point out how much he trusts his software to never change.


Somehow we need a less horrible SemVer or a less horrible social contract around SemVer.


Or just use ISO8601-formatted dates as versions and derive huge benefits.

1. version numbers sort numerically and lexicographically in a sensible way, including across projects and packages which use the same format

2. users get educated that these preciously-held ideas they have about software version numbers are complete superstition. Like "something with a zero major number means not production ready", "something with a zero minor number means I should wait until there's a patch", "something with a major number increase means backwards-incompatible", "something with a minor number increase means backwards-compatible"

3. You know when a particular version (of everything) came out. "We started seeing a weird bug on X date" is no longer impossible to figure out.
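The sorting claim is easy to check: because ISO 8601 dates are fixed-width and zero-padded, string order and numeric order agree. A quick sketch:

    releases = ["20230708", "20211225", "20230102"]

    # Lexicographic (string) order matches numeric order.
    assert sorted(releases) == sorted(releases, key=int)
    print(sorted(releases))  # ['20211225', '20230102', '20230708']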


I've done this for personal projects for years, for exactly the reasons you state - other than user re-education, my projects don't have enough users to make an impact. For GitHub releases, I get the CI script to do ``git log -1 --format=%cd-%h --date=format:%Y%m%d-%H%M%S'' (producing output along the lines of "20230708-150500-1234567") and use that as the suffix. (Add a -prerelease suffix on if it's coming from a non-default branch.) This sorts nicely and saves me 2 minutes if I need to find the commit in the history.

(These are self-contained projects. I suppose semver does make some sense for libraries that you link with.)

Professionally, it's been 99% Perforce for about 15 years, so it's routine to use the submitted changelist number, submitted changelists being numbered in the order they were subsequently committed. Sadly not fixed-width, but at least Explorer sorts them sensibly.

Two difficulties I have had doing this with git:

- there doesn't seem to be a way to get git to enforce UTC, so the dates are in my local time zone (for my projects this is not really an issue, and my timezone is almost UTC anyway)

- the CI system runs separate builds for different targets, and using the git commit timestamp ensures all builds get the same time stamp. But it's then possible to end up with timestamps significantly different from the actual release time, or (worse) out of order. I could probably do something better about this than my current "solution" of doing nothing, but this has only happened a couple of times
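A sketch of the same stamping scheme as a small Python helper, with one hedge on the UTC point: if I read the git-log docs correctly, --date=format-local: honors the TZ environment variable, so forcing TZ=UTC should sidestep the local-time-zone issue.

    import os
    import subprocess

    def build_version(prerelease=False):
        # Committer date + short hash, e.g. "20230708-150500-1234567".
        out = subprocess.check_output(
            ["git", "log", "-1", "--format=%cd-%h",
             "--date=format-local:%Y%m%d-%H%M%S"],
            env={**os.environ, "TZ": "UTC"},
            text=True,
        ).strip()
        return out + ("-prerelease" if prerelease else "")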


That only works for software with simple linear versioning. If I have two major versions (say 2 and 3), I could still release a minor version for the older major version (so I would release 2.8, then 3.0, then 2.9, then 3.1).


When working with versioning software that requires a version in the format A.B.C, I like to use YYYY.MMDD.N, where N is the number of versions already released on that day.
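A minimal sketch of that scheme, assuming you can list the versions already published:

    from datetime import date

    def next_version(published, today=None):
        # YYYY.MMDD.N, where N = versions already released that day
        # (so the first release of the day is .0).
        d = today or date.today()
        prefix = f"{d.year}.{d.month:02d}{d.day:02d}"
        n = sum(1 for v in published if v.startswith(prefix + "."))
        return f"{prefix}.{n}"

    assert next_version([], date(2023, 7, 8)) == "2023.0708.0"
    assert next_version(["2023.0708.0"], date(2023, 7, 8)) == "2023.0708.1"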


My rules for 4-number semver:

1) major: public API change. If you don't have a public API, this should never be anything but 1 and can be hidden from the user. End users don't want these; they scare them.

2) minor: any planned release that doesn't break API. End users love these and plan around them.

3) revision: unplanned emergency hotfixes. Naming it this way means the "next minor" we were talking about with all stakeholders is still the next minor. It also means our version numbers look like our git DAG, since this one would be a branch from the last tag instead of from main.

4) release: sometimes something goes wrong during the release itself. The first 3 numbers are public, this is internal-only and only appears in git tags and internal deployment notes. This way every push to prod has a unique version number, but all our change management documents are still accurate even if we had to push 2 or 3 times for a single release.

Why? First digit is about compatibility. All the other digits are about planning.

"We're working towards 1.3"

"we think this feature will be in 1.4".

"We had to release 1.2.1 because of an emergency somebody put Arabic text in their profile picture filename and that brought down the site."

"Turns out that trick with the release pipeline didn't work in prod so we had to make 1.2.1.1 while deploying".


That at least fixes the qualitative nonsense between minor and patch updates.

I think there need to be better project definitions around what constitutes a major change.

Projects need to be able to define things like dropping support for old versions of the underlying language in minor versions, so that the last version some people can install is "3.2" and "3.3" may not install at all for them. That means they are technically in a state where they need to do work to upgrade and are "broken" in a sense, but the actual public API of the software has not changed between "3.2" and "3.3". Supported OS/distro versions should also be able to be abandoned in minor releases. Toolchain updates can also happen in minor releases. Pulling in new major versions of dependencies (technically breaking for anyone who hits a diamond-dependency issue, but producing no major breaking API changes) should also be able to happen in minor versions.

That means that the contract isn't "I can pull in minor versions and you can never force me to do work" but more strictly that the public API the software exposes won't update.

There's also the problem with semver pinning where projects put hard floor and ceiling pins on all their dependencies, even though their software may be fine with a 5-year-old version of the dep (they've just never tested it) and may work fine with the next major release of the dep without any changes at all. Ideally, for that last problem, the compatibility matrix fed into the dependency solver should be a bit more malleable, so that the engineer can realize that the next version of a dependency breaks everything and retcon the compatibility of their software to pin to the last working version of that dependency. This breaks the perfect immutability of literally everything about a software release, but allows for not being able to predict the future.


What are the horrible things about SemVer? Can you give details?


Semantic Versioning requires you to declare a public API, which is not even remotely possible for many projects. If the public API surface is clear, semantic versioning does indeed work well; otherwise it doesn't give much information, as users have no idea what the public API would be. Calendar versioning [1] or even a single-number version is preferable in such situations.

[1] https://calver.org/


Yes! Thank you. Exactly this :)

Nearly every org I’ve worked in has used semver internally and nearly every time their version numbers were just incremented arbitrarily because there wasn’t an exposed API.

This led to countless problems, not least because semver usually requires one to manually set the version number based on the change log, and people are generally pretty bad at changing point releases.

So I’ve usually ended up changing the versioning scheme to build number (generated by the CI/CD tooling) plus some extra information like git hash and/or timestamp - depending on the application and whether that build information can be easily encoded as additional metadata or not.


In my opinion, Semver only makes sense for shared libraries, not for applications, or OSes, or for APIs exposed on the network.

For applications, it goes just like you said.

For APIs on the network, the caller should only get to control the breaking version their request gets routed to (the rest is abstractly owned by the service provider).


> Semver only makes sense for shared libraries, [...]

While it does make more sense for them, that is not even as clear as it seems. Library authors rarely define the public API because it is very tedious and hard to do completely---there are only some sort of fuzzy and implicit "common sense" definitions. Whenever "breaking" changes happen the definition gets stronger (but still incomplete), and over time it comes to encompass every observable aspect, as Hyrum's Law suggests.

In either case semantic versioning is not tremendously useful because either users have an incomplete expectation of what major, minor and patch versions mean, or they will be notified of every possible change and the version distinction becomes useless. Semantic versioning is still useful because it was a codification of existing practice where the expectation can be good enough to avoid most issues. There is no actual value added by the codification in my opinion.


Many languages have explicit access levels. Others have naming conventions. Library authors use them often.

I think more software follows semantic versioning than before it was codified.


That's exactly my point. To the best of my knowledge, "semantic versioning" refers to Tom Preston-Werner's codified version, which is not valuable for the aforementioned reasons. The general idea behind semantic versioning is of course valuable, but we already had a word for that... it's called versioning.


I thought you were separating semantic versioning practices and the Semantic Versioning codification.

Versioning includes many practices outside semantic versioning obviously. Rational numbers. Odd minor version is unstable. Last number at least 90 is unstable. No patch versions. No minor versions. Incompatible minor versions. Dates.

I rejected your claim library authors rarely define the public API. And Hyrum's law makes semantic versioning imperfect but not useless.


Versioning is fundamentally the communication of changes to users, and those changes are inherently semantic. Semantic versioning as a general idea is thus not different from versioning as usual---one or more numbers compared in lexicographic order by importance. The actual version format is at best superficial, and distros have done a good job of unifying all the different version formats.

The Semantic Versioning codification does attach specific semantics to major, minor and patch versions, but those semantics can't exist without the public API. So the codification itself is nothing more than a stipulation that there should be three major categories of changes. The codification would have been much more useful if it included specific rules for particular languages and contexts, and there are indeed some noble attempts. But they are hardly complete and users can't rely on them. For example, Rust's Cargo defines a relatively complete semver guideline [1], but it doesn't define whether changing the minimum supported Rust version (MSRV) is a breaking change or not. So a minor version upgrade may or may not increase the MSRV, and users have to consult the release notes to find out anyway. If users can't be entirely sure whether a minor version is breaking or not for their purposes, is the entire stipulation worthwhile?

[1] https://doc.rust-lang.org/cargo/reference/semver.html

> I rejected your claim library authors rarely define the public API.

I agree that they do think they define the public API, sometimes only in their minds, sometimes in the form of the public source code. But the source code is only a portion of the public API, and every major library has suffered from numerous mismatches between authors' and users' definitions of the public API. The public API in semantic versioning clearly means the latter, so my claim that library authors rarely define the public API (in the second sense) still holds. But I should have been clearer about the two different notions of the public API---sorry for that.

> Hyrum's law makes semantic versioning imperfect but not useless.

The general idea, yes (you can still communicate that changes have happened). The codification, no (three-part versions are no longer useful).


Explicit access levels keep on getting eroded.

Generations of programmers come up writing web services where the router forms a hard API layer and they don't see the point in public/private/protected any more, because nobody is ever linking against their code so it doesn't matter.


Generations of programmers wrote application code where access levels didn't matter. The concept of public and private API is relevant in web services even when language access levels aren't. And erosion is not evident to me.


People take it too seriously and don't realize that you can't realistically categorize every single change neatly into 3 separate breakage categories. Arguments abound about how to manage this properly with all sorts of schemes. The fact that "0" is a special case that deserves any consideration is an example of it being broken, imo. What does it actually matter what the first digit is?

Version numbers just denote a change happened and you want them to roughly resemble some sort of chronological ordering. Everything else is gasoline for flame wars and company policies.


To me semver only makes sense if critical bug (or security) fixes get backported to old major version(s). Otherwise downstream consumers do not really have true choices to make based on the info deduced from semver. Basically, if as an upstream your intent is not to support old versions, then that heavily implies that everyone should update to the latest asap regardless of the breakage.


Even if I always take the latest for direct dependencies, semver is still helpful in preventing breakage from incompatible upgrades to indirect dependencies. If I depend on library A, and library A depends on library B, I can't fix any breakage from an incompatible update to library B. I need to wait for library A to update.
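Concretely, this is what compatible-range requirements buy you. A sketch using Python's packaging library (the requirement string is a hypothetical stand-in for whatever library A actually declares):

    from packaging.specifiers import SpecifierSet

    # Hypothetical: library A declares it works with any compatible B 1.x.
    a_requires_b = SpecifierSet(">=1.4,<2")

    print("1.9.2" in a_requires_b)  # True: the resolver may take this upgrade
    print("2.0.0" in a_requires_b)  # False: B's breaking release is held back
                                    # until A itself updates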


I can’t tell if this is serious or satire. There’s no spec.


Satire, for sure. Check the about page. That said, with the number of projects on this list, it might as well be serious.


It's satire. If you're unsure, have a look at their about page [1]. They are criticising the practice of staying on a 0.x release for years even though your project has long since been used in production.

[1] https://0ver.org/about.html


It's obviously satirical. If so, the author is brilliant. Otherwise, well...

I mean just look at the project show cases. Included are the usual colossal cluster^Wframeworks that power our decaying software infrastructure.

Personally, I don't trust anything that either stays perpetually under v1.0 or exceeds v10-15.


> exceeds v10-15

What do you think about internet browsers like Firefox and Chromium?


I find it ridiculous that Chrome, Edge and Firefox are all currently around v115. It's just a marketing term now. I've seen Firefox and Thunderbird major version numbers change for simple bugfixes. A release date would be enough.


There's no point in calling it 1.0 unless you want to break compatibility with 0.x. Sometimes you get the design right on day 1, I guess.


In SemVer, you can break compatibility with 0.x within 0.x. 1.x is the promise that there won't be any future compatibility breaks within the 1.x series.


It should be satire, but it is serious. The spec is SemVer, but only versions starting with a zero are allowed. This just shows that SemVer is bullshit.


It is satire. They want you to do SemVer (or CalVer, etc.) properly. There are arguments against SemVer, but I don't see how this is one of them.


There is no proper use of SemVer. That's why people pay lip service to it, but stay with the zero.


The 0ver projects follow SemVer.


In a broad sense maybe, but not really. The spec says:

> 4. Major version zero (0.y.z) is for initial development.

I'd argue that most of the projects we are talking about here are long past initial development.


For sure. The problem though is that the strongest points of SemVer are the “machine-readable”, straightforward rules, in particular the rules about bumping the major version. But (like you note) SemVer also has these soft rules that go beyond how to do versions and end up really dictating how a project should be run:

1. You should have a 0.x.x period for initial development

2. This is the not-yet-stable part of the project lifecycle

3. After 1.x.x. you should be conservative about bumping the major version (not sure if this is from the specification webpage or if this is just the culture around SemVer though)

Point 3 might be what keeps projects on 0ver.

IMO SemVer itself (the culture around it can’t be controlled so that’s outside the domain of discourse) should have just dictated the machine-readable things. Thus points 1 and 2 are fine, but point 3 really discourages large major version integers. I guess some projects can’t realistically follow that. Not because the developers are bad necessarily but simply because breaking changes are part of the domain, more or less.

It seems that only projects that end up reaching a low major version over, say, a decade, fit into the SemVer culture (meaning that people won’t complain about the versioning practice). Maybe some projects simply feel that they get less pushback if they stay on 0ver instead of going for 1.0.0 and then having to go up to 7.0.0 or beyond.

A specification which only consists of three integers (plus the optional “beta”/“rc” suffixes) is inherently limited, which is why it should be very limited in scope. SemVer should have just stuck to dictating the machine-readable part without having any opinions about how large the major version integer gets. (Again, maybe this is the culture around SemVer, not something from SemVer itself.)

EDIT: Less verbosely: SemVer seems to dictate both the breaking-change versioning and infrequent major bumps after 0.x.x. I think that is an overreach if you want your specification to be focused and uncontroversial, because in practice people will end up going back and forth about the tradeoffs.

See also: https://news.ycombinator.com/item?id=28090144



The purpose of versioning is to communicate something to your users. (Versions also need to increase over time for package managers to work, but that's an easy bar to reach.) SemVer tries to explicitly define exactly what's being communicated: API compatibility. I think that's great, especially for libraries, but it's neither the whole story nor what's most meaningful for most projects.

The most obvious and nefarious example is that the most severe and painful kinds of backward incompatibilities are superficially permissible under SemVer: behavior changes. To confuse the issue even more, these behavior changes might be to fix a bug and restore the original or intended behavior of a feature! There's no single best way to communicate that to users through a version number: if libfoo v1.4.3 broke a behavior from 1.4.{0,1,2}, should the fix be in v1.4.4, or in v1.5 because technically you're creating a backward incompatibility? Does the answer change if the buggy behavior has been around for multiple patch releases or multiple minor releases? Does the scale of the behavior difference impact the versioning scheme chosen? Does the approximate number of users impacted affect the versioning scheme chosen? Probably!

ZeroVer is, in my opinion, a hacky but fine solution to this: no guarantees! The developers just want to develop and it's up to the consumers of the project to figure out what release they want to use. ZeroVer is when a project chooses not to try to communicate very much through version numbers. I think that's often better than some strict adherence to SemVer that falls apart under any sort of reasonable scrutiny.

I like how browsers have gone: basically give up on the traditional Major Version Number. A Chrome 2 or Firefox 2 that is a radical redesign would probably be an entirely new product with new branding and versions. So just bump the first number a lot to communicate feature releases to users, and bump the other numbers for basically internal build reasons. The minor, patch, and build numbers are free to be used and abused for the many complex purposes that incredibly complex and popular projects like browsers (and operating systems) have.

I think a lot of projects, Nomad included, would probably be best represented by BrowserVer. Nomad is deeply committed to incremental improvements and backward compatibility, so any "Nomad 2.0" efforts are more likely to happen under a new project. Frankly Nomad 1.0 was more about marketing than any sort of meaningful feature or compatibility promise: we wanted to communicate Nomad was stable and reliable. Going from 0.x -> 1.x is an easy way to communicate that even if nothing more significant happened from 0.12 -> 1.0 than had happened from any 0.X -> 0.Y.


I think there needs to be a much tighter definition of what the API is and not just the entire surface area.

So if a project does a major update of one of its deps, without changing its own API, or deprecates support for an old language version or distro, it should be able to ship those in a minor version.

That means that consumers who aren't keeping up with the times may be cut off in a minor update and have to do work to consume the next update. There needs to be less of an expectation that "it isn't a major update, so I won't have to lift a fucking finger and it's your fault if I need to", which is what SemVer has socially turned into.


This is a funny lampooning of SemVer. That’s at least how I choose to interpret it.


I'm actually interested in practical examples of alternatives to Name+SemVer for

(1) data-analysis-oriented code, and (2) code to run experiments (e.g. psych paradigms).

I often find that in such code, forking is rather more common. That is, the code bases become wider rather than deeper. For example, we might run several experiments that have a strong resemblance to each other, but have any number of (experimentally relevant) tweaks. Within each fork, I rename the project and restart the semantic versioning.



The best versioning is a svn/p4/g4-style monotonically increasing changelist/revision number.


The page should be updated; Terraform is nowadays on the 1.x version timeline.


Ah yes, an ideal versioning scheme for modern video game projects!


I noticed this the other day while writing a small server in Python. FastAPI is 0.100, fine. But I was surprised to find:

- Uvicorn is 0.22

- httpx is 0.24

- starlette is 0.28

And so on and on. More generally, the quality of Python's tooling and ecosystem is astonishingly low compared to the investment that pours into it every day.


How on earth do people get from "the version number starts with a zero" to "the quality is astonishingly low"? It blows my mind.

Edit to add: If I were an open-source package maintainer again, my first action would be to massively bump the major version numbers of my packages. HUGE increase in quality right there.


Not the actual quality but I suspect the perception of quality by some people would genuinely be increased by bumping the version numbers. Maybe it would help with adoption.


In SemVer, having the first number be zero means that the project is still in unstable development.


Are those libraries actually not production-ready, as might be implied by the 0.x, or are they just not willing to bump to 1.x for whatever reason? A lot of the projects mentioned in the submission seem to be production-ready despite the version number.


"Production-ready" is a state of mind.

They are widely used in production. Maybe some of them are de-facto production ready. But the developers don't want to make commitments to API stability.

And, Python being Python, it is very hard to statically enforce that you're upholding SemVer promises.


Only you can decide whether something is production-ready for you. The library author doesn't know what you're trying to do, so they can't possibly know whether their library is ready to be used in your production. You could be building an experiment that's going to form part of a rocket payload, or an embedded device that gets surgically implanted into people, is life-critical and very hard to update; or you could be making an online meme generator. Production-ready really does mean completely different things in these cases.


Astonishingly low compared to what? And in what aspects? (Asking unironically.)


I use Rust professionally, and even though it is a newer language with infinitely fewer developers and sponsoring organizations, the tooling and ecosystem are already superior. `cargo` is superior (in DX terms) to whatever nightmare is current in the Python world, `rust-analyzer` is better than PyCharm, and the ecosystem is smaller but more reliable.


I regularly hear that the tooling of Python is bad, but we've had Poetry [1] for a while now and it just works.

Unfortunately, I'm not experienced in Rust so I cannot really compare it to cargo. However, Poetry does everything I would expect from dependency management and packaging/publishing and I've never had problems with it.

Also, there are ruff [2] (ironically written in Rust) and mypy [3] (they recently left 0ver!) for static analysis, and black for code formatting (I really miss an opinionated formatter like this in other languages), etc. They also work just fine. Python tooling doesn't seem bad to me.

[1] https://python-poetry.org/ [2] https://github.com/astral-sh/ruff [3] https://mypy-lang.org/


I'll grant it could be much worse (Common Lisp, OCaml also have significant tooling problems). I haven't used mypy but I do use pyright and it's alright.

The only way I've found to make Python tolerable is 1) lots of dataclasses and 2) using it as a more strongly-typed bash (i.e.: not for building large and complex software objects).


Poetry works, but unfortunately it depends on Python and so it frequently breaks unless you’re very careful with your Python environment management.

Installed it with your system’s native version? Good luck getting it to spawn a venv in a newer version. Used a Homebrew version? When it updates, Poetry breaks. Using asdf? Everything breaks, somehow.

I recently tried pipx and have hope that this will persist.


True, that's why they recommend installing Poetry either via their installer or using pipx, as you did.

pipx and pip's externally-managed-environment should help mitigate a lot of the broken environment issues. I use them too.


There is a new generation of Python tooling that is very high quality.

`rye` is the equivalent of `cargo`

`ruff` is the equivalent of `clippy`

Both are single-purpose, highly functional and blazingly fast. Both are written in Rust, actually :)


Throwing out the unpopular opinion here: PHP! For all the hate it gets, the package ecosystem is really great. Libraries follow Semver (because the package manager, composer, requires it), quality is usually high, even for less widely used packages, and compatibility is taken seriously.



