>sharing your code is even pain with colleagues even if they are using the same operating system, mainly because the Python requirement file doesn't pin dependencies,
wat? Pretty sure you can use == in requirements.txt
Also, it's very possible, and quite easy, to just include the code for the library in your package, which effectively "locks in" the version. We did this all the time when building AWS lambda deployments.
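For anyone who hasn't seen it, pinning is just the == specifier in requirements.txt (package names and versions here are purely illustrative):

requests==2.28.1
boto3==1.26.0

And for the Lambda-style vendoring, something like `pip install -r requirements.txt -t ./package` copies the dependency code into a directory you can zip up alongside your own code.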
This comment section itself clearly shows how crazy dependency and environment management is in Python. In this thread alone, we've been told to use...
- poetry
- "Just pin the dependencies and use Docker"
- pip freeze
- Vendoring in dependency code
- pipreqs
- virtualenv
This is simply a mess and it's handled much better in other languages. I manage a small agency team and there are some weeks where I feel like we need a full-time devops person to just help resolve environment issues with Python projects around the team.
Keep in mind that Python is 31 years old (it's even older than Java); it was created around the same time as the World Wide Web. So it started when no one even knew they would need dependency management, and it evolved over time: from people posting packages on their home pages, to a central website, to what we now call PyPI. The tooling and the way of packaging code evolved similarly.
What you described are multiple tools that also target different areas:
> - poetry
from what you listed this seems like the only tool that actually takes care of dependency management
> - "Just pin the dependencies and use Docker"
this is the standard fallback for all languages when people are lazy and don't want to figure out how to handle the dependencies
> - pip freeze
all this does is list the currently installed packages in a form that can be automatically read by pip
> - Vendoring in dependency code
this again is just an approach that applies to all languages, and it is still necessary even with robust dependency management, as there are some cases where bundling everything together is preferred
> - pipreqs
this is just a tool that scans your code and tells you what dependencies you are using. You are really lost if you need a tool to tell you what packages your application is using, but I suppose it can be useful for one-offs if you inherit some Python code that wasn't using any dependency management.
> - virtualenv
this is just a method to have dependencies installed locally in the project directory instead of per system. It was created especially for development (although it can be used for deployment as well) as people started working on multiple services with different dependencies. It's now included in Python, so it's more like a feature of the language.
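For reference, the built-in flavour is just a couple of commands (assuming a Unix-ish shell):

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

From then on everything pip installs lands inside .venv/ instead of the system site-packages.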
Being 31 years old doesn't preclude having a decent, official and reproducible way of installing packages in 2022. That's just a bad excuse to justify subpar package managers and terrible governance around this problem.
Package management is pretty much a solved problem, no matter how old your language is. It smells to an outsider like me like a lot of bike-shedding and not enough pragmatism is going on in Python land over this issue. Has a new BDFL stepped up after Guido left?
But Python does have a decent and reproducible way of installing packages. The problem Python has is that things evolved over time, so you can find plenty of outdated information on the net. There are also a lot of blogs and articles with bad practices, most written by people who just got something working.
I think a lot of the issues with packaging are also, ironically, because of the PyPA, which is supposed to work on a standard, but in reality, instead of embracing and promoting something that works, they just push half-assed solutions because the author is one of the members. Kind of like how they were pushing the failed Pipenv "for humans". Poetry seems to be generally popular and devs are happy with it, so of course PyPA started pushing their own Hatch project, because Python packaging was finally getting too straightforward.
I think Python would benefit as a whole if PyPA was just dissolved.
Poetry has been the one I've settled on as well. It "Just Works™" for everything I've used it for thus far, and it's even been easy to convert older methods I've tried to the Poetry way of doing things.
I do see that in their repo[1] they use a non-standard way to build the package. They use Bazel, but that's Google for you; they never do things the way everyone else does. I'm not sure why this is a Python problem rather than a package problem.
They have tons of open issues around building: [2]
Yup. Java is 27 years old and has a splendid package management system (Maven Central), which is so well designed that you can use it from two extremely different tools (Maven and Gradle).
This really isn't a good argument though: it's an extra, extremely specific use case for assignment that looks visually very similar.
And worse, it affects code maintainability - if you need that assignment higher up, you're now editing the if statement, adding an assignment, plus whatever your interstitial code is.
Python doesn't have block scoping so the argument for it is weak.
It's not an extremely specific use case, unless you consider using a variable as the condition of an if statement or loop extremely specific.
> And worse, it affects code maintainability - if you need that assignment higher up, you're now editing the if statement, adding an assignment, plus whatever your interstitial code is.
How is that different than variables declared without the walrus operator? If you declare a variable with the walrus operator and decide to move its declaration you can still continue to reference that variable in the same spot, just like any other variable. Do you have an example you can share to demonstrate this? I'm not sure I understand what you mean.
> Python doesn't have block scoping so the argument for it is weak.
The walrus operator is another way to define variables, not a change to how they behave. It's just another addition to the "pythonic" way of coding. It's helped me write more concise and even clearer code. I suggest reading the Effective Python link I provided for some examples of how you can benefit from it.
# some other code
if determined_value := some_function_call():
    do_action(determined_value)
and then I change it to this:
# some other code
determined_value = some_function_call()
logger.info("Determined value was %s", determined_value)
validate(determined_value)
if determined_value:
    do_action(determined_value)
and computing determined_value is a reasonably expensive operation (at the very least I would never want to redundantly do it twice) - then in this case my diff for this looks like:
--- <unnamed>
+++ <unnamed>
@@ -1,3 +1,6 @@
 # some other code
-if determined_value := some_function_call():
+determined_value = some_function_call()
+logger.info("Determined value was %s", determined_value)
+validate(determined_value)
+if determined_value:
     do_action(determined_value)
whereas if I wrote it without walrus originally:
--- <unnamed>
+++ <unnamed>
@@ -1,4 +1,6 @@
 # some other code
 determined_value = some_function_call()
+logger.info("Determined value was %s", determined_value)
+validate(determined_value)
 if determined_value:
     do_action(determined_value)
then the diff is easier to read, and the intent is clearer because diff can simply infer that what's happening is the semantic addition of two lines.
Code is read more than it's written, and changed more than it's originally created, and making the change case clearer makes sense.
Your issue is with the readability of the diff? That is so trivial. You're trying to find anything to complain about at this point. How about just look at the code? You should be doing that anyway.
Diffs are the predominant way people relate to code changes via PRs. It is standard practice to restructure patch sets to produce a set of easy to read to changes which explain what is happening - what "was" and what "will be".
An example where I have wanted this many times before it existed is in something like:
while (n := s.read(buffer)) != 0:
    # do something with the first n bytes of buffer
Without the walrus operator you either have to duplicate the read operation before the loop and at the end of the loop, or use while True with a break if n is zero. Both of which are ugly IMO.
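For comparison, roughly what the two non-walrus versions look like (keeping the parent's read-into-buffer pseudocode; process() is just a placeholder):

# duplicate the read before the loop and at the end of it
n = s.read(buffer)
while n != 0:
    process(buffer[:n])
    n = s.read(buffer)

# or use while True with an explicit break
while True:
    n = s.read(buffer)
    if n == 0:
        break
    process(buffer[:n])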
I used to think this and rage at the current state of Python dependency management. Then I just buckled down and learned the various tools. It's honestly fine.
> all this does is list the currently installed packages in a form that can be automatically read by pip
Are you referring to version specifiers[1] being optional or is there something more to versions that I don't understand? PEP 440 is a wall of text, maybe I should get around to reading it sometime.
This whole chain started with someone pointing out the author doesn't seem to realize you can pin versions[1]. I'm just confused how people seemed to end up questioning what pip freeze does.
I tried to say that, of the tools mentioned, only poetry can really be called dependency management.
The other tools are used for different purposes, though perhaps they could be used as a piece of package management in some way. Docker and vendoring aren't specific to Python; they apply even to Go.
Sometimes I feel people are using Python very differently than me. I just use pip freeze and virtualenv (these are Python basics, not some exotic tools) and I feel it works great.
Granted, you don't get a nice executable, but it's still miles ahead of C++ (people literally put their code into header files so you don't have to link to a library), and even of modern languages like Rust (stuff is always broken, or I have some incompatible version, and even when it builds it doesn't work).
By the way if you're a Python user, Nim is worth checking out. It's compiled, fast and very low fuss kind of language that looks a lot like Python.
> and even modern languages like rust (stuff is always broken, or I have some incompatible version, even when it builds it doesn't work)
Been working on and off with Rust for the last 3 years, never happened to me once -- with the exception of the Tokio async runtime that has breaking changes between versions. Everything else always worked on the first try (checked with tests, too).
Comparing Python with C++ doesn't do your argument any favours, and neither does stretching a single Rust accident to mean the whole ecosystem is bad.
This is consistent with my experience. Semantic versioning is very very widely used in the Rust ecosystem, so you're not looking at breaking changes unless you select a different major version (or different minor version, for 0.x crates) - which you have to do manually, cargo will only automatically update dependencies to versions which semver specifies should be compatible.
For crates that don't follow semver (which I'm fairly certain I've encountered zero times) you can pin a specific exact version.
When I was a Python dev, I never saw that happen in ten years or so of work. Pip freeze and virtualenv just worked for me.
I will say, though, that this only accounts for times where you’re not upgrading dependencies. Where I’ve always run into issues in Python was when I decided to upgrade a dependency and eventually trigger some impossible mess.
Piptools [1] resolves this by having a requirements.in file where you specify your top level dependencies, doing the lookup and merging of dependency versions and then generating a requirements.txt lock file. Honestly it’s the easiest and least complex of Python’s dependency tools that just gets out of your way instead of mandating a totally separate workflow a la Poetry.
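Roughly, the flow looks like this (assuming pip-tools is installed in the venv; requirements.in holds only your direct deps, e.g. "requests" or "sqlalchemy>=1.4"):

pip-compile requirements.in    # resolves everything and writes a fully pinned requirements.txt
pip-sync requirements.txt      # makes the environment match the pinned file exactly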
It's really not. I've had a much, much worse experience with Python than Elixir / Go / Node for various reasons: lots of different tools rather than one blessed solution, people pinning specific versions in requirements (2.1.2) rather than ranges (~> 2.1), dependency resolution never finishing, pip-tools being broken 4 times by new pip releases throughout a project (I kept track)...
In Elixir I can do `mix hex.outdated` in any project, no matter who wrote it, and it'll tell me very quickly what's safe to update and link to a diff of all the code changes. It's night and day.
Thankfully, it's getting gradually better with poetry, but it's still quite clunky compared to what you get elsewhere. I noticed lately for instance that the silent flag is broken, and there's apparently no way to prevent it from spamming 10k lines of progress bars in the CI logs. There's an issue on Github, lost in a sea of 916 other open issues...
As soon as you take 2 dependencies in any language, there's a chance you will not be able to upgrade both of them to latest versions because somewhere in the two dependency subgraphs there is a conflict. There's no magic to avoid this, though tooling can help find potentially working versions (at least by declaration). It's often the case that you don't encounter conflicts in Python or other languages, but I don't imagine that Go is immune.
I've used npm but am not familiar with these kinds of details of it. There would seem to be some potential pitfalls, such as two libraries accessing a single system resource (a config file, a system socket, etc.). I will take a look into this though. Thanks.
npm works around some problems like this with a concept of "peer dependencies" which are dependencies that can only be depended on once. The typical dependency, though, is scoped to the package that requires it.
Rust can include different versions of the same library (crate) in a single project just fine. As long as those are private dependencies, no conflicts would happen. A conflict would happen only if two libraries pass around values of a shared type, but each wanted a different version of the crate that defines the type.
I have rarely encountered issues in rust. Most rust crates stick to semver so you know when there will be a breaking change. My rust experience with Cargo can only be described as problem free(though I only develop for x86 linux).
As for pip freeze and virtualenv, things start to fall apart especially quickly when you require various C/C++ dependencies (which, in various parts of the ecosystem, is a lot) as well as different Python versions (if you are supporting any kind of legacy software). This is also assuming other people working on the same project have the same Python, yadda yadda, the list goes on; it's not great.
> pip freeze and virtualenv things start to fall apart especially quickly when you require various C/C++ dependencies
Yes 100 times. That can be incredibly frustrating. During the last year I've used a large (and rapidly evolving) C++ project (with many foss C/C++ dependencies) with Python bindings. We've collectively wasted sooo many hours on dependency issues in the team.
Long compilation times contribute considerably to slow down the feedback loop when debugging the issues.
Wat? What dependencies? I have right now 100 separate Python projects which each have their own venv, their own isolated dependencies, and their own isolated copy of Python itself, and my code dir doesn't even crack the top 10 in hard drive space.
No, saying "one common library is big" isn't a good way to show that "Python dependencies are big", which is what the initial claim was. Most libraries are tiny, and there's a very big one because it does many things. If Tensorflow were in JS it wouldn't be any smaller.
Yup. PyEnv for installing various versions. /path/to/pyenv/version/bin/python -m venv venv. Activate the venv (I made "av" and "dv" aliases in my .zshrc). Done, proceed as usual.
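Spelled out a bit (the version number and alias definitions are just examples, adjust to taste):

pyenv install 3.10.6
~/.pyenv/versions/3.10.6/bin/python -m venv venv
# in .zshrc, something like:
alias av='source venv/bin/activate'
alias dv='deactivate'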
You can get that by bundling up your venv. When you install a package in a venv, it installs it into that venv rather than the system. As far as the venv is concerned, it is the system Python. Unfortunately, passing around a venv can be problematic, say between Mac and Linux, or between different architectures when binaries are involved.
Sounds like someone has never been asked to clone and run software targeting Python 3.x when their system-installed Python is 3.y and the two are incompatible.
Sounds like someone is making assumptions on the way I work :) As a matter of fact I have and the solution is pyenv. Node has a similar utility called n.
Now try installing tensorflow. Treat yourself to ice cream if you get it to install without having to reinstall Linux and without borking the active project you're on.
And unless things have gotten a lot better in the 2 years since I last did `pip install numpy` on ARM, prepare for a very long wait because you'll be building it from source.
The major issues you'll see involve library version mismatches. It's a very good idea to use venv with these tools since there are often mismatches between projects.
Tensorflow sometimes is pinned to Nvidia drivers, and protobuf. And I think it has to be system level unless you reaaaaally want to fiddle with the internals.
The core problem with Python dependencies is denial. There are tons of people who make excuses for a system that is so bad Randall Munroe declared his dependencies a “superfund site” years ago. In a good ecosystem, that comic would have prompted change. Instead, it just prompted a lot of “works for me comments”. Monads are just monoids in the category of endofunctors, and C just requires you to free memory at the right time. Python dependencies just work as long as you do a thing I’ve never seen my colleagues do at any job I’ve worked at.
That's half of why from 2017 to 2021 I had a yearly "uninstall Anaconda and start fresh" routine. The other half is because I'd eventually corrupt my environments and have no choice but to start over.
Are you using conda-forge? Solving from over 6TBs of packages can take quite a while. Conda-forge builds everything. This isn't a criticism, but because of that the number of packages is massive.
"You can't use the big channel with all the packages because it has all the packages" isn't an exoneration, it's an indictment.
To answer your question: yes, we were using conda-forge, and then when it stopped building we moved to a mix of conda and a critical subchannel, and then a major GIS library broke on conda and stayed that way for months so we threw in the towel and just used pip + a few touch-up scripts. Now that everyone else has followed suit, pip is the place where things "just work" so now we just use pip, no touchups required.
Is it a mess? Yes. But, is the problem to be solved perhaps much simpler "in other languages"? Do you interface with C++ libraries, system-managed dependencies, and perhaps your GPU in these other languages? Or are all your dependencies purely coded in these other languages, making everything simpler?
Of course the answer to these questions could be anything but to me it feels like attacks on Python's package management are usually cheap shots on a much much more complicated problem domain than the "complainers" are typically aware of.
Or the "complainers" work with Rust and Elixir and giggle at Python's last-century dependency-management woes, while they run a command or two and can upgrade and/or pin their dependencies and put that in version control and have builds identical [to those on their dev machines] in their CI/CD environment.
¯\_(ツ)_/¯
Your comment hints that you are feeling personally attacked when Python is criticized. Friendly unsolicited advice: don't do that, it's not healthy for you.
Python is a relic. Its popularity and integration with super-strong C/C++ libraries has been carrying it for at least the last 5 years, if not 10. There's no mystery: it's a network effect. Quality is irrelevant when something is popular.
And yes I used Python. Hated it every time. I guess I have to thank Python for learning bash scripting well. I still ended up wasting less time.
It's a bit ironic to pick someone up on "taking things personally upon criticism", then proceed to display a deep, manichean, unfounded hatred for a language that, despite its numerous flaws, remains a popular and useful tool.
Hanging people at dawn was popular as well; people even brought their kids to the event and it happened regularly. Popularity says nothing about quality or even viability.
Use Python if it's useful for you, obviously. To me though the writing is on the wall -- it's on its loooong and painful (due to people being in denial) way out.
EDIT: I don't "hate"; it was a figure of speech. Our work has no place for such emotions. I simply get "sick of" (read: become weary of) something being preached as good when it clearly is not, at least in purely technical terms. And the "hate" is not at all unfounded. Choosing to ignore what doesn't conform to your view is not an argument.
> Your comment hints that you are feeling personally attacked when Python is criticized.
I am not feeling personally attacked (I am not married to Python), I am mostly just tired of reading the same unproductive type of complaints over and over again. This attitude is not unique to Python's situation, but actually is typical to our industry. It makes me want to find a different job, on some days.
The community is trying to improve the situation but there is no way to erase Python's history. So it's always going to continue to look messy if you keep looking back. The complaint is unproductive, or in other words, not constructive.
I can agree with your comment. What's missing is the possibility to, you know, just jump ship.
You might not be married to Python but it sure looks that way for many others. I switched main languages no less than 4 times in my career and each time it was an objective improvement.
The thing that kind of makes me look down on other programmers is them refusing to move on.
For nuance and to address your point, I have worked with PHP for about six years, .NET for five, C++ for two, and Python for seven.
I still dabble in all of them. Who knows when I will move on to the next. Rust looks nice. I tried go.
But they do not yet provide any of the tools/libraries I need for my work. That's how I've always selected my programming language.
So I would first need to invent the universe before I can create valuable things. Instead I will just wait until their ecosystems mature a little more.
I will end the discussion here though. Thanks for the response!
Yes. In elixir, you can install GPU stuff (Nx) with very few problems. Some people have built some really cool tools like burrito, which cross-compile and bundles up the VMs to other architectures. Even before that it's been pretty common to cross-compile from x86 to an arm raspberry pi image in the form of Nerves.
As a rule elixir devs don't do system-level dependencies, probably because of lessons learned from the hellscape that is Python (and Ruby).
Yesterday I onboarded a coworker onto the elixir project, he instinctively put it into a docker container. I laughed and just told him to run it bare (he runs macos, I run Linux). There were 0 problems out of the box except I forgot the npm incantations to load up the frontend libraries.
2) not that I can find for that specific task but the typical strategy is to download (or compile) a binary, drop it into a {project-dependency}[0]-local "private assets" directory, and call out the binary. This is for example how I embed zig pl into elixir (see "zigler") without system-level dependencies. Setting this up is about 40 lines of code.
3) wx is preferred in the ecosystem over qt, but this (and openssl) are the two biggies in terms of "needs system deps", though it's possible to run without wx.
For native graphics, elixir is treading towards glfw, which doesn't have widgets, but from what I hear there are very few if any gotchas in terms of using it.
I bring up cross-compilation, because burrito allows you to cross-compile natively implemented code, e.g. bcrypt that's in a library. So libraries that need c-FFI typically don't ship binaries, they compile at build time. Burrito binds "the correct" architecture into the c-flags and enables you to cross compile c-FFI stuff, so you don't have a system level dependency.
[0] not that this has happened, but two dependencies in the same project could download different versions of the same binary and not collide with each other.
It's not a mess, people just make it a mess because of the lack of understanding around it, and getting lazy with using a combination of pip install, apt install, and whatever else. Also, the problem is compounded by people using Macs to develop, which have a different way of handling system-wide Python installs via brew, and then trying to port that to Linux.
Even using conda to manage reqs is an absolute nightmare. Did a subreq get updated? Did the author of the library pin that subreq? No? Have fun hunting down which library needs to be downgraded manually
I used a couple of tricks to solve this. First, make conda env export a build step and environment.yml an artifact, so you've got a nice summary of what got installed. Second, nightly builds so you aren't surprised by random package upgrade errors the next time you commit code to your project.
This has indeed been eye opening. We got bit by a dependency problem in which TensorFlow started pulling in an incompatible version of protobuf. After reading these comments, I don't think that pip freeze is quite what we want, but poetry sounds promising. We have a relatively small set of core dependencies, and a bunch of transitive dependencies that just need to work, and which we sometimes need to update for security fixes.
Why do you think that `pip freeze` wouldn't be what you want? (I once had the exact same issue with TF and protobuf and specifying the exact protobuf version I wanted solved it.)
When I tried learning Python, this mess is what turned me off so badly. Python is the first language I ever came across where I felt like Docker was necessary just to keep the mess in a sandbox.
Coming to that from hearing stories that there was supposed to be one way to do everything disenchanted me quickly.
Every article I found suggested different versions of Python, like Anaconda. They all suggested different virtual environments too. Rarely was an explanation given. At the time, the mix of Python 2 vs Python 3 was a mess.
The code itself was okay, but everything around it was a train wreck compared to every other language I’d been using (Go, Java, Ruby, Elixir, even Perl).
I attempted to get into it based on good things I’d heard online, but in the end it just wasn’t my cup of tea.
> This is simply a mess and it's handled much better in other languages.
I don't agree.
The problem is simply that Python encompasses a MUCH larger space with "package management" than most languages. It also has been around long enough to generate fairly deep dependency chains.
As a counterexample, try using rust-analyzer or rust-skia on a Beaglebone Black. Good luck. Whereas, my Python stuff runs flawlessly.
What many newer languages do is precompile the happy paths(x86, x86-64, and arm64) and then hang you out to dry if that's not what you are on.
I agree that it is handled better in many other languages. However, Go has some weird thing with imports going on. When I tried to learn it I just could not import a function from another file. Some env variable was making the program not find the path. Many stackoverflow/reddit threads condescendingly pointed to some setup guide in the official docs which did not fix or explain the situation.
After a few hours or so of not making much progress on AOC day 1, I just gave up and never continued learning Go.
It's not crazy at all. You use requirements.txt to keep track of the dependencies needed for development, and you put the dependencies needed to build and install a package into setup.py where the packaging script can get them.
These are two different things, because they do two different jobs.
Granted, I'm more of a hobbyist than a dev, but I think this is part of the problem that virtualenvs are supposed to help solve. One project (virtualenv) can have numpy==3.2, and another can have numpy==3.1. Maybe I'm naive, but it seems like having one project with multiple versions of numpy being imported/called at various times would be asking for trouble.
The thing you can’t do is solve for this situation.
A depends on B, C
B depends on D==1.24
C depends on D==2.02
There should, in a super ideal world, be no problem with this. You would just have a "linker" that inserts itself into the module loader and presents different module objects when deps ask for "D", but that hasn't happened yet.
Someone else already responded. It's a one-line command.
I never could get poetry to work right; its configs are sort of messy. pip freeze > requirements is built in. The only thing it doesn't pin is the Python version itself.
As explained elsewhere in this thread, the one-line command only generates a lock file. This doesn't manage the dependencies, so if you want to upgrade cool-lib and recalculate all the transitive dependencies so they fit with the rest of your libraries, you cannot, afaik.
This is not actually true. :-) Pip will install transitive deps from a requirements file unless you add the “no deps” flag. Pip freeze doesn’t pin anything. It just dumps stuff into a text file. If it’s a complete list, it has the side effect of pinning, but that’s not guaranteed by pip freeze in any way.
Poetry requires pip in the way that `go mod` requires `go get`, i.e. Poetry allows one to operate at a higher level of abstraction, where it's harder to make mistakes and generally easier to manage your dependency tree.
Sure, this is true for every abstraction: some are more air-tight or leaky than others, but IME Poetry and go mod are fairly solid.
I haven’t had to use pip in a long time. Just like I haven’t had to use `go get` in a while.
I mean it’s not the most rock solid abstraction, but introducing sane package management in an OSS environment with a decade plus of history is very hard. Against that background, they are doing pretty well, IMO.
Kind of. It's intended to be a system package/tool. In that regard you can use, wait for it, yet another python tool - pipx. So instead of installing poetry to a venv or whatever using pip, you can use `pipx run poetry`. Now you have to install pipx...
I'm a huge, long time, python fan. I don't tend to have a lot of problems with dependencies, but it clearly is something that certain situations have problems with.
As I understand the post, the author is saying "It sure is nice to be able to just hand off a go executable and be done with it." And I think we can agree that the Python runtime situation is far from this.
I largely control my work environment, so this isn't a huge issue for me. But I'm right at this very moment converting a "python" to "python3" in a wrapper script because the underlying code got converted to py3 and can't "import configparser". (Actually, I'm removing "python" entirely and adding a shebang.)
I've been looking at writing a small tool in Python and then porting it to Go (I don't know go, so seems like a reasonable way to approach it), because the main target of the app is my developers, who probably don't have the right Python environment set up, and I just want them to be able to use it, not give them a chore.
Also, its very possible, and quite easy to just include the code for the library in your package
This only works if installing on exactly the same os and architecture. It can also make the installer for your quick little command line tool hundreds of megabytes.
That being said packing up the python interpreter and all dependencies is the approach I ended up using when shipping non-trivial python applications in the past.
Yep, I have no idea how they do not even understand the basics of Python dependency management. pip freeze > requirements.txt will do it all for you. No wonder they found Rust too hard.
You can do what you suggest, but it's an operational pain in the ass. You need to maintain two files: the actual requirements.txt and the `pip freeze` one that locks the environment. And you better never `pip install` anything by hand or you'll capture random packages in your frozen file, or else always take care to create your frozen file in a fresh virtualenv. And if you don't want to install your dev packages into the production environment, then you need to maintain two requirements.txt files and two of those frozen files.
The author mentions Poetry which does solve these issues with a nice interface.
`pip freeze > requirements.txt` will generate a lock file. You want a requirements listing as well as a lock file (like rust, poetry, golang, npm, ruby).
There's lots of cases where you wouldn't want to pin your requirements.txt, the main one being if you're authoring a package. You need to leave the versions unpinned, preferably just bound to a major version, allowing some variability for the users of your package in case there's a shared dependency. I have a feeling that's what the author is describing here, because Poetry solves this dilemma by introducing a poetry.lock file which pins the dev versions of all the dependencies, but publishes a package with unpinned deps.
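i.e. the pyproject.toml carries ranges like these (names and versions illustrative):

[tool.poetry.dependencies]
python = "^3.9"
requests = "^2.28"

while poetry.lock records the exact resolved versions (including transitive deps and hashes) used to reproduce the dev environment, and the published package metadata only carries the ranges.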
This is wrong, and OMG all the other 15 answers to this comment are even more wrong.
Pinning a dependency in requirements.txt does not pin its transitive dependencies.
This does NOT mean you should pip freeze everything into requirements.txt, because then how do you distinguish top level dependencies from transitive?
The correct answer is to use a lock file. No third-party tools needed (1). In Python it's called a constraints file and requires an extra flag to use: pip freeze > constraints.txt. Then next time, pip install -r requirements.txt -c constraints.txt.
Oh, and always, always, always use a venv for each project. Globally installed packages are a recipe for disaster.
(1) Poetry and Pipenv are still nice additions, with nicer project declarations in pyproject.toml, and they save you from remembering pip flags or forgetting to activate venvs. But they are not strictly necessary, and it's honestly just 2 extra commands without them.
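To make the split concrete (contents and version numbers purely illustrative):

# requirements.txt -- what the project directly depends on
requests
click>=8.0

# constraints.txt -- pip freeze output, pinning everything including transitive deps
certifi==2022.9.24
charset-normalizer==2.1.1
click==8.1.3
idna==3.4
requests==2.28.1
urllib3==1.26.12

pip install -r requirements.txt -c constraints.txt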
The issue is transitive dependencies. A dependency you pin isn't guaranteed to pin its own dependencies. A bug somewhere in a grandchild dependency can manifest for you even if you have a version pinned but the dependency did not.
It's not automatically a problem but it certainly can become one.
Python has several. It’s overall a good thing. It’s the reason you can package both one-file scripts and huge C++ / Fortran packages, which is the reason it has become a huge language in HPC / ML / AI / astronomy / data science / …
I agree that it makes life more confusing for newbies, though.
something like poetry's approach is the right one here; you need a list of core dependencies (not derived ones), you need a solver for when anything changes to find a new set of viable versions, and you need a lock file of some sort to uniquely & reproducible construct an environment.
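In poetry terms that's roughly:

poetry add requests    # record a top-level dependency and re-solve
poetry lock            # recompute poetry.lock after editing pyproject.toml
poetry install         # recreate the locked environment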
You can, but one of those packages that you depend on will have a loose version spec for one of its dependencies, making your `pip install -r requirements.txt` non-deterministic.
Poetry and Pipenv solve this, though, by pinning all dependencies in a lock file.