
Working with asyncio sucks when all you want is to be able to do some things in the background, possibly concurrently. You have to rewrite the worker code using those stupid async await keywords. It's an obnoxious constraint that completely breaks down when you want to use unaware libraries. The thread model is just a million times easier to use because you don't have to change the code.



Asyncio is designed for things like webservers or UIs where some framework is probably already handling the main event loop. What are you doing where you just want to run something else in the background, and IPC isn't good enough?


Non-blocking HTTP requests are an extremely common need, for instance. Why the hell did we need to reinvent special asyncio-aware request libraries for it? It's absolute madness. Thread pools are much easier to work with.

> where some framework is probably already handling the main event loop

This is both not really true and also irrelevant. When you need a flask (or whatever) request handler to do parallel work, asyncio is still pretty bullshit to use vs threads.


Non-blocking HTTP requests are the bread-and-butter use case for asyncio. Most JS projects are doing something like this, and they don't need to manage threads for it. You want to manage your own thread pool for this, or are you going to spawn and kill a thread every time you make a request?


> Non-blocking HTTP request is the bread and butter use case for asyncio

And the amount of contorting that has to be done for it in Python would be hilarious if it weren't so sad.

> Most JS projects

I don't know what JavaScript does, but I do know that Python is not JavaScript.

> You want to manage your own thread pool for this...

In Python, concurrent.futures.ThreadPoolExecutor is actually nice to use and doesn't require rewriting existing worker code. It's already done, has a clean interface, and was part of the standard library before asyncio was.
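A minimal sketch of what that looks like (the URLs and the body of fetch are placeholders; time.sleep stands in for any blocking network call, e.g. requests.get):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for any blocking call (e.g. requests.get); the worker
    # code needs no async/await changes to run in the pool.
    time.sleep(0.1)
    return f"fetched {url}"

urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch, urls))

print(results)
```

The blocking function stays ordinary synchronous code; only the call site changes.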


ThreadPoolExecutor is the most similar thing to asyncio: it hands out futures, and calling .result() is the equivalent of await. JS even made its own promises implicitly compatible with async/await. I'm mentioning what JS does because you're describing a very common JS use case, and Python isn't all that different.
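A tiny sketch of the analogy (the function name is made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * 2

with ThreadPoolExecutor() as pool:
    fut = pool.submit(work, 21)  # roughly: launching a task
    answer = fut.result()        # roughly: await — blocks until the result is ready

print(answer)  # 42
```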

If you have async stuff happening all over the place, what do you use, a global ThreadPoolExecutor? It's not bad, but a bit more cumbersome and probably less efficient. You're running multiple OS threads that are locking, vs a single-threaded event loop. Gets worse the more long-running blocking calls there are.

Also, I was originally asking about free threads. GIL isn't a problem if you're just waiting on I/O. If you want to compute on multiple cores at once, there's multiprocessing, or more likely you're using stuff like numpy that uses C threads anyway.


> Python isn't all that different

Again, Python's implementation of asyncio does not let you run worker code in the background without explicitly rewriting that worker code to be asyncio-aware. Threads do. They just don't occupy the same space.

> Also, I was originally asking about free threads...there's multiprocessing

Eh, the obvious reason to not want to use separate processes is a desire for some kind of shared state without the cost or burden of IPC. The fact that you suggested multiprocessing.Pool instead of concurrent.futures.ProcessPoolExecutor and asked about manual pool management feels like it tells me a little bit about where your head is at here wrt Python.


Basically true in JS too. You're not supposed to do blocking calls in async code. You also can't "await" an async call inside a non-async func, though you could fire-and-forget it.

Right, but how often does a Python program have complex shared state across threads, rather than some simple fan-out-fan-in, and also need to take advantage of multiple cores?


The primary thing that tripped me up about async/await, specifically in Python, is that the called function does not begin running until you await it. Before that moment, it's just an unstarted coroutine object (built on the same machinery as generators).

To make background jobs, I've used the class-based awaitable approach: start a thread in the constructor, then have the __await__ magic method simply join the thread. Which is a lot of boilerplate to get a little closer to how async works in (at least) JS and C#.
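A rough sketch of that pattern (the class and method names here are illustrative, not a real library):

```python
import asyncio
import threading

class BackgroundJob:
    """Starts the work on a thread immediately; awaiting it joins the thread."""

    def __init__(self, fn, *args):
        self.result = None

        def run():
            self.result = fn(*args)

        self._thread = threading.Thread(target=run)
        self._thread.start()  # work begins here, not at await time

    def __await__(self):
        # Join the thread via the default executor so the event loop
        # isn't blocked while we wait for the thread to finish.
        loop = asyncio.get_running_loop()
        yield from loop.run_in_executor(None, self._thread.join).__await__()
        return self.result

async def main():
    job = BackgroundJob(sum, [1, 2, 3])  # already running in the background
    return await job

print(asyncio.run(main()))  # 6
```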


Rust's version of async/await is the same in that respect, where futures don't do anything until you poll them (e.g., by awaiting them): if you want something to just start right away, you have to call out to the executor you're using, and get it to spawn a new task for it.

Though to be fair, people complain about this in Rust as well. I can't comment much on it myself, since I haven't had any need for concurrent workloads that Rayon (a basic thread-pool library with work stealing) can't handle.


That is a common split in language design decisions. I think the argument for the Python style, where you have to drive it to begin, is that it's more flexible: you can always start it immediately, but you can also delay computation or pass it around, similar to a Haskell thunk.


There is also https://docs.python.org/3/library/asyncio-task.html#eager-ta... if you want your task to start on creation.


I feel you. I know asyncio is "the future", but I usually just want to write a background task, and really hate all the gymnastics I have to do with the color of my functions.


I feel like "asyncio is the future" was invented by the same people who think it's totally normal to switch to a new javascript web framework every 6 months.


JS had an event loop since the start. It's an old concept that Python seems to have lifted, as did Rust. I used Python for a decade and never really liked the way it did threads.


Python's reactor pattern, or event loop as you call it, started with the "Twisted" framework or library. And that was first published in 2003. That's a full 6 years before Node.js was released which I assume was the first time anything event-loopy started happening in the JS world.

I forgot to mention that it came into prominence in the Python world through the Tornado http server library that did the same thing. Slowly over time, more and more language features were added to give native or first-class-citizen support to what a lot of people were doing behind the scenes (in sometimes very contrived abuses of generator functions).


Yeah, this pattern is old. But JS is the only common language (at least today) that went so all-in with it.


I agree, I find Go's way much easier to reason about. It's all just functions.


Ordinary CPython code releases the GIL during blocking I/O. You can do HTTP requests + a thread pool in Python.
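A quick demonstration of that overlap, using time.sleep (which, like a blocking socket read, releases the GIL) as a stand-in for the network call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io(i):
    time.sleep(0.2)  # releases the GIL, just like a blocking socket read
    return f"done {i}"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(blocking_io, range(4)))
elapsed = time.monotonic() - start

# The four 0.2 s waits overlap, so this takes roughly 0.2 s, not 0.8 s.
print(results, round(elapsed, 2))
```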


You don’t? concurrent.futures.ThreadPoolExecutor can get a lot done without touching async code.


I am a big advocate for ThreadPoolExecutor. I'm saying it's superior to asyncio. The person I'm responding to was asking why use threads when you can use asyncio instead.


Ach, I posted before I saw the rest of your thread, apologies.

Totally agree, concurrent.futures strikes a great balance. Enough to get work done, a bit more constrained than threads on their own.

Asyncio is a lot of cud to chew if you just want a background task in an otherwise sync application.


So, in Rust they've had threading since forever and are now hyped about this new toy called async/await (and all the new problems it brings), while in Python they've had async/await and are now excited to see the possibilities of this new toy called threads (and all its problems). That's funny!


Yes? They have different use cases which they are good at.


Python is more so in the same boat as Rust. Python asyncio was relatively recent.


Well, Python had threads already. This is just a slightly different form of them behind the scenes.


Being hyped for <feature other languages have had for years> is totally on-brand for the Rust community.


That sounds more like Golang (generics)


Yeah, I've never liked the async stuff. I've used the existing threading library and it's been fine, for those programs that are blocked on I/O most of the time. The GIL hasn't been a problem; those programs often ran on single-core machines anyway. We would have been better off without the GIL in the first place, but we may be in for headaches by removing it now.



