spinningslate's comments | Hacker News

Related: Michael Kennedy moved TalkPython [0] hosting to Hetzner in 2024. There's a blog post about the move [1] and a follow-up after Hetzner changed some pricing policies [2].

He's also just released a book on running production Python apps at scale [3]. I haven't read it yet, though I'd assume the move gets covered there in more detail too.

--

[0] https://talkpython.fm/

[1] https://talkpython.fm/blog/posts/we-have-moved-to-hetzner/

[2] https://talkpython.fm/blog/posts/update-on-hetzner-changes-p...

[3] https://talkpython.fm/books/python-in-production


Yes, though perhaps stating the obvious: it depends on what they do with it.

Ladybird currently has 8 full-time devs [1] and is making impressive progress on delivering a browser from scratch. Wise investment in small, focused, capable teams can go a long way if they're not chasing VC-driven Unicorn status (or in stasis as a Google anti-trust diversion).

That's not challenging your point though: in the face of competing budgets at US tech giants, EUR17Mn still barely registers above noise level. Nevertheless, it's a start. We can only hope it grows and doesn't get shut down by political lobbying from the aforementioned US behemoths. A modest budget might actually help there - not yet big enough to cause concern to incumbents.

[1]: https://ladybird.org/


What is the point of a new browser engine? What will be the advantage over WebKit/Blink/Gecko?

Sure it “isn’t monetized”, but nothing stops you from making non-monetized forks of Chromium or Firefox. And nothing stops a company from forking Ladybird and monetizing it, either.


Hopefully a new, independent voice on questions of platform features, development and future direction.

Google could do pretty much anything with the platform if it were not for Apple and iOS. And that's a big "if": if the two of them align on something, it will make it into the platform.

Firefox, unfortunately, seems to be infected by Silicon Valley people who are quite obedient to the status quo.

Ladybird is at least developed by people from all around the world.


> The EU law is fine

Kind of. The intent is good and the wording disallows some of the dark patterns. The challenge is that it stands square in the path of the adtech surveillance behemoths. That we ended up with the cesspit of cookie banners is the result of an (almost) immovable object meeting an (almost) irresistible force. There was simply no way that Google, Facebook et al were ever going to comply with the intent of the law: it's their business not to.

The only way we might have got a better outcome would have been for the EU to respond quickly and say "nope, cookie banners aren't compliant with the law". That would have been incredibly difficult to do in practice. You can bet your Bay Area mortgage that Big Tech will have had legions of smart lawyers poring over how to comply with the letter whilst completely ignoring the intent.


It was never intended to be "enforced":

> The standard, developed in 1994, relies on voluntary compliance [0]

It was conceived in a world with an expectation of collectively respectful behaviour: specifically that search crawlers could swamp "average Joe's" site but shouldn't.

We're in a different world now but companies still have a choice. Some do still respect it... and then there's Meta, OpenAI and such. Communities only work when people are willing to respect community rules, not have compliance imposed on them.

It then becomes an arms race: a reasonable response from average Joe is "well, OK, I'll allow anyone but [Meta|OpenAI|...] to access my site". Fine in theory, difficult in practice (rough sketch below):

1. Block IP addresses for the offending bots --> bots run from obfuscated addresses

2. Block the bot user agent --> bots lie about UA.

...and so on.
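For illustration, here's roughly what option 2 looks like as a bit of WSGI middleware in Python. It's a sketch only: the user-agent substrings are examples, and as the list above notes, it falls apart as soon as a bot stops identifying itself.

    # Sketch of user-agent blocking as WSGI middleware.
    # The substrings below are examples; crawlers can (and do) lie about their UA.
    BLOCKED_UA_SUBSTRINGS = ("GPTBot", "meta-externalagent", "CCBot")

    def block_bots(app):
        def middleware(environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "")
            if any(s.lower() in ua.lower() for s in BLOCKED_UA_SUBSTRINGS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Crawling not permitted\n"]
            return app(environ, start_response)
        return middleware

    def site(environ, start_response):
        # Stand-in for average Joe's actual site.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello\n"]

    application = block_bots(site)  # serve with any WSGI server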

[0]: https://en.wikipedia.org/wiki/Robots.txt


Thanks for the info. However, people seem to think that robots.txt will protect them, when it was created for another world, as you nicely put it. I guess Nepenthes-like tools will become more common in the future, now that the tragedy of the commons has entered the digital domain.


As a dyed-in-the-wool print-debugging advocate, and a Gleam-curious Erlang/BEAM enthusiast, this is very interesting to me.

Thanks for all your work, great to see how well the language and tooling are maturing.


print debugging is the best debugging <3 Thank you for the kind comment!!


Nah it's just the easiest and most reliable way. Usually anyway; sometimes you have extreme timing or space constraints and can't even use that. On microcontrollers I sometimes have to resort to GPIO debug outputs and I've worked on USB audio drivers where printf isn't an option.


hello fellow print debugging enjoyer, rejoice!


Indeed, for me, IDE debuggers became useless when we started writing multi-threaded programs.

Printf is the only way to reliably debug multi-threaded programs.
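To give a flavour of what that looks like in practice, here's a minimal Python sketch (my own illustration, nothing more): tag each line with a timestamp and the thread name, and flush immediately so buffering doesn't reorder the output.

    import threading
    import time

    def log(msg):
        # Poor man's trace line: timestamp + thread name + message, flushed immediately.
        print(f"{time.monotonic():.6f} [{threading.current_thread().name}] {msg}", flush=True)

    def worker(n):
        log(f"start job {n}")
        time.sleep(0.1 * n)  # stand-in for real work
        log(f"done job {n}")

    threads = [threading.Thread(target=worker, args=(i,), name=f"worker-{i}") for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The interleaving of the log lines is often all you need to see where two threads trip over each other.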


> Those who care very deeply about very tight privacy

> that has enough privacy to be sustainable

These are the key phrases. Mozilla has hitched its wagon to advertising. Behind all the bluster of the last week, the underlying direction is clear. They bought Anonym [0], and Ajit Varma, the new VP of Product for Firefox and the source of the updates, is ex-Meta. It's reasonable to assume that he's there, in part, because of his advertising expertise.

Some will see Anonym's "privacy-powered advertising" as "enough privacy" and the only viable way to sustain Firefox without Google's annual cash injection.

Others won't buy that, believing that a browser can be built without relying on advertising. Ladybird is taking this approach - so we'll find out.

> If Firefox’s market share dips any lower website makers won’t support it

This is the risk the exec team must know they've taken. Specifically: what proportion of the current Firefox user base exists because of the historic pro-privacy stance, and what percentage of that will leave because of the advertising-based future?

[0] https://www.anonymco.com/

--

EDIT: added missing reference


> Others won't buy that, believing that a browser can be built without relying on advertising. Ladybird is taking this approach - so we'll find out.

I'm afraid that we'll find out indeed and end up with no Ladybird and no Firefox either.


What you do will, in part, depend on how you feel about 2 things:

1. Mozilla is now an advertising business - see e.g. links in this El Reg post [0].

2. How you feel about the alternatives.

Behind the PR bluffing of the last few days, #1 is clear. Mozilla has hitched the wagon to advertising.

There's unlikely to be a single good answer to #2. All the alternatives have compromises: Vivaldi is Chromium-based and has some closed-source code; Brave has crypto and Eich's political views (and is also Chromium-based); the various Firefox forks (LibreWolf, Pale Moon, Waterfox, ...) all have questions over their sustainability.

Perhaps the most promising is Ladybird, but it's a good way off yet.

Let's hope we're near the bottom of the enshittification curve and there are positives on the horizon somewhere.

[0]: https://www.theregister.com/2025/03/02/mozilla_introduces_te...


"When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox."


The US firearm mortality rate was 5x that of the nearest high-income countries in 2019 [0]. The US had 120 firearms per 100 people in 2018 with 80% of all homicides being gun-related [1].

Those statistics may not be wholly attributable to differences in gun laws but it seems a stretch to suggest they're unrelated.

[0] https://www.linkedin.com/pulse/us-midst-public-health-crisis...

[1] https://www.bbc.co.uk/news/world-us-canada-41488081


thank you for writing this.

I cut my teeth on OS/2 in the early 90s, where using threads and processes to handle concurrent tasks was the recommended programming model. It was well supported by the OS, with a comprehensive API for process/thread creation, deletion and inter-task communication. It was a very clear mental model: put each sequential flow of operations in its own process/thread, and let the operating system deal with scheduling - including pausing tasks that were blocked on I/O.

My next encounter was Windows 3, with its event loop and cooperative multi-tasking. Whilst the new model was interesting, I was perplexed by needing to interleave my domain code with manual decisions on scheduling. It felt haphazard and unsatisfactory that the OS didn't handle scheduling for me. It made me appreciate more the benefits of OS-provided pre-emptive multi-tasking.

The contrast in models was stark: pre-emptive multi-tasking was so obviously better. And so it proved: NT bestowed it on Windows, and NeXT did the same for the Mac.

Which brings us to today. I feel like I'm going through Groundhog Day with the renaissance of cooperative multi-tasking: promises, async/await and such. There's another topic today [0] that illustrates the challenges of attempting to perform actions concurrently in JavaScript. It brought back all the perplexity and haphazard scheduling decisions from my Windows 3 days.

As you note:

> Of course, context switching between different tasks is not free, and event loops have frequently been able to provide higher efficiency.

This is indeed true: having an OS or language runtime manage scheduling does incur an overhead. And, indeed, there are benchmarks [1] that can be interpreted as illustrating the performance benefits of cooperative over pre-emptive multitasking.

That may be true in isolation, but it inevitably places the scheduling burden back on the application developer. Concurrent sequences of application-domain operations - with the OS/runtime scheduling them - seem like a better division of responsibility.
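To make that scheduling burden concrete, here's a minimal Python/asyncio sketch (my own illustration, not from the article). A cooperative runtime only switches tasks at await points, so a single synchronous call starves every other task; a pre-emptive scheduler would simply carve the time up for you.

    import asyncio
    import time

    async def ticker():
        # Should tick every 100 ms, if the event loop gets a chance to run it.
        for i in range(5):
            print("tick", i)
            await asyncio.sleep(0.1)

    async def blocking_work():
        # A synchronous call: the loop cannot pre-empt it, so ticker() stalls.
        time.sleep(0.5)
        print("blocking work done")

    async def cooperative_work():
        # The same delay expressed cooperatively: ticker() keeps running.
        await asyncio.sleep(0.5)
        print("cooperative work done")

    async def main():
        await asyncio.gather(ticker(), blocking_work())     # most ticks arrive in a burst after the blocking call
        await asyncio.gather(ticker(), cooperative_work())  # ticks arrive on schedule

    asyncio.run(main())

The application code has to know which calls block and route around them; with pre-emptive threads, that decision belongs to the scheduler.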

[0]: https://news.ycombinator.com/item?id=42592224

[1]: https://hez2010.github.io/async-runtimes-benchmarks-2024/tak...


Did you ever use SOM?

To this day it still seems it had a much better approach to component development and related tooling than even the COM reboot that WinRT offers.


Yes! Fond memories. I put it firmly in the Betamax category: superior technology that lost out for political/marketing reasons.


For those curious about SOM: OS/2 Technical Library, System Object Model Guide and Reference

https://archive.org/details/os2-2.0-som-1991


What makes me angriest about the current async propaganda... and I use the term deliberately to distinguish it from calm discussions about relative engineering tradeoffs, which is a different discussion... is the idea that it started with Node.

Somehow we collectively took all the incredible experience with cooperative multitasking gathered over literally decades prior to Node and just chucked it in the trash can and had to start over at Day Zero re-learning how to use it.

This is particularly pernicious because the major issue with async is that it scales more poorly than threads, due to the increasing design complexity and the ever-increasing chances that the various implicit requirements that each async task has for the behavior of other tasks in the system will conflict with each other. You have to build systems of a certain size before it reveals its true colors. By then it's too late to change those systems.


I would frame it a bit differently. Async scales very elegantly if and only if your entire software stack is purpose-built for async.

The mistake most people are making these days is mixing paradigms within the same thread of execution, sprinkling async throughout explicitly or implicitly synchronous architectures. There are deep architectural conflicts between synchronous and asynchronous designs, and trying to use both at the same time in the same thread is a recipe for complicated code that never quite works right.

If you are going to use async, you have to commit to it with everything that entails if you want it to work well, but most developers don't want to do that.
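A small Python sketch of what "mixing paradigms" ends up looking like, and the commitment that avoids it (illustrative only; legacy_fetch stands in for any synchronous library call you don't control):

    import asyncio
    import time

    def legacy_fetch(url):
        # Stand-in for a synchronous library call.
        time.sleep(0.3)
        return f"data from {url}"

    async def mixed(url):
        # Mixing paradigms: this blocks the whole event loop for 300 ms per call.
        return legacy_fetch(url)

    async def committed(url):
        # Committing to async: push the sync call onto a worker thread so the
        # loop keeps servicing other tasks while we wait. (Python 3.9+)
        return await asyncio.to_thread(legacy_fetch, url)

    async def main():
        urls = [f"https://example.com/{i}" for i in range(3)]
        print(await asyncio.gather(*(committed(u) for u in urls)))

    asyncio.run(main())

Every seam like that between sync and async code is exactly where the complexity tends to accumulate.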


This is actually a major issue in the LLM wrapper space. Building things like agents (which I think are insanely overhyped and I am so out on, but won’t elaborate on), usually in Python, where you are making requests that might take 1-5 seconds to complete, with dependencies between responses, you basically need expert-level async knowledge to build anything interesting. For example, say you want two agents talking to each other and “thinking” independently in the same single-threaded Python process. You need to write your code in such a way that one agent thinking (making a multi-second call to an LLM) does not block the other from thinking, but at the same time, when the agents talk to each other, they shouldn’t talk over each other. Now imagine you have n of these agents in the same program, say behind an async endpoint on a FastAPI server. It gets complicated quick.
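The basic shape of the fix usually ends up looking something like the sketch below (a toy example of my own; think() just stands in for the multi-second LLM call): each agent awaits its own calls so the others keep running, while a shared lock serialises the "speaking".

    import asyncio
    import random

    async def think(name):
        # Stand-in for a multi-second LLM call; awaiting it lets other agents run.
        await asyncio.sleep(random.uniform(1, 3))
        return f"{name}'s thought"

    async def agent(name, speak_lock, turns=3):
        for _ in range(turns):
            thought = await think(name)   # agents "think" concurrently
            async with speak_lock:        # but only one "speaks" at a time
                print(f"{name} says: {thought}")

    async def main():
        speak_lock = asyncio.Lock()
        await asyncio.gather(*(agent(f"agent-{i}", speak_lock) for i in range(3)))

    asyncio.run(main())

And that's the easy version: add dependencies between responses, cancellation, and per-request state behind a FastAPI endpoint, and the coordination code grows fast.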


It's also unnecessary for virtually all actual systems today.

The systems that can potentially benefit from async/await are a tiny subset of what we build. The rest just don't even have the problem that async/await purports to solve, never mind if it actually manages to solve it.

