Astral (astral.sh)
1078 points by hasheddan on April 18, 2023 | hide | past | favorite | 236 comments



I love Ruff and I'm glad that Charlie and the rest of the team are able to work on such tools full-time. I'm also happy to see that the author of Maturin (Rust+Python interop) is involved, as Maturin is a fantastic project with great ease-of-use.

For those who aren't familiar, Ruff is a very fast Python linter that supersedes a variety of tools like isort, flake8, and perhaps eventually Black [1]. My understanding is that since Ruff is an all-in-one tool, it can parse the underlying Python files just once -- and since this is the most expensive phase of these kinds of tools, adding any extra functionality on top (such as linting, formatting, ...) is very cheap. And that's to say nothing of the fact that Ruff is built on Rust, with the usual performance wins that Rust projects tend to have.

I'm a little apprehensive of how development will look given a VC raise with the expectation of VC returns. But for now, I'm delighted and can't wait to see what the team comes up with.

[1]: https://github.com/charliermarsh/ruff/issues/1904


The main reason pylint is slow is that it tries to infer the types of variables, and since it predates type annotations it may go through multiple functions/control paths across multiple files to figure these out. This approach is obviated by type annotations and type checkers.

Ruff is built for speed from the start and doesn't look at type annotations because you should run a type checker alongside.


Just to be sure. Is Ruff faster by lacking type checking?

Also, does Ruff work well on codebase without type hints?


Ruff is definitely faster by lacking type-checking. That requires way more analysis.

Ruff will work fine without type hints.

I believe it rightfully leaves it to mypy for those who want those features.

Mypy transpiles itself to C using mypyc, and it can still take a while to complete when caches get invalidated.


Ruff is faster than many tools which don't do type checking.

Ruff works well regardless of whether you use type hints.


that's dodging the question a little bit? let's say Ruff is faster because it's so good, how much faster than type checking alternatives would this great tool be if it did type checking?


That depends on how you do type checking and, more specifically, type inference. Mypy infers the types of local variables but not across function boundaries, as it expects you to annotate function arguments and return values.

In mypy a variable taking values of two types is an error (unless explicitly annotated), which means you can assume the type from the first assignment statement and then use that assumption every other time the variable is used/assigned. Pylint, in contrast, allows variables to take multiple types and only emits errors when an operation is invalid against one of the types. E.g. `x[0]` is valid for lists and tuples, so you could assign it a list or a tuple depending on some condition.

The number of possibilities it considers quickly multiplies. E.g. if your function contains `if blue_condition: x = blue_action(x)`, that's a doubling of the possibilities, and ten such flags -> 1024 possibilities. (pylint has some heuristics on when to give up.)
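To make that concrete, here is a small illustrative snippet (not from either tool's docs) showing the difference in what the two checkers track:

```python
def first_elem(flag: bool):
    if flag:
        x = [1, 2, 3]   # x is a list on this path
    else:
        x = (1, 2, 3)   # ...and a tuple on this one
    # pylint tracks both possibilities; x[0] is valid for lists
    # and tuples alike, so no error is emitted
    return x[0]

y = [1, 2, 3]           # mypy infers y: list[int] from the first assignment
# y = "hello"           # mypy would reject this reassignment to a new type;
                        # pylint would just add str to the set of types it tracks
print(first_elem(True), first_elem(False))  # prints: 1 1
```

Each guarded reassignment like the one in `first_elem` doubles the set of combinations pylint has to consider across a function.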

The advantage of pylint is that it doesn't require your code to fit a certain style. But nowadays people have type annotations and they prefer to use them even if it means changing their code style a bit here and there.

Other type inferrers - pyright, Jedi, Pyre, pytype, etc. - take different approaches with different tradeoffs.


Without building the tool to compare, it's really hard to tell. In my experience of rewriting some Python and Ruby code into compiled versions, you can expect 10-20x improvements. But it's really dependent both on the code itself and on how you're going to reimplement it - most of the time it doesn't make sense to do perfect 1:1 rewrites.


Even if no VC was involved, how do you make money developing an open source linter?


I think we're going to see a shift in developers' expectations towards the cost of their tools, as a side effect of Copilot & co. If you already pay $100/mo in subscriptions for your IDE, its plugins, and the services it integrates with, then forking out an additional $5 for a linter doesn't look as absurd as it used to...


How did HashiCorp make money (and an eventual billion-dollar-plus IPO!) developing open source tools like Vagrant, Consul, and Terraform...

Clearly the creator of Ruff is going to expand from an open source tool into a company focusing on Python tooling, developer experience, services, etc. There is a market for stuff like that based on PyCharm and JetBrains' success in the space alone.


HashiCorp made a $270 million loss last year, more than 50% of their revenue.


A lot of the money comes from Vault which is open source but only used by larger organizations that want support.


Vault namespacing also requires an enterprise license, so yeah if you're doing anything useful with Vault at scale, you're paying for it.


im sure they have a ton of ideas. my bet is that they might launch something in the CI/CD space. like building python packages/images fast, speeding up python testing or tackling something in ML inference.


They are probably looking for an ecosystem around the tool, like how github built on git.


Support contracts. Custom features built to order. Other commercial products based on the open-source libraries for parsing and analysis. Running a hosted service.

All this while remaining a team of 2-3, hopefully, with all company-running stuff minimized and outsourced.


Telemetry probably. Remember the telemetry in your terminal folks, on here? :cry:

But you might be able to do support contracts, like ansible. Tough road though.


Look at sentry


Sentry requires a platform to work, it's much easier to monetise. I'm not even sure it's possible to self-host?


if it's any consolation, Accel is in my mind one of the "good ones". they have a record of backing "good" devtool companies that manage open source and commercial concerns well - Vercel, Sentry, etc. (i'm sure theres more, those are just the two that i'm closest to)


What are the good VC companies? Not just for dev tools, but in general. Where do you find this information?


just being "in the scene" for a few years :)

tbh i care more about individual investors than the fund brand, but its a fallback whenever meeting someone new

edit: https://overcast.fm/+t_0aYbn-o/12:00 listen to steve krouse talk about how accel encouraged him for val.town even tho he didnt initially believe in it himself. thats not normal investor behavior. look out for people like that, by definition they are rare


I actually cursed Maturin a few years ago, as it refused to compile on OpenBSD and hence broke some lib I planned to use.

Python is becoming a bit too reliant on Rust. Rust is good but, in terms of portability, is just not as mature as C. If your lib relies on Rust, please please please test it on something beyond Linux and Mac.


> please test it on something beyond Linux and Mac

That’s asking a lot. If your platform is poorly supported by Rust, then maybe that’s where your efforts should be directed. If you fix that, lots of interesting stuff beyond one library is unlocked.


Theo de Raadt engaging the Rust community would be a sight to behold :)


I am not aware of him saying anything about Rust beyond this: https://marc.info/?l=openbsd-misc&m=151233345723889&w=2

Which doesn't mean he hasn't, of course!


That email (from Dec 2017) ends with “Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.”. I presume Theo is actually complaining about using more than 3 GB (?) of memory, but still it really shows the different cost-versus-benefit decisions that we all make.


Is it actually reasonable to expect compiler toolchains to work on old or underpowered hardware? Or is the real problem that cross-compilation is still a special case, rather than being the only way compilers work? If it wasn't still normal to depend on your target environment being the same as your build host, would anyone really want to do serious development work directly on their RPi/Gameboy/watch/whatever, just because that is the target runtime environment?


I think latching onto this example misses the crux of his rationale:

> In OpenBSD there is a strict requirement that base builds base. So we cannot replace any base utility, unless the toolchain to build it is in the base.

> Such ecosystems come with incredible costs.

Basically, the cost of adding Rust to the OpenBSD base system currently far outweighs the proposed benefits (reimplementing fileutils), especially considering people will probably not want to pour in the effort needed to rewrite those with strict POSIX compliance (which is another requirement).

He's not saying rust isn't useful but that it wouldn't be a net benefit to have a hard dependency on it in the base system. In the BSD world folks take a very strict and conservative view of what can go in base (and for good reason IMHO).

This is different from the GP comments about just making it possible to use rust programs/libraries in OpenBSD, so we're definitely on a tangent here.


I am wildly guessing that Theo’s beef is more that rust uses a lot of memory (paraphrase: 640kb should be enough for anybody). OpenBSD does integration builds on a variety of different systems, and maybe Theo noticed the OpenBSD/386 build failing due to lack of necessary memory?


That seems a reasonable guess, but it brings me straight back to wondering why such a diversity of build environments is necessary or useful. And my wild guess is that it's mostly because cross compilation is still a second class citizen in most languages today. Though I guess it could be a kind of cultural expectation that you should be able to compile the whole OS on the hardware you're running it on.


My understanding is that the policy of the project is that the base system must not be cross compiled. My understanding of why is that they want to be able to fully bootstrap from base itself.


The Rust compiler runs on i386; the complaint is that it can't compile the Rust compiler, because that uses more memory than is addressable on i386.


That’s what the person was getting at. Does it matter if it can be cross-compiled elsewhere?


It matters to Theo, due to policies created for specific objectives. It does not matter to many other people.


I guess at the end of the day this is all that really matters. If there are specific goals that Theo has around this, then there's no point in me second-guessing whether those are "reasonable"; it's a matter of values and preferences and whatever.

Whereas if this conversation was about something with broader stewardship, like _Linux_, I'd be saying this is silly, you shouldn't be compromising other things just so you can build your RPi kernel on an RPi.


Yes. My recollection is that around that time, llvm and/or rustc itself would sometimes go over that. I don't know off the top of my head what memory usage looks like right now.


> please please please test it on something beyond Linux and Mac

Why? Simply because you use something that even 95% of the unix folk do not?

I can totally see how they wouldn’t be concerned with that. Just as I’m not concerned with making my JS lib work for those weirdos that still run IE 5.5.


>If your lib relies on Rust, please please please test it on something beyond Linux and Mac.

No, thank you, I'll just add Windows. Your choice to use platforms that nobody else does save for a few specialists does not constitute a need on my part to support your favorite. In the same way that if I package for Debian, Ubuntu and RHEL, it's your problem that it's not on Arch or unsupported by your custom Hannah Montana Linux. Feel free to submit PRs though.


I thought Black was an auto-formatter, similar to clang-format, but for Python. Ruff seems to be only doing linting, not formatting. Am I missing something?


Ruff will be getting more autoformat capabilities: see the link in the parent post, and also read the first comment there, from one of Black's maintainers.


Pretty bold to try to replace Flake8 and Black with one tool. I hope they succeed. Would be great to have a tool that does both and is substantially faster.


Indeed.

I do hope they'll expose Ruff as a Python module / API in the future. I'm currently using Black to format Python code that's in-memory (never gets written out to disk).

With Black (as an imported Py module), it's just a matter of passing in a string and getting one back. With Ruff as it is now, I'd have to write that out to disk, spawn a Ruff process, then read the formatted file back in. Do that for many files and the speed advantages disappear; actually, it's slower.


You can mount the file system in memory for that, but I see your point.


Why doesn't having ruff read/write from stdin/stdout, combined with passing the data in with Python's `subprocess.run` and `os.pipe`, work here?

    cat bad.py | ruff check --fix --stdin-filename stdin.py - > good.py
Substituting bad.py and good.py for in-memory os.pipe objects for your data. This still involves spawning a subprocess, which can add up, but you're not having to read/write to disk at least.
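A sketch of that approach in Python: the generic piping helper below is plain stdlib usage, while the exact ruff argument list (shown only in a comment) is the assumed invocation from the command above. The demo uses a stand-in filter so the snippet is self-contained:

```python
import subprocess
import sys

def pipe_through(cmd: list[str], source: str) -> str:
    """Feed `source` to `cmd` on stdin and return its stdout - no temp files."""
    result = subprocess.run(cmd, input=source, capture_output=True, text=True)
    return result.stdout

# with ruff on PATH you would use something like:
#   pipe_through(["ruff", "check", "--fix", "--stdin-filename", "stdin.py", "-"], src)
# stand-in filter for demonstration:
upper = pipe_through(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    "x = 1\n",
)
print(repr(upper))  # prints: 'X = 1\n'
```

This still pays the subprocess-spawn cost per call, which is the commenter's remaining complaint.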


It does work. The tradeoff is:

* run the ruff command many times (with a different input file each time)

* write all the files to "disk" (tmpfs actually, so it doesn't involve disk access) and run ruff once

Currently with black I avoid that by importing it as a module once, so I get the benefits of both (and it's simple to boot).

In other comments here I've read that a person heavily involved in Python-Rust interop is also part of the team, so there's hope they'll expose ruff as a module too and magically whisk my dilemma away :-)


    ruff check . --fix


Ruff is great, but it lacks a lot of rules[1]. As far as I understand, it reads every file in parallel; if that is still the case, there are categories of issues (anything requiring cross-file analysis) it cannot detect. In my case ruff cannot fully replace my existing tools, but it makes for a great companion.

1: https://github.com/charliermarsh/ruff/issues/970


just starting to dabble in Rust, and I've seen PyO3 mentioned a number of times for writing rust bindings. What's the difference between Maturin and PyO3?


PyO3 is the library that enables Python <-> Rust bindings; Maturin is a build tool for packaging PyO3 Rust libraries (which export Python APIs) as Python packages!
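For a rough picture of how the two fit together: a PyO3 crate is typically exposed to pip by pointing `pyproject.toml` at maturin as the PEP 517 build backend. A minimal sketch (the package name is made up):

```toml
# pyproject.toml - maturin builds the Rust crate into a Python wheel
[build-system]
requires = ["maturin>=0.14"]
build-backend = "maturin"

[project]
name = "my_rust_ext"   # hypothetical package name
version = "0.1.0"
```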


thanks!


It's true that being that much faster is interesting, but that's not the only reason.

ruff is just really good.

It has sane defaults, it is easy to use and configure. It can replace not one but multiple tools.

From day one it had few bugs or integration problems.

Charlie Marsh is just a damn good developer, on top of driving a good product vision.

That's rare.

Lots of respect to that.


That's a very good point. We love to talk about speed in dev tools, probably because it's an easy metric to track, but usability and good features are really the key points for making a tool that people want to use and keep using. Too many open source projects neglect the product angle in favor of the technical.


> Charlie Marsh is just a damn good developer

curious how you assessed this. did you inspect the code or are you just commenting from the user POV?


When I worked at Khan Academy, Charlie was an intern, and for a few months we worked together on building the KA app for Android. I can vouch that he was a damn good developer then, and I'm sure he's a much better one now.


Eating at a restaurant:

- "This chef is a great cook!"

- "How do you know, have you looked inside the kitchen?"


Code is not food. You can create a great product with shit code (Or just mediocre code!). Does that make someone a good dev, or a bad dev?

How you come to the conclusion he's "just a damn good developer" is a very fair question.


In this analogy, the food is the entire project, and a dev does much more than coding. You can do one thing right with luck, you can't do most things right by luck alone.

The guy is fluent in 2 languages, one of them known to be hard to master; created a tool that replaces several others; got adopted within months by half the community; welcomed 172 contributors on a project that is parsing stuff, a hard problem. Also, the docs are good.

As a professional dev, I never get all those right. Never.

So yes, the food is good, and the chef is excellent to get all that stuff right.


I think it's actually basically the same. Being a clean, organized cook is probably positively correlated with making good food, but it's not strictly necessary. If someone is writing sloppy code but shipping a project of this overall quality without leaking the "low quality" of their implementation, then I wouldn't presume to call them a "bad dev" - I would probably conclude that they're more efficient than I am at prioritizing where to spend organizational energy.


Is the function of the developer to write outstanding code, or develop outstanding products?

I think there's something to be said for praising the developer that is able to understand user needs so well that they create flawless experiences.


>Is the function of the developer to write outstanding code, or develop outstanding products?

Just my opinion of course, but to be considered a "damn good developer," you have to do both.


> You can create a great product with shit code (Or just mediocre code!). Does that make someone a good dev

Yes.


Oddly defensive. Why not answer the question?


Because he disagreed with the implication of the question and thought it was more important to address that?


FWIW, some of my time at Khan Academy overlapped with some of Charlie's and I agree with this assessment :)


I think everyone that's worked with him, would agree :)

For the brief time he was at Cedar, he contributed a lot of high quality code


The rest of the post you're replying to is a list of evidence, that claim was summarizing everything else.


[flagged]


then you probably dont know that i'm actually more supportive of charlie than most and are reading negative intent from a perfectly neutral/innocent question that is perhaps badly phrased :)


I thought more people would recognize your username, but I guess not. I suppose you're more prolific on Twitter than HN.


i mean i kinda like that HN de-emphasizes usernames, conversation should stand on its own if its to be judged by own merit. but i do have 14k karma haha


I don't get it! What's the link here? Sorry if I'm being daft or missing something obvious


> one look at the repository would tell you this person is obviously above average at writing software.

If you can put your knee jerk emotions aside for a minute and reread that comment carefully - the competence of the dev was never in question.


Yes, it was.


No it was not. The question was directed at the commenter about assessing skill, never made any statement disparaging said skill.

Also you seem to be unaware of the relationship of this commenter - you may want to examine their Twitter from before this story was submitted here. Or go on and prove yourself a prat, I won’t stop you.


You're out of line, no need to name-call. I may not want to examine the website and twitter feeds of every commenter on here. Perhaps it wasn't rude, but given how pedantic many people here are, I took the original comment to be a criticism of praising the coder, not a genuine question of evaluating code quality.


> I may not want to examine the website and twitter feeds of every commenter on here.

While that sounds fair enough, the commenter had already noted their relationship within the original thread here by the time you decided to make that completely dismissive reply.

If you’re going to just tritely counter a statement it seems rather foolish not to do a bare minimum of observation.


Is it bad to be jealous of people that are much better than you are?

I mean, for your own sake, ideally you are not, but I think it’s reasonable.


Yet another case of Python developers getting a basic utility which every other language has had available for years, and being amazed at something which is an industry standard literally everywhere else. A linter taking multiple seconds is not a problem which occurs in any other popular language.

It really boggles my mind why this lang is so popular. Once you write something a little more involved than a utility script or Jupyter notebook, you start dealing with stupid problems like

- venv

- no standard package manager, dependency resolution taking forever

- multiprocessing

- untyped libraries (looking at you boto)

- `which python`

- wsgi

- CLI debugging

et cetera.

I'm currently working daily with Python, and compared to the .NET world I'm coming from, it's MINDBLOWING how many things are annoying here. In my previous job I was able to spend several years working on a C# app barely ever needing to touch the terminal, everything came with batteries included, tooling / autocompletion / package management / performance / time spent on dealing with little issues was REALLY good in comparison.

Reason I moved is that it's hard to find a job in C# which isn't soul-sucking stuff like banking / maintenance / insurance, so I'm dealing with it as the project is interesting at least.


I think the main problem for you is that Python is heavily rooted in Unix and, more specifically, GNU/Linux and free software. Also, it has history. Python has been in use since the early 90s. C# is a Java knockoff that appeared almost a decade later.

The intersection of people who enjoy both C# and Python is very small. I too have worked a soul-sucking job in finance and use of languages like Java and C# was a big part of what I didn't like.

It seems there are two ways to react to the "I don't get it" situation:

1. Other people must be wrong in the head to like this,

2. I don't have the knowledge and/or experience to understand why other people like this.

In life I find it's generally better to give the benefit of the doubt and take the second path. But you may still conclude it's the first after all. All I can say is I'm grateful there are others who are wrong in the head like me and prefer Python for whatever reason.


> Linter taking multiple seconds is not a problem which occurs in any other popular language.

It doesn't happen in Python either. So that's good.

> barely ever needing to touch the terminal

Not really a good sign, point-click developers are not usually the strongest. That said I like things to work easily and don't have many quarrels with Python. Certainly fewer than most other languages.

Sounds like you're not very familiar with it and got used to C#. I don't have a problem with anything you list.


> Not really a good sign, point-click developers are not usually the strongest.

In principle I agree, but my point is - it was possible to be fully productive being just a point-click developer.

> It doesn't happen in Python either. So that's good.

pyright + black + isort hook in the project I'm working on takes 1-5 seconds on an M1 mac on save. We're moving to ruff for this reason. Never saw this anywhere else.

> I don't have a problem with anything you list.

To give better examples:

- poetry takes ~900 seconds to resolve dependencies during update/lock

- I had to use multiprocessing where in any other language I would use threads, because of the GIL; inter-process communication is needlessly complex

- I can't dot+tab+enter autocomplete while using boto or many other libraries - due to weak typing, discoverability suffers, and I need to have the documentation always open in another window to do anything. This is not a problem in strongly typed languages.

- wsgi - concept of running separate interpreters for each incoming request is a bit wasteful

- I never had any problem with a broken / incompatible environment like I had with Python. I just install a recent version of .NET / Rust / Node (to a lesser extent) and things generally tend to work. Here I have to worry about specific subversions and conflicts in PATH with other envs and the OS-packaged Python. Avoidable, but noobs are certain to hit this at some point.

- I didn't talk about general speed as it's beating a dead horse, but Python basically nullifies the last 20 years of hardware improvements

> Sounds like you're not very familiar with it and got used to C#.

I also used Rust and Go and was super happy with the experience for similar reasons.


I mentioned elsewhere but might as well here again. These tools should only be run on the files that changed, not every single one in the repo. I run my tools with the script equivalent of:

    git status --porcelain | grep '\.py$' | awk '{print $2}' | xargs tools
And it should exit in a split second. We have a pretty large codebase, though not gigantic.


People seem to forget the three ways to make things faster in computers:

1. Micro-optimisation,

2. Better algorithms,

3. Change the problem.

The potential impact of each of these increases as you go down.

Choosing Rust over Python is micro-optimisation. All things being equal, the impact isn't going to be that great. People are like "let me squeeze out every ounce of power from my CPU so I can run a linter on the entire codebase every time I save". Python folk are like "why on earth would you do that?" We simply changed the problem.


You might be interested in pre-commit which does exactly that! https://pre-commit.com/

It probably will also support Ruff if it doesn't already.
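Ruff does have a pre-commit hook (via the `ruff-pre-commit` repo); a `.pre-commit-config.yaml` entry would look roughly like this (the `rev` tag here is illustrative - pin a real release):

```yaml
repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.262   # illustrative; use an actual release tag
    hooks:
      - id: ruff
        args: [--fix]
```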


Yup, I know it, thanks. Think of the above as pseudo code.

Still, I want some things to run more often than commit, such as pyflakes.


> It really boggles my mind why is this lang so popular.

You are thinking like a software engineer, understandably

If you look at STEM research fields, Python is a solid tool precisely because it is a utility/scripting tool. All the issues you listed rarely occur in that setting.

Most of the Python I write has a very limited lifetime. Typically shorter than the development time.

There are indeed better Python alternatives, but very few use them in comparison


>It really boggles my mind why is this lang so popular

a lot of problems can be solved well with python.

The frameworks are also priceless. What is an alternative to Django, for example?

Nearly every backend service I create is with python, unless it needs speed or rock-solid reliability. For that I use Go. Dynamic UI/UX I use nextjs with python backend.


.NET is a phenomenal alternative to Django with excellent tooling, ecosystem, and performance.

I think the only irreplaceable part of the Python ecosystem at the moment is the math/stats/NN stuff. I wouldn't hire a data scientist who refused to learn Python, but I would hire a web dev who refused to learn it.


> .NET is a phenomenal alternative to Django

Presumably you're referring to ASP.NET, but if not, I'd love to know what you're working with. They're not really all that comparable. If I want to make something very quickly and need it to be trustworthy, I'll use Django, especially with the admin site. That said, these days I'd much rather work in the .NET ecosystem.


> Presumably you're referring to ASP.NET

Sure, but .NET includes ASP.NET.

> If I want to make something very quickly

The difference in setup time between a modern .NET 7 web application and Django is basically nothing. You can have a boilerplate project in a few seconds, and the Hello World for the API side is 4 lines:

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();
    app.MapGet("/", () => "Hello World!");
    app.Run();
> and need it to be trustworthy

I don't understand what this means.

Security? .NET is very likely more secure out of the box because it's supported by a far larger company and is used by so many large organizations.

Lack of runtime errors? You're going to have more in Python because it doesn't have a compiler with strict rules.


Same, the tooling makes a big difference when it comes to languages.

I'm a Dart developer by day, Rust enthusiast by night, both are relatively modern languages and the tooling just feels right (of course nothing is perfect): the analyzer, linter, testing, dependency management, lsp, formatter are all just there.

I never want to go back to 20-30 year old languages where, for all these tools, there are 10 different subpar solutions with three decades of baggage, history, and context (though, of course, congrats on the 1000x speed up).

Python, JavaScript, Java, C++, are all in this category, and honestly, even community favorites like TypeScript and Kotlin have similar warts.


Java is in the same category as C# from the previous post. Mature off the shelf libraries, with a great ecosystem and great integration.


I agree on the venv, 'which python', and packages, and I would add Python 2/3. I work on a few MacBooks and occasionally have to return to Python. It's constant firefighting on environments.


Maybe people should take a closer look at dev containers: https://containers.dev/


Maybe tooling for other languages is faster because the dev has to spend a lot more time specifying a lot more things / being a lot more verbose. The reason tooling on Python is slow is exactly why people love it: it's lightweight and easy to quickly whip up a script. Take a one-line hello world vs having to create an entire project and 10 lines in C#.

Not having to type annotate every tiny thing is also great. You can type only what brings you value and leave the rest to be inferred. It really is the best of all worlds.
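As a small illustration of that gradual style (the names here are invented): annotate the function boundary, and let everything inside be inferred:

```python
def total_with_tax(prices: list[float], rate: float = 0.08) -> float:
    # only the boundary is annotated; `multiplier` and the
    # generator expression are left for the checker to infer
    multiplier = 1 + rate
    return sum(p * multiplier for p in prices)

print(round(total_with_tax([10.0, 20.0]), 2))  # prints: 32.4
```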


The reason tooling for Python is slow is that the language implementation is slow. We get that you love the language, but that does not change the fact that the execution is objectively poorly done.


Choosing a language based on initial setup or Hello World is valuing the wrong things.

You set up an application once, write each line a few times, read it dozens of times, and run it millions of times.

The most costly part of that process is reading/rewriting because developer time is expensive.

Language choice should be optimized to make it easy to read other people's code and to prevent mistakes from getting into production. Python is not near the top of the pack in those dimensions.


> Language choice should be optimized to make it easy to read other people's code and to prevent mistakes from getting into production. Python is not near the top of the pack in those dimensions.

I think most people would definitely disagree with you on the first point. As far as languages go, Python is probably one of the most readable ones out there. It's often even compared to pseudocode because well written Python reads like it.

The latter is a valid point, but it assumes all code you ever write is made for production. That's just plainly false. Python is great for tooling, automation, prototyping, and many other smaller-scale non-production uses. That's where the low barrier to entry really shines.


> As far as languages go, Python is probably one of the most readable ones out there.

I could not disagree more. Python (as most people write it) has many things that make it less readable:

1. Semantic whitespace (absolute disaster for readability, almost as bad as YAML)

2. Choosing to "save keystrokes" by removing helpful visual cues that other languages have, like parentheses, semicolons, and brackets -- these things make scanning much easier

3. As most people write it, very few blank lines, making Python look like an unbroken run-on sentence instead of nice blocks of code

I've been writing and maintaining code for 26 years, and I've worked a lot with C#, JS/TS, Python, and PHP. I know other languages but haven't spent as much time with them.

Out of those, PHP is ironically the most readable because of its $ variables, and then the order is C#, TS, JS, and then Python. Python is hard to scan and just looks like a brick wall at first glance.


Do you honestly think "readability" is only about whitespace and semicolons?

Compare

    namespace HelloWorld
    {
        class Hello
        {
            static void Main(string[] args)
            {
                System.Console.WriteLine("Hello World!");
            }
        }
    }
to

    print("Hello World!")
and

    std::ifstream t("foobar.txt");
    std::stringstream content;
    content << t.rdbuf();
to

    with open('foobar.txt') as file_object:
        content = file_object.read()
How can you honestly look at any of that and claim Python isn't far more readable? It literally reads like English. "With open foobar.txt as file object, content equals file object read". Compare that to whatever monstrosity the ifstream string stream << rdbuf crap is...


> It's often even compared to pseudocode because well written Python reads like it.

This is absolute rubbish. Go and find me a piece of application code which isn’t procedural math/data manipulation, give it to someone completely new to it and see how easily they (don’t) comprehend it.


You don't have to use CLI debugging unless you want to. All popular Python IDEs have integrated debuggers with all the usual features one might expect - not as advanced as modern Java or C# debuggers, true, but definitely light years ahead of debug prints.

It's true that many users avoid them, though, but that's more of a cultural issue, and seems to be more common among DS/ML folk.


> Once you write something a little more involved than a utility script

I only use Python specifically for utility scripts for this reason.


Using something like poetry (https://python-poetry.org) would make your workflow much better. With this tool, you don’t need to care about venv and `which python`.


Roblox is a big C# shop that's pretty fun - platform tech in the gaming space. And hiring!


Coming from Java I have the same thought. Other languages seem infantile in comparison.


User of Ruff here and follower of Charlie's work.

I've been slinging python since 2003, and I've used a pretty wide swath of the toolchain. I've also had the (pleasure?) of using python in a lot of different contexts: desktop applications, web programming, custom scientific calculation plugins, grad school hacks, Maya, and obviously Jupyter notebooks.

My honest take: Toolchain tools like Ruff are the only way the Python ecosystem as a whole moves forward. In order to be broadly adopted by the wide swath of use cases, it needs to be universally applicable and have a killer reason for being (in this case, speed, which opens up new use cases that didn't exist before).

Ironically, the commonality to these toolchain improvements for python ... is that they not be written in Python. If you want good analogues, you can look at the work that others have done with multithreading and trying to bypass the GIL, which is one of my other hobby horses with Python. Hot take: For most users, python is not used for itself, but more to flexibly orchestrate some other low-level problem. This is why Maya, scipy, most of data science, and other DSLs use python so much.

To empower these users, you either need to (1) work in the compiler (2) below the GIL or (3) do the heavy lifting of wrapping around the language flexibility without requiring changing the python code itself. Ruff does that, and I imagine the thesis of Astral is to extend that philosophy to the rest of the toolchain.

Lastly, in the spirit of this site, I'll give my second spicy take: I think web development in python is on the decline, and the future of python is in data science and related fields. These fields care a lot about a fast toolchain, and will use ruff and other tools to achieve those ends without modifying legacy code. For web, node has won. I know people who use python for the web, but you can't really beat needing to learn just one language instead of two to build a web app.


>Hot take: For most users, python is not used for itself, but more to flexibly orchestrate some other low-level problem.

Well, there's nothing wrong with that.

In fact, a great "glue language" (and Python sure needs lots of improvements in many areas) is kind of the holy grail of IT!


Ha! And I remember when Perl was the default "glue language".

It's been a long time since I've seen Perl, I bet it's still kicking around somewhere but I doubt they teach it in undergrad anymore.


Having written a bunch of Flake8 plugins, including custom plugins for internal use at a company, and using 5-10 popular plugins on every Python project I work on... rewriting the entire linting ecosystem in one monolithic Rust tool doesn't feel like the best solution. There's no good story for building an ecosystem around this yet, and I think that's a big hurdle to overcome.

Ruff is fast, sure, but the benchmarking seems a little disingenuous, as I believe their number includes caching, but doesn't necessarily include caching for other tools. In fairness, not all the other tools have caching, but it is common to run them through pre-commit and therefore only on the current git diff, which speeds them up by orders of magnitude.
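For reference, "running them through pre-commit" means the linter only ever sees the files staged in the current commit; a minimal sketch of such a config (the `rev` pin here is illustrative, not a recommendation):

```yaml
# Hypothetical .pre-commit-config.yaml: pre-commit passes each hook only
# the changed files, which is why even slow linters feel fast this way.
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
```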


> Ruff is fast, sure, but the benchmarking seems a little disingenuous, as I believe their number includes caching, but doesn't necessarily include caching for other tools.

It looks like the ruff benchmark is run with the `--no-cache` arg: https://github.com/charliermarsh/ruff/blob/main/CONTRIBUTING...


There's no reason your custom bespoke plugins couldn't be called by ruff as necessary. It's silly to burden the happy path of 99% of users who just need common sense python linting with those edge cases and custom needs.


That could be done in one of two ways:

- Supporting flake8 plugins, using the existing community, and sacrificing the speed.

- Requiring new plugins, in Python or another scripting language, sacrificing the community progress and goodwill, and sacrificing some of the speed.

Neither of these options is good. The Python linting ecosystem is a mature one with a lot of investment into the existing tools, and rather than try to speed those up (which could be done in a number of ways), Ruff started from scratch.

It doesn't feel like the right decision for an ecosystem that is as community focused as Python, and the engineering reasons feel like a toss up at best.


???

There is no downside to adding your bespoke flake8 plugins. For people that don't use them (99% of people) they get the benefit of blazing speed. For your custom plugins you live with the tradeoff of slower execution to do those AST passes with flake8/python tools. Even if ruff didn't exist you would still be burdened with your slow flake8 plugins speed. There is zero downside to you and only upside to people that aren't you.

Kinda just sounds like you're grousing because somebody moved the cheese.


It's not just "custom" plugins, it's all third-party plugins though right? If someone wants to publish a new linter for something, right now they can, and others can use it easily, but Ruff centralises that and makes it harder.

You're right that it will probably still be faster overall because "most" linting will be done with Ruff and any extras would be done externally, but now you've either got 2 tools when you had 1 before, or you've got to shell out to Python which adds overhead, or you've got to rewrite plugins in a Ruff-compatible format, or something else.

> Kinda just sounds like you're grousing because somebody moved the cheese.

I'm just disappointed that someone looked at slow linting and decided the answer was their own new tool, rather than participating in the existing community. Now the effort has forked, it'll take more work overall in the community, and we were already lacking engineering resource.


I'm disappointed the flake8 community hasn't prioritized performance and has led to python linting being much less widely utilized. I'm looking forward to tools like ruff giving much faster and more usable linting.


That's what esbuild did for its plugins, which made them useless: calling back into JS kills any performance gain of switching to esbuild in the first place


Yes, but the point isn't that esbuild has magic pixie dust that makes nodejs AST processing magically faster; that's objectively impossible. The point is that you can migrate to the new system over time without losing critical plugin functionality right now. It's not to live in a steady state with old, slow nodejs plugins.


Having now switched several projects from webpack+babel to esbuild, with a bunch of plugins to replace various bits of resolution magic, SCSS compilation, and more, the performance gains are consistently astronomical. Simple builds go from taking several seconds to consistently completing in an unnoticeable amount of time. Full rebuilds are still significantly faster than incremental (--watch) builds used to be.

Setting this up takes a bit more work than for webpack, and it's easier to run into limits on what's reasonably possible, but I don't intend to ever go back.


I agree with all of the naysayers here.

Speed is not my problem today, so what's the point if it's not solving my personal problem? Speed was my problem yesterday, and probably will be again tomorrow, but today it isn't and I don't know why random people I don't know aren't invested enough in solving my today problem.

Even worse, they're trying to get paid for it!

Seriously: what does it take to impress people? A mere 1000x speed increase and single point of configuration isn't good enough? Personally, I've waited for my much slower tools to finish plenty of times, I've only run a subset of the tools because I didn't want to wait for all of them to finish every time, and I've avoided configuring them because I'd have to figure out which one does what and how to configure each one.

Yes, it's a rearchitecture, which brings with it the pain of rearchitectures—mainly not supporting the bespoke tools built with the old architecture—but isn't it a good idea to identify when the existing base is problematic and be able to demonstrate that a superior solution could be gained by starting over? And they've even gone to the effort of bringing in the 90% case by encompassing the functionality of several existing tools!

I would understand the complaints better if you were somehow suddenly unable to run any of your existing stuff, but this isn't an incompatible upgrade to an existing project.

</rant>


> I agree with all of the naysayers ... Speed is not my problem today

> A mere 1000x speed increase and single point of configuration isn't good enough?

> this isn't an incompatible upgrade

I genuinely can't understand if you are for or against this tool.


He's being extremely sarcastic. He's massively for it


The user expresses their frustration with the critics by sarcastically asking what it would take to impress them. They believe that the new technology offers significant benefits and that the complaints seem unjustified.


HN is never happy. Everything should be fast, simple, configurable, extensible, portable, retro-compatible, future-proof, environmentally-friendly, inclusive, offline-first, and above all free.

For some people, nothing is ever good enough. Such is the nature of getting feedback from strangers. The sooner you learn to filter out the naysayers, the better.

This tool is probably fine. But positive feedback is usually expressed as an upvote.


> Seriously: what does it take to impress people? A mere 1000x speed increase and single point of configuration isn't good enough?

It is a great accomplishment for ruff the project. But the topic here is the launch of Astral the company, not ruff the project. And a 1000x speed increase is not a business plan.


Meh. Fair point, but that isn't the only topic being discussed. And when the topic is whether ruff the tool is worthwhile, the discussion is more a pile of complaints.

I agree that the topic you're describing, and that is suggested by the URL posted, are worth talking about. It makes me nervous to invest time into a VC-backed linter. Though it seems useful enough to be forkable if things go sour.

"VC-backed linter" is a bizarre phrase, though honestly I think you could have described something like Purify in Ye Olden Days as a linter, and it was worth spending money on. (And the issues with being VC-backed go well beyond simply being whether or not something is worth spending money on.)


A faster version of black is not a strong enough value proposition for an entire company. Ruff should remain an open source product. What is the point of making every half decent Developer tool a whole startup?


That's just a foot in the door.

Like sentry made a good logging library, then pivoted to an observability service.

And today I use sentry because I have a great history with their product.

It's smart and a positive way to make money.

I dig it.

PS: ruff is not replacing black (although it probably will in the end); it competes with flake8 and pylint.


Sentry had a natural path into cloud services because error monitoring has a server-side component to it.

Serious question, what is the path for a linter? Where else are people paying for linting as a service?


A lot of companies would pay actual money for some semblance of supply-chain security. Hosted, verified, certified Python dependencies. This is how Red Hat made all their money in Linux. Something like "use our vetted and secure pypi instead of the free-for-all full of typosquatters and package takeovers that the public pypi.org offers."

Starting with some nice developer tooling and going from there doesn't seem crazy at all.


> A lot of companies would pay actual money for some semblance of supply-chain security.

After the core-js debacle[0] earlier this year, it was evident that a lot of companies actually do not care about supply-chain security.

Those that do will happily roll their own hosted repositories that provide little to no guarantees.

[0] https://github.com/zloirock/core-js/blob/master/docs/2023-02...


That's Anaconda's business model.


I don't know their plan, so I can't speak for them.

What I would do is build an entire ecosystem of that quality that would include a tool to solve the Python distributions problem.

Either you help with deployment on the server, and you offer hosting.

Or you help with making an installer, and you offer nodes to build installers for multiple OS and upload to multiple app stores, manage updates, cdn, permissions...

You can even start small and just help with a service for cross-compiling C extensions and scale from that.

Or provide machine learning analysis of the quality of your code and make companies pay for it.

Or go full Continuum.

They are good enough that they can pick and choose whatever they want, really.

When you solve pain, people pay. If readthedocs managed to survive by being a static rst site, Astral has a shot, provided they keep the business side of things in mind as nicely as they build their user stories.


> Serious question, what is the path for a linter?

The natural end state is yet another build service.


SAST/DAST products cost truckloads of money, so that’s a possible direction. The linter is a tech demo in this case.


Some SAST products cost a truckload of money.

There's a FOSS SAST product for Python already, though, called Pysa: https://pyre-check.org/docs/pysa-basics/


Cloud linting! You send your content hash to the cloud, and it’ll tell you what errors you have.


I'm inclined to agree. And when they eventually die, what happens to the tool they built?

I've been using https://rome.tools and really love the work they put into it. It's clear they had people working full-time on it. But now what? The code is open-source, there are people working on it, but development has mostly dropped off. I guess that's okay? It just adds a lot of doubt about the longevity of the project.

I'd be wary of adding these tools to your stack because their progress relies pretty heavily on VC funding and a tight runway to profitability.


> What is the point of making every half decent Developer tool a whole startup?

I mean, paying people who work on it, for one?


I’m not against folks getting paid. My question is how does one build an entire company selling a slightly better version of a widely used open source tool? Ruff is not strong enough to build a company from.


> X is not strong enough to build a company from.

I mean, these are the famous last words of a lot of non-visionaries. I'm not saying that Ruff is some kind of unicorn, but there are a lot of cases where a seemingly small improvement on an existing technology resulted in a very successful enterprise. Docker, for example. There are others that I'm sure people will chime in with.


Maybe your answer is correct, but it is not useful at all. It doesn't answer the question by offering real solutions.


Python has become a hugely critical language for science, AI/ML, finance, etc. and the current state of tooling has lagged the language's importance. I can think of lots of ways a company that solves that problem could add adjacent products and monetize them. Enterprise support, tooling for building and deploying custom lint rules, supply-chain security, managed builds, etc.


I agree and would ask: is linting that much of a bottleneck to development?

Having an automated lint run upon opening a PR seems like a minor expense, especially when you can work on other tickets while you wait.


It's much more noticeable when running locally. Going from something like black + pylint + mypy running in a pre-commit hook to black + ruff + mypy has been wonderful for me.

It lets me actually set up another terminal session to run ruff on every file change - where pylint would take seconds, ruff is essentially instant.

Side note: I really hope mypy can get the same treatment; it runs quickly once its cache is established, but it's terribly slow running from scratch.


I'm currently speeding up Mypy + Jedi in Rust. I'm pretty far already and it's definitely a lot faster. Tests are currently running 500 times faster than Mypy.


These don't have to be run on every file in the repo, only the ones that have changed. At least not often. If it takes seconds, there is something wrong.


this is too narrowly focused on what Ruff does today and not enough on what it could do for the Python ecosystem as a whole. his focus and clear execution have built an incredible wedge and brand, and his next product will probably be well received, and the next, and the next.

the opportunity to bring speed and sanity to the whole Python ecosystem's tooling is large (if you don't feel the pain, you don't do enough python) and honestly i can't believe no one has been crazy enough to try this since Anaconda.


I agree with you. On the other hand, if you build something cool you may want to make some money... The problem, imo, is changing the pitch midway through. Ahem, OpenAI.


I agree, but I also don't make the decision. The free market decides whether their value proposition is enough, so the company's success is dependent on the developer demand.


This along with pydantic [0] means that 2/3 of my favorite python open source projects are now commercially backed. I wonder how long FastAPI will last?

As an aside, what is the issue with versioning these days? Ruff and FastAPI both have massive user bases and a reliable codebase, but haven't released v1.0.0! Ruff hasn't even got as far as v0.1.0.

[0]: https://techcrunch.com/2023/02/16/sequoia-backs-open-source-...


You aren't the only one that has noticed:

https://0ver.org/

More seriously though, for these projects, the first version is version zero. If they make no major backward incompatible changes, why would they ever release a version 1?


Because version 1 is the first version with any actual guarantees: https://semver.org/spec/v2.0.0.html#spec-item-4


I wonder if the eventual goal of this is similar to deno, where it's about building really good, no-hassle tooling/ecosystem for a language and then making it very simple to deploy/host (if I'm misspeaking about deno please correct me, I'm not well versed on it).

One thing I like most about Go is actually the Go tool; having a ubiquitous linter, formatter, test framework, dependency management, etc, all built in and not having to install various tools is huge imo. I think a lot of languages are missing this ease of tooling. I think (?) this is what deno/astral is trying to address for javascript/python and then the business model is once you're using it, it's simple to host with them. Curious what other think


Speed is a nonexistent problem for linters. Pyflakes is quick enough. I spend more time thinking than writing code. I write in one file at a time, which can be linted in sub-second time.

How does it compare with flake8 in error messages? Does it find more errors? Does it have fewer false positives? Does it integrate well with other developer tools? Does it have sane defaults?

These are the really important questions that aren't answered in the site.


I disagree about the speed. I don't have a problem with pyflake, but running `time ruff -s .` in a project with 1,236 Python files took 78ms. At that speed, it could re-check the file I'm working on in an editor after every keystroke with no noticeable latency. It's not just a little bit faster. It's freakishly, ridiculously, gone-plaid faster.

Edit: for comparison, flake8 took 8.19s and found approximately the same number of issues. pyflakes took 4.49s and found fewer.


Don't need to lint the whole repo, just the files you're working on currently. Git can tell you what has changed.


That’s not true unless you have a dependency map between all modules. (Note: that’s what I did in pytest-fastest to only retest modules that had changed, or that imported modules that had changed.) Otherwise, if you rename a function, you wouldn’t know what all broke.
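The dependency-map idea could be sketched with the stdlib `ast` module. A minimal illustration (the function name is mine, and this is not how pytest-fastest is actually implemented):

```python
import ast


def imported_modules(source: str) -> set[str]:
    """Return the set of top-level module names a source file imports.

    Inverting this mapping over a whole repo tells you which files to
    re-lint or re-test when a given module changes.
    """
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods
```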


You don’t need to check everything, every time. (Another such case is when you want all types checked.)

A good time for a full check is when a new feature is complete. Run all linters, type checkers, and the full test suite, until clear. Then commit and push.


There's three relevant speeds for linters:

1. Fast enough to be live updating as you type in an IDE.

2. Slow enough that running a linter has to be a separate action you take, but fast enough that you don't go do something else while it's running.

3. So slow that it's an asynchronous task that you launch and then come back to later.

Ruff is in the first category, while most other python linters are in the second. This level of performance enables a qualitative difference in how you interact with the tool. If you are invoking it as a separate task, then going from 500ms to 50ms is indeed not very interesting, though.


Speed is a problem for linters. If I spend 200ms of CI time instead of, say, 5 minutes, that’s a real difference in toil and CI spend.

Ruff has an LSP too, so it integrates well with editors, unlike flake8 and similar tools, which only run on save — and are really slow.


No linter takes 5 minutes to analyze a bunch of modified files for your next commit.


> How does it compare with Pyflake8 in error messages?

https://github.com/charliermarsh/ruff#rules

> Ruff supports over 500 lint rules, many of which are inspired by popular tools like Flake8, isort, pyupgrade, and others. ... By default, Ruff enables Flake8's E and F rules. Ruff supports all rules from the F category, and a subset of the E category, omitting those stylistic rules made obsolete by the use of an autoformatter, like Black.

You can see the current list of supported rules here: https://beta.ruff.rs/docs/rules/

There's also a checklist on this PR which tracks progress on implementing pylint compatibility: https://github.com/charliermarsh/ruff/issues/970
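As a point of reference, selecting rule categories happens in `pyproject.toml`; a minimal sketch (the exact codes chosen here are illustrative, not a recommendation):

```toml
[tool.ruff]
# E: pycodestyle errors, F: Pyflakes, I: isort-style import sorting
select = ["E", "F", "I"]
# Leave line-length complaints to the autoformatter, per the note above
ignore = ["E501"]
```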


I am at once:

- happy because Ruff deserves full time focus and having a team that can focus on tooling as their main product (not a nights&weekends hobby) is a clear win for everyone;

but also

- concerned because VC is not charity, and this must surely come with strings attached (in terms of future growth); I haven't seen many VCs aiming for "sustainable profitable business providing great value for the community" type exits; then again, if this turns into a Hashicorp-type story that wouldn't be too bad an outcome either


If your codebase is as large as cpython's, I think the benefits are clear. But most projects aren't actually this big, right? Flake8 runs in less than a second on even my biggest projects. Is linting such a bottleneck in people's workflows? Why bother shaving a couple seconds off a linter when your test suite and Dockerfiles take minutes to complete?

I don't really see why I should care about ruff.


If both run under 500ms, I guess it's your choice. Just pick the one that's most productive for you. If flake8 takes longer than 1 second, I'd replace it with ruff. 1 second is a really, really long time for a linter, even if your codebase is 1 million lines of code.


I'm glad people are continuing the promise of Rome. High quality, high performance programming language tooling is a great mission and very much an unsolved problem. And tbh, the Python ecosystem is a great space. There's plenty to be built.


I don't use Rust - but can't deny Rust is saving the environment.

countless CPU hours wasted running dev tools written in slow-as-molasses languages are now being recovered as those tools get rewritten in Rust.

(build/lint time saved) × (number of runs) × (CPU power in kW) = kWh saved
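For concreteness, here's the formula above with some numbers plugged in (every figure here is a made-up assumption, not a measurement):

```python
# Back-of-envelope version of the energy formula; all inputs are assumptions.
seconds_saved_per_run = 8.0   # e.g. flake8 ~8s vs ruff ~0.1s on one repo
runs_per_day = 200            # CI jobs + local runs across a team
cpu_power_kw = 0.065          # roughly 65 W for a busy core

kwh_saved_per_day = seconds_saved_per_run * runs_per_day / 3600 * cpu_power_kw
print(f"{kwh_saved_per_day:.4f} kWh/day")  # a few hundredths of a kWh
```

Even generous assumptions yield a small number per team, which is worth keeping in mind alongside the replies here.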


This is a drop in the ocean, and really not saving anything meaningful.


Hopefully. But oftentimes the result is people just use more of something when it's cheaper.



A bit optimistic to say "sometimes" IMO


heh, I would actually be curious about that environment claim once you balance in the power consumed by compiling Rust; i.e. the savings for its users weighed against the horrific wattage drawn on developers' workstations, cloud, or CI servers

I now have to compile Firefox overnight because I can no longer compile it and do something else with my (core i7) laptop simultaneously


Sure, as long as there are no tradeoffs involved. But there are. Opportunity cost for one.

Your saved energy calculation is the absolute upper bound possible, the real value is likely close to 0 or even negative (crypto).

Even the upper bound you've calculated is an infinitely small amount of energy compared with everything else we use energy for at the global scale. It's irrelevant.

If you want to save the world, do something that has a clear positive relationship towards it. Ditch your car, plant a tree, vote for the right politician.


Those progress bars with the times of the other linters - is that a joke, or is it data from a stupidly enormous codebase? Or are slow (>1s) linters something that is normal in python?


It's linting the entire CPython codebase and standard library, which is quite large (possibly one of the largest python codebases). But yes, pure python linters will be inherently quite a bit slower than a native implementation. The point of ruff is that it's so fast you forget or never even notice it's running. Every change, every little modification, etc. should always be linted with near instant feedback.


I worked on a 50k-100k loc python codebase where the linters took a full minute to run if you deleted your cache. And I did try to optimize it. I wouldn't be surprised if that's normal.


If you have to do things like parsing or AST walking a lot, you run into the slow performance of the core bytecode interpreter pretty quickly. For Python, this is a compute-heavy workload.
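A pure-Python linter spends its time on exactly this kind of tree traversal inside the bytecode interpreter. A toy sketch of the pattern (illustrative only, not how any real linter is structured):

```python
import ast


def find_functions(source: str) -> list[str]:
    """Parse a source string and collect the names of function definitions.

    Real linters make many passes like this over every file, which is why
    the interpreter's per-node overhead adds up so quickly.
    """
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]
```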


I think HN should ban people who are too lazy to put a proper subject line on their post.

I mean, "Astral" .... what's that ? The "astral.sh" domain doesn't tell me anything either.

I'm sure those in the know automatically know what it is, but for the rest of us, the title is completely and utterly meaningless.


Banning is a little extreme, but I agree that title is unhelpful. Better title would be "Ruff is a fast Python linter written in Rust" (which is a slight paraphrase from one of the subheads).


Or "Ruff creators found Astral Software Inc." or something.


Editorializing is discouraged, and your post will end up edited to match the original URL’s title either way.


especially when it doesn't seem to be the ruff founder who submitted this particular post. maybe dang or someone will rename it something more appropriate, but from the hn comments it's pretty clear what the context is


Agreed. I have a policy of _never_ clicking on a link if I don't have at least _some_ idea of what's behind it, and that includes HN. It's the top link on HN right now and I came to the comments just to see what it was even about.

And yes, I have regretted clicking on HN links before.


+1. Comes across as a bad attempt at gaining publicity.


Although I'm sure it is faster, comparing "linting" speeds as a black box strikes me as bullshit artistry (for any language other than Python, perhaps)


I hope they eventually attack package management and easy static bundling. It would make Python more attractive for people coming from other ecosystems.


I wonder if they'll try to bring in Pyflow¹, a rust-based python package manager, under the Astral banner. Could make a lot of sense!

¹: https://www.github.com/David-OConnor/pyflow


I cannot comment on the feasibility of building a profitable open source business around Python tooling, but I can say that I'm very very glad someone is taking that risk: both Charlie in career/business risks, and the VCs in taking on the financial risk. I just get to benefit - ruff is really nice and I anticipate big improvements to Python tooling as a result, at no cost to me.


If they can get a good type checker into ruff, or produce one that improves on mypy as much as ruff improves on pre-existing tools ... well.


I really hope they develop a faster mypy. In my dev workflows, mypy takes most of the time. Linting is a small fraction of it.


I'm currently speeding up Mypy + Jedi in Rust. I'm pretty far already and it's definitely a lot faster. Tests are currently running 500 times faster than Mypy.


A VC raise gives the team cash to work on this full-time, that's great news.

What worries me is: what possible path exists to VC returns that is not at odds with customer happiness? The customers being Python developers.


In the last few weeks I've been using Python to play with some LLMs locally, and at first I thought there was some bug in VS Code because the linting was so slooooooow.

This seems long overdue! Looks like a great project :)


I'm surprised PyLance isn't in the list of linters that they benchmarked. It's the fastest one by far, in my experience, and it's conveniently available in the VSCode Python plugin.


Just pointing out that Pylance is a VSCode wrapper around Pyright, which is a static type checker with no primary linting features. The separate VSCode Python plugin however integrates with pylint, flake8, pylama, bandit, etc. and I agree that it is very good.


Somehow, ruff is linting my entire codebase 350x faster than pylint did.


What’s in the roadmap for what will be built by Astral?


they sure know how to make a website! quite a change from ye old README.md


Perfect timing, I’m excited to see what Astral will bring.

The python ecosystem keeps building momentum as “doing things with data” becomes bigger, more accessible, and also more (near) real-time and I think the ecosystem would really thrive with better, unambiguous tools that become de facto to the community of builders instead of plenty of suboptimal ones to choose from.


Congrats Charlie! Excited to see you launch and I wish you the best. Ruff is awesome and y'all are doing great work.


I do a fair amount of python scripting but I don't work on code bases or anything.

Can someone help me along what this tool would help me with? I think I've been stuck in the scripting world for far too long.

Edit: .... oh I've been using one of these for ages and didn't even realize it.


This is great, I'm going to switch to this from black. Being used to working in other languages I feel like I'm swimming in molasses when using Python.

It's funny that all good things for Python are not written in Python. Says a lot about the language.


It doesn't replace black, at least not yet. It sounds like autoformatting is planned.


Yeah, just realized that and was about to edit my comment. Found an issue tracking the black replacement bit: https://github.com/charliermarsh/ruff/issues/1904


Python needs something like pip and poetry that is (1) correct, and (2) faster than poetry.


This might be Pyflow, written in Rust: https://www.github.com/David-OConnor/pyflow


Thanks for mentioning this, wasn't aware of this potential alternative to poetry (which I keep making mistakes with, since I only use it every few months or so).


Super excited about ruff improvements, but I gotta comment that the CSS for a clicked link in astral.sh looks like normal text. I clicked a link, hit back in Android Firefox, and couldn't find the link again.


There's already a Python package named Astral: https://sffjunkie.github.io/astral/


This sounds nice, but um, what's the benefit to faster linting? Is it being slow really a problem people have? I've no experience of this.


18 minutes of CPU time vs 22 seconds of CPU time is significant energy-wise too.

(Example from https://github.com/home-assistant/core/pull/86224 by yours truly.)


That's a great point. The GWP of computation is hugely underappreciated.

(Also, nice work!)


Just because you don't have the issue doesn't mean it's non-existent. We went from 8ish seconds with flake8 to tens of milliseconds with ruff. Ruff can just run as a pre-commit hook because it's so fast.

I for one have stopped using flake8 in favour of ruff because of both speed and the huge amount of supported rules in ruff already.
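For reference, a minimal `.pre-commit-config.yaml` entry along these lines works; the hook repo and `rev` below are illustrative, so check the Ruff docs for the current values:

```yaml
repos:
  # Hypothetical pin -- substitute the current hook repo and revision
  # from the Ruff documentation.
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.262
    hooks:
      - id: ruff
```

Because Ruff finishes in milliseconds, the hook adds essentially no latency to commits.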


> Just because you don't have the issue doesn't mean it's non-existent.

Indeed! Knowing this is why I asked! Having only worked on small projects I've never had an issue with flake8 as a precommit hook, but what you describe makes it sound compelling.


OT... but brilliant site design. Compliments.


Seems cool but i wish they had an auto-formatter.

After years of coding, I have found that removing the opinionated choices and doing a format-on-save (ideally supported by the language itself, as with Go's gofmt) is by far the most productive approach.

I'm sad that Python is still prevalent everywhere, given how terrible the language and its tooling are, but it seems it's not going away with the AI wave, so companies like this will become more valuable.
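As one data point, format-on-save is already workable in editors today. A sketch of a VS Code `settings.json` fragment (setting names as of this writing; treat as illustrative, since the Python extension's formatter integration changes over time):

```json
{
  // Format every file on save; pair with a Python formatter such as Black.
  "editor.formatOnSave": true,
  "python.formatting.provider": "black"
}
```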


nowadays, it's cool to say this new toy is faster than the rest because it is written in Rust without providing further details?

When I visit the Bun project landing page[0], I get concise reasons as to why bun is faster than its peers.

[0] https://bun.sh/


> nowadays, it's cool to say this new toy is faster than the rest because it is written in Rust without providing further details?

What more details do you want? You run it, and it runs in milliseconds rather than seconds. I don't care if it's because it's written in Rust or because they sacrificed a goat to Baal, I care that it's fast.


> nowadays, it's cool to say this new toy is faster than the rest because it is written in Rust without providing further details?

It's usually true, too.


Don't have an opinion about it yet, but I love the website - it's really fast !


The site is very well done and impressed me too. It's built with Next.js, and I noticed Astral is backed by Guillermo Rauch himself.


Is there a similar endeavour for Ruby?


Does anyone have recommendations for a good jinja linter?


How well would this mix with something like fastapi?


Testing do i be shadowbannered


Good for him.

I’m less enthusiastic about them hiring one of the core contributors of Rome away though.

I don’t care about Python, but I very much care about Typescript.


How are they going to make money?


Does it support type inference and stuff?


https://beta.ruff.rs/docs/faq/#how-does-ruff-compare-to-mypy...

> Ruff is a linter, not a type checker. It can detect some of the same problems that a type checker can, but a type checker will catch certain errors that Ruff would miss. The opposite is also true: Ruff will catch certain errors that a type checker would typically ignore. ...
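A toy sketch of that split (the specific code below is illustrative, not from the Ruff docs): a linter flags dead code like an unused import regardless of types, while a type checker flags the mismatched call that a linter runs right past.

```python
import os  # a linter (e.g. Ruff's F401 "unused import" rule) flags this;
           # a type checker typically does not care


def double(x: int) -> int:
    return x * 2


# A type checker rejects this call (str where int is expected), but a
# linter sees nothing wrong -- and at runtime Python happily repeats
# the string instead of doubling a number.
result = double("nope")
print(result)
```

Running both tools side by side covers both classes of mistake, which is why the FAQ recommends pairing Ruff with a type checker rather than treating it as one.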


just tried. damn it is fast.


Yet another linter.


I'm trying to use this via ssh into another instance on VSCode, but it doesn't work, any help?


What is it beyond a linter?

Also: speed is good, but when it comes to linting, that's not what I'd place first as a feature.

How good it is at spotting my potential mistakes would come before speed, for sure.

Hence: it's fast, but is it in any way better than the other solutions out there?


Is anyone else sick of seeing "yet another" Python tool? Python is so slow and irrelevant by so many standards nowadays...

Fully prepared to be downvoted for these because I am all too aware of Python's (IMO) undeserved popularity, but the facts are clear: Python lags in performance against nearly any other "backend" or "scripting" language (compare to C, C++, C#, Go, Rust, and many more)

My reasoning: every day I see stack overflow filled with repetitive Python questions, and random finance furus talking about their cool data analysis tools, all in Python, all with poor performance and hacked together with god knows how many libraries.

The problem is indeed because Python is so accessible: anyone who can write Python doesn't know enough about computing in general to even know what the concept of performance is, let alone risks of relying on 1903287012 libraries to get the job done.

Hell, I know half a dozen companies still using Python 2... jeez

Thanks for coming to my TED rant.



