Hacker News | epage's comments

What was it about headers?

In terms of rebuild performance from separating the interface from the implementation, there is interest in having rustc handle that automatically; see https://rust-lang.github.io/rust-project-goals/2025h2/relink...


That change sounds like a big improvement if they manage to do it!

There are at least workarounds for a surprising number of languages: https://dbohdan.com/scripts-with-dependencies

I read up on the details of each one when proposing the design for cargo script, to see what we could learn.


I would argue that third-party tools don't really cut it, because a lot of the value is being able to include a script inline in, e.g., an email or chat message, and that's undermined if the recipient has to download and install a separate tool. (uv gets half credit because adoption is rapidly rising and it has a shot at becoming a de facto standard, but I'll only award full credit to Python when pip supports this.) They're good for exploring the design space, though.


Fully agree. I had a section in the Cargo RFC devoted to why a first class solution is important, see https://rust-lang.github.io/rfcs/3502-cargo-script.html#firs...


As the person designing and implementing cargo script integration into Cargo itself (there have been many third-party implementations in the past), I was both surprised and glad to see it in the wild and called out like this!

Docs are at https://doc.rust-lang.org/nightly/cargo/reference/unstable.h...
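
For the curious, here is a minimal sketch of what such a script looks like on nightly, per the docs above (the dependency choice is just an example): the manifest is embedded in a `---` frontmatter block at the top of the file.

```rust
#!/usr/bin/env cargo
---
[dependencies]
regex = "1"
---

fn main() {
    let re = regex::Regex::new(r"\d+").unwrap();
    println!("{}", re.is_match("cargo script 2025"));
}
```

Run it with `cargo +nightly -Zscript repro.rs`; Cargo builds the embedded dependency list and caches the result between runs.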

Yes, there has been a long road to this in defining what this should look like, how it should interact with the language, what is the right scope for the initial release, and so on.

At this point, I'm doing what I hope is the wrap up work, including updating the style guide and the Rust reference. The big remaining work is in details I'm working out still for rustfmt and rust-analyzer. Other than those, I need to get to a bug fix in rustc and improve error reporting in Cargo.

For myself, I use cargo script nearly daily: I write a script whenever I'm creating a reproduction case for an issue I'm working on.


T-lang and T-cargo still decided to add frontmatter to Rust, see https://rust-lang.github.io/rfcs/3503-frontmatter.html


What `cargo being cargo` problems are you having?


How does this compare to `cargo check --timings`?

It visualizes each crate's build, shows the dependencies between them, shows when each crate's initial compilation finishes (unblocking its dependents), and will soon include link information.


Let's play this out in a compiled language, like Rust with Cargo.

Suppose every dependency were pinned with `=` and Cargo allowed multiple versions of SemVer-compatible packages.

The first impact will be that your build will fail. Say you are using `regex` and you are interacting with two libraries that take a `regex::Regex`. All of the versions need to align to pass `Regex` between yourself and your dependencies.

The second impact will be that your builds will be slow. People are already annoyed when there are multiple SemVer-incompatible versions of a dependency in their tree; now that can happen to any of your dependencies, and you are left working across your entire dependency tree to get everything aligned.

The third impact is if you, as the application developer, need a security fix in a transitive dependency. You now need to work through the entire bubble up process before it becomes available to you.

Ultimately, lockfiles are about giving the top-level application control over its dependency tree, balanced against build times and cross-package interoperability. Similarly, SemVer is a tool for any library with transitive dependencies [0]

[0] https://matklad.github.io/2024/11/23/semver-is-not-about-you...


This scheme _can_ be made to work in the context of Cargo. You can have all of:

* Absence of lockfiles

* Absence of the central registry

* Cryptographically checksummed dependency trees

* Semver-style unification of compatible dependencies

* Ability for the root package to override transitive dependencies

At the cost of

* minver-ish resolution semantics

* deeper critical path in terms of HTTP requests for resolving dependencies

The trick is that, rather than using crates.io as the universe of package versions to resolve against, you look only at the subset of package versions reachable from the root package. See https://matklad.github.io/2024/12/24/minimal-version-selecti...


Note that the original article came across as there being no package unification. A later update said that versions would be selected "closest to the root".

I was speaking to that version of the article. There was no way to override transitive dependencies and no unification. When those are in the picture, the situation changes. However, that also undermines one of the article's arguments against SemVer:

> But... why would libpupa’s author write a version range that includes versions that don’t exist yet? How could they know that liblupa 0.7.9, whenever it will be released, will continue to work with libpupa? Surely they can’t see the future? Semantic versioning is a hint, but it has never been a guarantee.


Is that what Go does? I always thought their module versioning system sounded well thought out (though I gave up on Go before they introduced it so I have no idea how well it works in practice).


> All of the versions need to align to pass `Regex` between yourself and your dependencies.

In nominally typed languages, all types have to be nominally the same, yes, but it does not follow that semver compatible packages will not permit passing different versions of each around: https://github.com/dtolnay/semver-trick (the trick is to ensure that different versions have nominally the same type by forward dependency).

Anyway, even with this, Cargo has a lockfile because you want CI to run what you ran while developing the feature, instead of getting non-deterministic results from an automated build-and-verify; that's what a deterministic build is aiming for. Then, for the next feature, you can ignore the lockfile, pull in the new dependencies, and move on with life, because you don't necessarily care about determinism there: you are willing to fix issues. Or you aren't, and you use the lockfile.


> All of the versions need to align to pass `Regex` between yourself and your dependencies.

No, they don't. As the article explains, the resolution process will pick the version that is 'closest to the root' of the project.

> The second impact will be that your builds will be slow....you are working across your dependency tree to get everything aligned.

As mentioned earlier, no you're not. So there's nothing to support the claim that builds will be slower.

> You now need to work through the entire bubble up process before it becomes available to you.

No you don't, because as mentioned earlier, the version that is 'closest to root' will be picked. So you just specify the security fixed version as a direct dependency and you get it immediately.


Wasn’t the article suggesting that the top level dependencies override transitive dependencies, and that could be done in the main package file instead of the lock file?


That was only in an update, not in the original text.


You should not be editing your Cargo.lock file manually. Cargo gives you a first-class way of overriding transitive dependencies.
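
For reference, that first-class mechanism is the `[patch]` table in the root package's Cargo.toml (the git URL and branch below are hypothetical):

```toml
# Redirect every copy of `regex` in the dependency tree to a patched
# source; Cargo rewrites Cargo.lock accordingly on the next build.
[patch.crates-io]
regex = { git = "https://github.com/rust-lang/regex", branch = "security-fix" }
```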


You can also run `cargo update -p <name>` (optionally with `--precise <version>`).


Java is compiled, FYI.


And interpreted.

Some call transforming .java to .class transpilation, but then a lot of what we call compilation would also have to be called transpilation.

Well, Java can ALSO be AOT-compiled to machine code, which is more popular nowadays (e.g., GraalVM).


Likely, comparing on `char` ('+') would be slower, as it requires decoding the `&str` into `char`s, which comes with significant overhead (I've seen 9% on a fairly optimized parser). Ideally, when your grammar is 7-bit ASCII (or any 8-bit UTF-8 values are opaque to your grammar), you instead parse `&[u8]` and do `u8` comparisons, rather than `char` or `&str` comparisons.


Likely the reason `split_whitespace` is so slow is:

> ‘Whitespace’ is defined according to the terms of the Unicode Derived Core Property White_Space.

If they used `split_ascii_whitespace` things would likely be faster.

Switching parsing from `&str` to `&[u8]` can offer other benefits. In their case, they do `&str` comparisons and are switching to `u8` comparisons. A lot of other parsers do `char` comparisons, which requires decoding a `&str` into `char`s; that can be expensive and is usually not needed, because most grammars can be parsed as `&[u8]` just fine.


> Second (and a much worse problem) are macros. They actually hit the same issue. A macro that expands to 10s or 100s of lines can very quickly take your 10000 line project and turn it into a million line behemoth.

Recently, support was added to help people analyze this.

See https://nnethercote.github.io/2025/06/26/how-much-code-does-...

