Hacker News | epage's comments

In Rust, it is frowned upon to pin to an exact version. We do encourage people to specify what semver version range will work. We don't have great support for depending on and verifying multiple-major version ranges when a library broke compatibility but not in a way that affects you.
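For instance, a Cargo dependency requirement already names a semver-compatible range by default, so pinning with `=` is rarely needed (the version numbers here are illustrative):

```toml
[dependencies]
# Caret (default) requirement: accepts any semver-compatible release,
# i.e. >=1.2.3, <2.0.0. This is the encouraged style.
serde = "1.2.3"

# Discouraged: pins to exactly 1.2.3, which can force the resolver
# into conflicts with other crates in the dependency graph.
# serde = "=1.2.3"
```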


Is this being tracked somewhere?


We want to make our software available to any system without every library maintainer being a packaging expert in every system.

The user experience is much better when working within these packaging systems.

You can control versions of software independent of the machine (or what distros ship).

Or in other words, the needs of software development and software distribution are different. You can squint and see similarities, but they fill different roles.


So every user has to be an expert in every package manager instead? Makes sense. Make life easy for the developer and pass the pain on to thousands of users. 20 years ago you may or may not support RPM and DEB and for everyone else a tarball with a make file that respected PREFIX was enough. (Obviously a tarball doesn’t support dependencies.)


Why would users need to be an expert in them?


Real world benchmarks also wouldn't be great because they would be showing how well it works in someone else's program, rather than yours.

This at least gives you an idea of the relative cost of different operations, so you can consider which operations are frequent in your program and then benchmark a couple from there, rather than everything.
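As a sketch of what that looks like, here is a hypothetical micro-benchmark harness (a real project would use something like criterion, which handles warm-up and statistical noise; the operations and iteration counts here are illustrative):

```rust
use std::time::Instant;

// Times a closure over `iters` runs and reports the average
// nanoseconds per iteration. Only a sketch of the idea.
fn bench<F: FnMut()>(name: &str, iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    let ns_per_iter = start.elapsed().as_nanos() as f64 / iters as f64;
    println!("{name}: {ns_per_iter:.1} ns/iter");
    ns_per_iter
}

fn main() {
    let s = "a string long enough to require a heap allocation".to_string();
    // Compare the relative cost of two operations you actually use,
    // rather than benchmarking everything:
    let clone_cost = bench("String::clone", 10_000, || {
        std::hint::black_box(s.clone());
    });
    let len_cost = bench("str::len", 10_000, || {
        std::hint::black_box(s.len());
    });
    assert!(clone_cost >= 0.0 && len_cost >= 0.0);
}
```

`black_box` keeps the optimizer from deleting the measured work; the absolute numbers are machine-dependent, so only compare them against each other.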


Also, smol_str was removed from the comparison because matklad, the author of smol_str, suggests ecow:

> I’d rather say the opposite: users of those crates should switch to ecow. It is exactly what smol_str would have been, if it were a proper crate with a stable API, rather than an implementation detail of rust-analyzer.
>
> It’s a drop-in replacement for String, with O(1) clone and SSO, and I believe this is all you need. Other crates either have needlessly restricted API (no mutation), questionable implementation choices, or a bunch of ad hoc traits in the API.

https://www.reddit.com/r/rust/comments/117ksvr/ecow_compact_...


> To the extent there is an exception, testing-only code may be. Testing-only code has very different constraints than production code anyhow. Even then, though, I still find that refactoring problem arises, and test code needs to be refactorable too.

For me, I avoid defining anything within a function except when that thing being defined is what is being tested in a test, e.g. https://github.com/clap-rs/clap/blob/87647d268c8c27e3298b2c0...


As someone who was advocating for a similar warning in Rust (finally added to clippy in 1.78), I'm glad to see this improvement in `go vet`.

> The go vet subcommand now includes the stdversion analyzer, which flags references to symbols that are too new for the version of Go in effect in the referring file. (The effective version is determined by the go directive in the file’s enclosing go.mod file, and by any //go:build constraints in the file.)
>
> For example, it will report a diagnostic for a reference to the reflect.TypeFor function (introduced in go1.22) from a file in a module whose go.mod file specifies go 1.21.


Don't most Rust crates solve this by running MSRV builds/tests?

Or is the goal to catch it, e.g., in a precommit hook?


Clippy is both faster in CI and can be integrated into IDE feedback.
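For example, Clippy's MSRV check (the `incompatible_msrv` lint mentioned above as added in 1.78) reads the minimum supported Rust version from the package manifest, so one declaration drives both CI and in-editor feedback (the crate name and versions here are illustrative):

```toml
[package]
name = "my-crate"
version = "0.1.0"
edition = "2021"
# Declares the minimum supported Rust version; Clippy's
# incompatible_msrv lint flags uses of APIs newer than this.
rust-version = "1.70"
```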


At least for servo, it started as a research project. I imagine a lot of time was spent trying out ideas for WebRender or parallel CSS (two parts that made it into Firefox).

I also assume starting a browser is easier than finishing a browser, but I can't speak to where either is at on the difficult-to-finish spectrum.


The more I've been doing open source maintenance and contributions where there isn't as much context between the code author and reviewer, the more I've been pushing for a little more than this.

- Add tests in a commit *before* the fix. They should pass, showing the behavior before your change. Then, the commit with your change will update the tests. The diff between these commits represents the change in behavior. This helps the author test their tests (I've written tests thinking they covered the relevant case but didn't), the reviewer to more precisely see the change in behavior and comment on it, and the wider community to understand what the PR description is about.

- Where reasonable, find ways to split code changes out of feature / fix commits into refactor commits. Reading a diff top-down doesn't tell you anything; you need to jump around a lot to see how the parts interact. By splitting it up, you can more quickly understand each piece, and the series of commits tells a story of how the feature or fix came to be.

- Commits are atomic while PRs tell a story, as long as it doesn't get too big. Refactors usually lead towards a goal, and having them tied together with that goal helps provide the context to understand it all. However, this has to be balanced with the fact that larger reviews mean more things are missed on each pass, and it's different things on each pass, causing a lot of "20 rounds of feedback in and I just noticed this major problem".

An example of these is a recent PR of mine against Cargo: https://github.com/rust-lang/cargo/pull/14239

In particular, the refactors leading up to the final change made it so the actual fix was a one line change. It also linked out to the prior refactors that I split out into separate PRs to keep this one smaller.


I agree, but I usually explain (and do) this from the side of fixing a bug, but where the test suite is currently passing: first commit adds the failing test (shows that it would have caught the error), second commit makes it pass.

Also agree with GP that each commit on master should be passing/deployable/etc., but I don't see why they can't be merge commits of a branch that wasn't like that.


That still interferes with `git bisect`. Make the test pass in history but then make it fail in your working directory and work to get it to pass before committing.


No it doesn't? Only on your unmerged branch anyway, which seems either no big deal or desirable to me.


I absolutely love that testing suggestion - I'd never considered shipping a whole separate commit adding the OLD test first, but having a second commit that then updates that test to illustrate the change in behavior is such an obviously good idea.


imo revision is worse.

I feel like the best terms are patch id and patch revision id.


I mean none of the options are great, imo.

- "commit" is overloaded with git, and jj still uses commits for other things under the hood.
- "patch id" is overloaded with patch files, and jj still uses git's snapshots, not patches (unlike darcs/pijul, iiuc).
- "patch revision id" isn't bad, but it's a bit wordy.
- "change id" just seems vague, since it's unclear where one change begins and another ends.

"revision" at least captures the idea that you are revising the same piece of functionality, but then you might expect each snapshot/commit to be a different revision, and not have the same ID, which also isn't quite right.


I am sad I read this, because patch is perfect, but I doubt they will change the language again.


patch sounds too specific... like an actual patch file tied to the actual contents of the patch.

change is probably the right word, you want to change something, the exact operations of the change (multiple revisions of different patches) can evolve over time.


Maybe because I have never used an actual patch file, but patch just feels right to me. As an end user, a patch is an intentional delta blob resulting in some difference to the software. Writing software is just organizing those deltas. If I need to cherry pick between branches, pulling a patch from one to another feels more right than “changes” as a collective object.

Oh well, naming things is hard.


Isn't sapling only compatible with git servers and not local repos? That is a huge impediment with so many tools expecting git. That is what I find impressive about jj: it is compatible with local repos.


That is true. I have a separate clone of my repos with sapling and git, and 98% of the time I use the sapling one with sapling and, most importantly, interactive smartlog (which is 1000 times better than most git tooling, so would be reason alone to do this). For the few times I need my git tools, I sync the git clone via the git remote and then use the git tools on that repo.

