
You can use the GNU ABI instead, if you don't want to use the Visual Studio Build Tools.

Not only does NonStop not support Rust, but apparently attempts to port gcc to it have failed, too. So compiling Rust straight to C is pretty much the only option there.

> allowing an end run of the spirit of the GPL in gcc was never going to happen.

You're right:

https://gcc.gnu.org/legacy-ml/gcc/2005-11/msg00888.html

> If people are seriously in favor of LLVM being a long-term part of GCC, I personally believe that the LLVM community would agree to assign the copyright of LLVM itself to the FSF and we can work through these details.


Not your parent.

I never understood the "git cli sucks" thing until I used jj. The thing is, git's great, but it was also grown, over time, and that means that there's some amount of incoherence.

Furthermore, it's a leaky abstraction; that is, some commands only make sense if you grok the underlying model. See the perennial complaints about how 'git checkout' does more than one thing. It doesn't, but that only becomes clear once you understand the underlying model. If you think about it from a workflow perspective, it feels inconsistent. Hence why newer commands (like git switch) speak to the workflow, not to the model.
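
For example (branch and file names made up), these are the two different jobs people see 'git checkout' doing, and the newer commands that split them along workflow lines:

  git checkout my-branch      # "switch my working copy to that branch"
  git checkout -- notes.txt   # "throw away my local changes to that file"

  git switch my-branch        # only switches branches
  git restore notes.txt       # only restores file contents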

Furthermore, some features just feel tacked on. Take stashing, for example: stashes are pseudo-commits that exist outside of your real commit graph. As a feature, it doesn't feel well integrated into the rest of git.

Rebasing is continually re-applying `git am`. This is elegant in a UNIXy way, but is annoying in a usability way. It's slow, because it goes through the filesystem to do its job. It forces you to deal with conflicts right away, because git has no way of modelling conflicts in its data model.

Basically, git's underlying model is great, but not perfect, and its CLI was grown, not designed. As such, it has weird rough edges. But that doesn't mean it's a bad tool. It's a pretty darn good one. But you can say that it is while acknowledging that it does have shortcomings.


It's more than that; it's also git's incredibly unfriendly way of naming things.

Take, for example, the "index", which is actually a useful thing with a bad name. Most tutorials start by explaining that the index is a staging area on which you craft your commit. Then why is it called the index and not the staging area? It's an incredibly bad name right from the get-go. If you ask what the word "index" means in computer science, people usually think of indices into an array, or something like a search index that enables faster searching. Git's index doesn't do any of that.
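
For what it's worth, the workflow the name fails to evoke is simple enough (file name made up):

  git add -p src/main.c    # stage just the hunks you want in the commit
  git diff --cached        # review what's currently staged
  git commit               # turn exactly that staged state into a commit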

And git's model leaks so much implementation detail that many people mistake these details for essential concepts; there are people who will tell you that any version control system without an "index" is not worth using, because without one you can't craft beautiful commits. That's patently false, as jj and hg show. This useful concept with a bad name becomes one amorphous thing that people cannot see past.


> I've never experienced any other software which has such a powerful mental model.

I hate to be that guy, but you should spend some time with jj. I thought the same, but jj takes this model, refines it, and gives you more power with fewer primitives. If you feel this way about git and give jj an honest try, I feel like you'd appreciate it.

Or maybe not. Different people are different :)


(not your parent)

> How long did it take you to become proficient?

As with anything, it varies: I've heard some folks say "a few hours" and I've had friends who have bounced off two or three times before it clicks.

Personally, I did some reading about it, but it didn't really stick. One Saturday morning I woke up early and decided to give it a real try; I was mostly good by the end of the day, and swore off git entirely a week later.

> I assume the organization uses git and you use jujitsu locally, as a layer on top?

This is basically 100% of usage outside of Google, yeah. The git backend is the only open source one I'm aware of. Eventually that will change...


GitButler and jj are very friendly with each other, as projects, and are even teaming up with Gerrit to collaborate on the change-id concept, and maybe even have it upstreamed someday: https://lore.kernel.org/git/CAESOdVAspxUJKGAA58i0tvks4ZOfoGf...

This is exciting; convergence is always good. But I'm confused about the value of putting the tracking information in a git commit header, as opposed to a git trailer [1] where it currently lives.

In both cases, it's just metadata that tooling can extract.
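
For anyone unfamiliar with the distinction, here's a rough sketch (the values, and the header name, are made up for illustration; the exact proposed form is in the thread above): a trailer is just the last lines of the commit message, while a header sits next to tree/parent/author in the raw commit object.

  tree 4b825dc6...
  parent 1a2b3c4d...
  author Alice <alice@example.com> 1700000000 +0000
  committer Alice <alice@example.com> 1700000000 +0000
  change-id kxnwppqr                <- header (illustrative)
  
  Teach frobnicator to frob
  
  Change-Id: I0123abcd...           <- trailer, part of the message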

Edit: then again, I've dealt with user error with the fragile semantics of trailers, so perhaps a header is just more robust?

[1] https://git-scm.com/docs/git-interpret-trailers


Mostly because it is jarring for users who want to interact with tools that require these footers, and the setups to apply them -- like Gerrit's change-id script -- are often annoying, for example when you need to support Windows users without relying on stuff like bash. Now, I wrote the prototype integration between Gerrit and Jujutsu (which is not mainline, but people use it), and it applies Change-Id trailers automatically to the commit messages of any commits you send out. It's not the worst thing in the world, but it is a little fiddly bit of code.

But ignore all that: the actual _outcome_ we want is that it is just really nice to run 'jj gerrit send' and not think about anything else, and that you can pull changes back in (TBD) just as easily. I was not ever going to be happy with some solution that was like, "Do some weird git push to a special remote after you fix up all your commits or add some script to do it." That's what people do now, and it's not good enough. People hate that shit and rail at you about it. They will make a million reasons up why they hate it; it doesn't matter though. It should work out of the box and do what you expect. The current design does that now, and moving to use change-id headers will make that functionality more seamless for our users, easier to implement for us, and hopefully it will be useful to others, as well.

In the grand scheme it's a small detail, I guess. But small details matter to us.


Thanks for the explanation!

While you're around, do you know why Jujutsu created its own change-id format (the reverse hex), rather than use hashes (like Git & Gerrit)?


I don't know if it's the only or original reason, but one nice consequence of the reverse hex choice is that it means change IDs and commit IDs have completely different alphabets ('0-9a-f' versus 'z-k'), so you can never have an ambiguous overlap between the two.
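
To make that concrete, here's a rough sketch of the reverse hex idea (my own illustration, not jj's actual code): each nibble from 0 through 15 maps to a letter from 'z' down to 'k'.

  fn to_reverse_hex(bytes: &[u8]) -> String {
      bytes
          .iter()
          .copied()
          // Split each byte into its two nibbles...
          .flat_map(|b| [b >> 4, b & 0x0f])
          // ...and map nibble 0 -> 'z', 1 -> 'y', ..., 15 -> 'k'.
          .map(|nibble| (b'z' - nibble) as char)
          .collect()
  }

  fn main() {
      // 0x00 -> "zz", 0xff -> "kk": the output alphabet never overlaps
      // the 0-9a-f alphabet used for commit hashes.
      println!("{}", to_reverse_hex(&[0x00, 0xff]));
  }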

Jujutsu mostly doesn't care about the real "format" of a ChangeId, though. It's "really" just an arbitrary Vec<u8> that the backend itself has to define and describe a little bit; the example backend has a 64-byte change ID, for instance.[1] To the extent the reverse hex format matters, it's mostly used in the template language for rendering things to the user. But you could also extend that with other render methods too.

[1] https://github.com/jj-vcs/jj/blob/5dc9da3c2b8f502b4f93ab336b...


Yes, it was to avoid ambiguity between the two kinds of IDs. See https://github.com/jj-vcs/jj/pull/1238 (see the individual commits).

Interesting, that was just a few short months before I showed up. :)

I'm not an expert on this corner of git, but a guess: trailer keys are not unique, that is

  Signed-off-by: Alice <alice@example.com>
  Signed-off-by: Bob <bob@example.com>
is totally fine, but

  Change-id: wwyzlyyp
  Change-id: sopnqzkx
is not.

I've also heard of issues with people copy/pasting commit messages and including bits of trailers they shouldn't have, I believe.


~I think it's more that not all existing git commands (rebase, am, cherry-pick?) preserve all headers.~

ignore, misread the above


That's a downside of using headers, not a reason for using them. If upstream git changes to help with this, it would involve having those commands preserve the headers. (Though for cherry-pick there are good arguments both for preserving the ID and for generating a new one.)

ah, I'm sorry, I misread your comment (and should have mentioned the cherry-pick thing anyway).

It’s all good!

> which if I understand correctly triggers UB.

Yes, your parent's example would be UB, and require unsafe.
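
(Not the exact code from upthread, just a minimal sketch of the point: to dereference a null pointer in Rust you have to reach for raw pointers and an `unsafe` block, so the hazard is at least visible in the source.)

  fn main() {
      let null_ptr: *const i32 = std::ptr::null();

      // Dereferencing a null raw pointer is undefined behavior, and Rust
      // refuses to compile this dereference outside an `unsafe` block.
      let value = unsafe { *null_ptr };
      println!("{value}");
  }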


> whether or not it was better to design around the fact that there could be the absence of “something” coming into existence when it should have been initialized

So this is actually why "no null, but optional types" is such a nice spot in the programming language design space. Because by default, you are making sure it "should have been initialized," that is, in Rust:

  struct Point {
      x: i32,
      y: i32,
  }
You know that x and y can never be null. You can't construct a Point without those numbers existing.

By contrast, here's a point where they could be:

  struct Point {
      x: Option<i32>,
      y: Option<i32>,
  }
You know by looking at the type whether it's ever possible for something to be missing.

> Are there any pitfalls in Rust when Optional does not return anything?

So, Rust will require you to handle both cases. For example:

    let x: Option<i32> = Some(5); // adding the type for clarity

    dbg!(x + 7); // try to debug print the result
This will give you a compile-time error:

     error[E0369]: cannot add `{integer}` to `Option<i32>`
       --> src/main.rs:4:12
        |
    4   |     dbg!(x + 7); // try to debug print the result
        |          - ^ - {integer}
        |          |
        |          Option<i32>
        |
    note: the foreign item type `Option<i32>` doesn't implement `Add<{integer}>`
It's not so much "pitfalls" exactly, but you can choose to do the same thing you'd get in a language with null: you can choose not to handle that case:

    let x: Option<i32> = Some(5); // adding the type for clarity
    
    let result = match x {
        Some(num) => num + 7,
        None => panic!("we don't have a number"),
    };

    dbg!(result); // try to debug print the result
This will successfully print, but if we change `x` to `None`, we'll get a panic, and our current thread dies.

Because this pattern is useful, there's a method on Option called `unwrap()` that does this:

  let result = x.unwrap();
And so, you can argue that Rust doesn't truly force you to do something different here. It forces you to make an active choice, to handle it or not to handle it, and in what way. Another option, for example, is to return a default value. Here it is written out, and then with the convenience method:

    let result = match x {
        Some(num) => num + 7,
        None => 0,
    };

  let result = x.unwrap_or(0);
And you have other choices, too. These are just two examples.
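
One more, for completeness (this is just the standard Option API): if you also want the `+ 7` from the earlier examples, you can combine a transformation with a default, which is exactly equivalent to the written-out match above:

  let result = x.map(|num| num + 7).unwrap_or(0);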

--------------

But to go back to the type thing for a bit, knowing statically you don't have any nulls allows you to do what some dynamic language fans call "confident coding," that is, you don't always need to be checking if something is null: you already know it isn't! This makes code more clear, and more robust.

If you take this strategy to its logical end, you arrive at "parse, don't validate," which uses Haskell examples but applies here too: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
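
A tiny Rust flavor of that idea (my own sketch; the post itself uses Haskell, and these names are made up): parse the input once into a type that can only hold a valid value, and the rest of the code never has to re-check it.

  // Once you have a UserId, it is valid by construction.
  struct UserId(u64);

  fn parse_user_id(input: &str) -> Option<UserId> {
      // The only place the "is this actually a user id?" question is asked.
      input.trim().parse::<u64>().ok().map(UserId)
  }

  fn greet(user: &UserId) {
      // No None/null check needed here.
      println!("hello, user #{}", user.0);
  }

  fn main() {
      match parse_user_id("42") {
          Some(user) => greet(&user),
          None => eprintln!("that wasn't a user id"),
      }
  }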


Here's a program that uses only std::unique_ptr:

  #include<iostream>
  #include<memory>
  
  int main() {

      std::unique_ptr<int> null_ptr;
    
      std::cout << *null_ptr << std::endl; // Undefined behavior
  }
Clang 20 compiles this code with `-std=c++23 -Wall -Werror`. If you add `-fsanitize=undefined`, it will print

  ==1==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55589736d8ea bp 0x7ffe04a94920 sp 0x7ffe04a948d0 T1)
or similar.
