The plan for how to add signed commits is there, and the work isn't that hard (especially as gitoxide continues to add functionality); it just has to be pushed over the line, and I've been a bit slack on getting that going.
There's definitely nothing foundational blocking it, though, and it will happen one day. In the meantime, you'd be welcome to give it a go.
All the colours can be adjusted or turned off entirely in the config [1]. A number of different graph styles are supported [2], and failing that, you can completely customise the output template [3].
$XDG_CONFIG_HOME/jj/config.toml is supported, that's where I keep mine.
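For reference, a minimal sketch of what that file can look like (key names follow the jj config docs; the values and colour label are illustrative, and exact options may vary by version):

```toml
# $XDG_CONFIG_HOME/jj/config.toml
[user]
name = "Your Name"
email = "you@example.com"

[ui]
color = "never"        # "always" / "never" / "auto"

[ui.graph]
style = "ascii"        # alternative graph styles, e.g. "curved"

[colors]
commit_id = "green"    # per-element colour overrides
```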
By default the working copy is updated whenever you run jj, but watchman is also supported (recently added) [4].
In my experience, the command to fix stale workspaces only needs to be run in exceptional cases: when a bug was triggered and a command failed to complete, or when you're doing manual poking around.
It's a mindset shift, but it's well worth it in my opinion.
On a slightly different track from the other comments: make the computer work for you. With reduced memory retention, good static types, lots of tests, and libraries/APIs that are hard to use incorrectly are an enormous benefit.
The audits are designed to be used at a per-project level and contributed directly into the VCS repo (allowing you to use git signing, for example), so I don't quite understand what additional offline cryptographic signatures are required here. Cargo's lockfiles already contain a hash of each crate, which prevents the project from accidentally getting an altered version, and SHA validation is being considered as part of vet as well: https://github.com/mozilla/cargo-vet/issues/116
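To make the "audits live in the repo" point concrete, here's a rough sketch of a checked-in cargo-vet audit entry (the crate name and details are illustrative; the cargo-vet book has the exact schema):

```toml
# supply-chain/audits.toml -- an ordinary file in the repo, so it is
# covered by the same git signing and code review as everything else.
[[audits.some-crate]]
who = "Alice Example <alice@example.com>"
criteria = "safe-to-deploy"
version = "1.2.3"
```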
Interestingly, since you can't pass an allocator to an operator call, there is an extra guarantee about whether a hypothetical overloaded operator can allocate: if you implement it for a vector type which doesn't contain an allocator reference, then you're sure that its operators won't allocate.
For what it's worth, I find a lot of Zig code benefits from switching to u32/u64 indexes into an array instead of using pointers. This is only really doable if your container doesn't delete entries (though you can tombstone them), but the immediate benefit is that you don't have pointers, which eliminates the use-after-free errors you mentioned.
The other benefit is that you can start to use your ID across multiple containers to represent an entity that has data stored in multiple places.
See [1] for a semi-popular blog post on this, and [2] for a talk by Andrew Kelley (Zig creator) on how he's rebuilding the Zig compiler using this technique.
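A minimal Zig sketch of the idea (the `Monster` type and names are made up for illustration; the ArrayList API here matches recent Zig releases, around 0.11-0.13):

```zig
const std = @import("std");

const Monster = struct {
    hp: i32,
};

// Hand out indexes, not pointers. An index stays valid even if a later
// append moves the list's backing memory; a pointer would not.
const MonsterIndex = u32;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var monsters = std.ArrayList(Monster).init(allocator);
    defer monsters.deinit();

    try monsters.append(.{ .hp = 10 });
    const id: MonsterIndex = @intCast(monsters.items.len - 1);

    // Resolve the index only at the point of use.
    monsters.items[id].hp -= 3;
    std.debug.print("hp = {}\n", .{monsters.items[id].hp});
}
```

The same `id` can then key into other arrays (positions, names, etc.), which is the multi-container benefit mentioned above.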
The alternate path is one the free software community has been pushing towards for decades: the ability to voluntarily associate with (and disassociate from) trusted communities; software that works for users and not the other way around; and autonomy coupled with mutual aid/benefit.
Compare the focus of the GNU, Mastodon, Matrix, etc. projects to the blockchain world, and the fundamental difference is that they're not trying to create a world in which we don't trust anyone except the idea that human nature runs on greed and can be exploited by making us pay (spend tokens) for everything.
> the fundamental difference is they're not trying to create a world in which we don't trust anyone
This. Massive effort and talent seems to be directed towards what is intuitively a dead end: trust is something between people, not something between devices. Any protocol, no matter how ingeniously crafted, will be subverted when it leaves the silicon layer and hits the human layer (unless you create the ultimate dystopia where people are collared and tracked on a permanent basis, or something equivalent).
Nevertheless, imho the human-centric vision of computing is misfiring, losing battle after battle (from self-sovereign computing, to social media, to mobile, etc.), and at some point it will lose the war. Maybe the silver lining of the cryptofunded "web3" marketing onslaught for "alternatives" is to give the real deal one more window of opportunity...
>making us have to pay (spend tokens) for everything.
If you have to pay, or someone else has to pay, for something (in FOSS it is just done in a more distributed way), why use a massively corrupt and degenerate currency to do it?
In addition to the other replies: Terraform has used ARM on Azure for a while now, which is similar to this, a unified API for all Azure Resource Management (hence ARM), and it hasn't caused any issues for them there.
The pain of not being able to do complex queries in an Elastic world makes this a pretty logical conclusion. I'd also love to see the ability to collect metrics and traces in ClickHouse, which would let me easily join across dynamic service boundaries to collate the information I need.
For example, being able to correlate a customer ID (stored in a log) to a trace ID (stored in the request trace) to Snowflake warehouse usage (stored as metrics) to a subset of the pipeline (mixed between logs and traces) to get a full understanding of how much each customer cost us in terms of Snowflake usage would be immensely valuable.
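That correlation could be sketched as a single ClickHouse query, assuming logs, traces, and metrics all landed in tables sharing join keys (every table and column name here is hypothetical):

```sql
-- Hypothetical sketch: table and column names are illustrative, and
-- assume logs, traces, and warehouse metrics share join keys.
SELECT
    l.customer_id,
    sum(m.credits_used) AS snowflake_credits
FROM logs AS l
INNER JOIN traces  AS t ON t.trace_id  = l.trace_id
INNER JOIN metrics AS m ON m.warehouse = t.warehouse
WHERE l.timestamp >= now() - INTERVAL 1 DAY
GROUP BY l.customer_id
ORDER BY snowflake_credits DESC;
```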