Completely agree. Look at some videos on YouTube: 20,000 comments on brand-new videos sometimes. A lot of good people are commenting on the internet. The problem is that trust in public institutions is at an all-time low, and that is leading to much more doom and gloom; those of us who have been around since the 2000s can feel the difference in the comment sections.
DankLinux provides packages for the Niri scrolling Wayland compositor and related tools, targeting popular mainstream distros: Fedora, openSUSE, Debian, and Ubuntu. Niri and related tools are still too bleeding-edge to appear in Debian or Ubuntu's official repos, so if you're interested in Niri and don't want to assemble the long list of prerequisites to compile everything yourself, this is an easy way to try it out.
While it pains me as a normie Gen-Xer to recommend anything "dank", Niri along with DankMaterialShell [1] for configuration and launching apps lets you quickly put together a reasonably complete desktop from this repo. I've been building my own Niri binaries for a few months now, and I'm very happy that I can finally retire my build environment.
There's more than one way to do it, but the very normal UX is that you can just scroll through the diff file-by-file and stage/stash/drop each hunk individually by placing your cursor over it and issuing the appropriate command. You can do the same with files, staging/stashing/dropping changes to a file by placing the cursor on its name and issuing a command.
And you can even edit the content you stage, so that what you stage differs from what is in the working tree. Having different content in the index vs. the working tree is a core feature of the index, which I think jj just doesn't support?
jj doesn’t support it in the sense that there’s no special index feature. You can use a workflow where a commit represents the index (and that’s basically the most common workflow). This means you don’t need a separate feature; you just use the same tools that slice and dice any commit to work with your “index”.
As an stgit user, this seems like a weird workflow to me. I never want to have that many uncommitted changes just floating around that will eventually belong to multiple commits. If I'm halfway through something and realise "oh, it would be good to do xyz first" I don't want to have xyz's changes and my half-way-through changes all mixed up -- I want to pop the half-way-through stuff, do the xyz stuff and commit it, then re-push the half-way-through stuff to keep working. If I'm looking at a diff and picking out parts of it then I've done something wrong -- I have a tool for doing it but I only need to use it every couple of months...
I end up with "neatly separated does-one-thing commits" but I get there by building them up as I go along, not by writing a ton of code and then trying to split it up afterwards.
This sort of flow is very nice in jj, primarily because of the “no index” plus “auto commit” behavior. I’ll regularly go “oh yeah, I want to go do that,” just go do it, and then come back to right where I left off, since my work is already saved.
Yeah, I get the impression jj is good for this, and if I were using raw git then it would be a massive upgrade. Luckily for me stgit already does what I want in this area so I have no strong need to investigate alternatives, but if stgit ever bitrots then jj might be a useful next thing.
StGit maintainer here. I have been a jj user for over a year now. It has proven superior to StGit for all of my workloads. I even use jj when maintaining StGit.
An incomplete list of wins vs StGit includes:
- jj makes managing multiple branches fluid, whereas stg has limited tools for managing patches between stacks. 'stg pick' is largely all there is. It's a real dance to move a patch between stacks.
- jj has a much better system for naming changes. I'm super jealous of how jj solved this problem. StGit requires you to name the patches. I added the feature that allows StGit to refer to patches by relative or absolute index in addition to by name. jj's immutable change IDs, referenceable by unambiguous prefix, are the correct answer to this problem.
- 'jj rebase' is so vastly superior to stg push/pop/sink/float for reordering changes that I don't even know where to start. It wasn't immediately obvious just how flexible, simple, and powerful 'jj rebase' was when I first started using jj, but I have since learned that it is in its own league relative to StGit's stack-ordering story.
- Similarly 'jj squash' makes 'stg squash' look amateurish.
I could go on. If you're a StGit user, you owe it to yourself to give jj a proper try.
On that first point, there's a use case I sometimes have where stgit feels very clunky:
* I have a branch foo with a stack of patches which are the thing I'm working on, based on a master branch
* I have a separate stack of patches on a branch bar (let's say this is a feature that interacts with foo but it's not ready to upstream yet or maybe at all)
* I want to test and work on the combination of foo and bar and make changes that might need to be updates to some patch in foo, or to some patch in bar
At the moment I pick all the patches in foo onto bar in order to do the work and updates in this combined branch, squashing fixes and changes into appropriate patches. Then once I'm happy I go back to the foo branch, blow away the old patches and re-pick them off my combined branch.
This works but feels super clunky -- does jj do better here? That would be a real incentive to try it out.
For the rest, they don't sound like they're strong enough to beat "I've used stgit for 10 years and have a solid workflow with it".
And I just scanned the jj rebase docs and it looks awful, everything I moved to stgit to get away from. I do not want to think about managing a patch stack in terms of "move a bunch of revisions to a different parent". I like having a straightforward stack of patches that I can push, pop and reorder and list. Don't make me think about graphs of commits and multiple rebase suboptions and all that for something that I do all the time in my main workflow, please.
Combined with jj absorb, some people just work this way all the time, even.
> I like having a straightforward stack of patches that I can push, pop and reorder and list.
You can work this way too; what you'd want is `jj rebase` with -A, -B, and -r: -r for the revision, and -A and -B for the revisions you want to move it after or before. This lets you reorder things however you want pretty straightforwardly. I tend to work in a stack of patches too, or at least, way more than I used to.
What I mean is that I do not want a single "swiss army knife" rebase command that does everything with lots of options to remember. It's fine to have that in the toolbox for the once in six months weird requirement. But for the simple cases I do every day I want simple commands that each do one thing and have memorable names.
If I'm understanding you correctly, you can have both. If there are specific rebase types that you perform regularly, you can create aliases for them and give them whatever name is meaningful to you.
For example, I frequently use `jj up` to rebase the current branch on main. Likewise, `jj pop` rebases just the current commit (popping it from its current place). I even have a `jj ppop` - better name suggestions are welcome - which does this but for the parent commit.
I suspect that the one-off effort to write your own commands would take no longer than it would take to read the documentation if the commands already existed, but with the hopeful extra benefit of giving you a better understanding of how to use rebase for those once-in-six-months weird requirements when they arise.
But to be clear, I'm not suggesting you must or even should put in this effort if you have something that works for you. My reply is mostly so that anyone who comes across this discussion and sees Steve's mention of -A, -B, etc. isn't scared off by them. While those flags are always there for you, you can wrap the power they give you in single-purpose commands that don't require you to think.
---
For anyone wondering about the aliases I mentioned: they can be dropped into your jj config with `jj config edit --user`.
Most of the time in jj you don’t even rebase manually, because it’s automatic. And the vast majority of the time I’m using only one or two flags when I do call it. You might even need only two in this case (before alone might be fine?). I just use both before and after because it’s so easy to remember the names.
Anyway you should use the tools you like, it’s all good.
Anyone interested in zippers, or, more significantly for this website, how new technologies are invented, adopted, and mature, should read "Zipper: An Exploration in Novelty" by Robert D. Friedel [1]
YKK is kind of one of the heroes of the story. The zipper was pioneered by the U.S. company Talon Fastener, which was acquired and parted out in the 1970s. YKK bought the legacy machinery for manufacturing zippers and went on to dominate the global market.
I did something like this for Python [1]. The application I worked on at the time had a feature allowing users to import and export their data as a JSON document, and users often had enough data to make this cumbersome, especially with serialization and deserialization overhead. My implementation can also generate JSON documents as they stream out, from Python generators. The incremental JSON parsing was a little difficult to use, but incremental generation was an immediate win. We generated JSON documents from database results row-by-row and streamed the output to the web server, never producing the entire document in memory.
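To make the incremental-generation idea concrete, here's a minimal sketch of the shape of it - not the library from [1], and the function name is made up - streaming a JSON array out of a Python generator one element at a time:

```python
import json
from typing import Iterable, Iterator

def stream_json_array(rows: Iterable[dict]) -> Iterator[str]:
    """Yield chunks of a JSON array, one element per row, never building the whole document."""
    yield "["
    first = True
    for row in rows:
        if not first:
            yield ","
        yield json.dumps(row)
        first = False
    yield "]"

# In practice the generator would be fed by a database cursor and handed to a
# streaming HTTP response; a toy generator stands in here.
for chunk in stream_json_array({"id": i} for i in range(3)):
    print(chunk, end="")
print()
```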
I had a surprising interaction with Gemini 2.5 Pro that this project reminds me of. I was asking the LLM for help using an online CAS system to solve a system of equations, and the CAS system wasn't working as I expected. After a couple back and forths with Gemini about the CAS system, Gemini just gave me the solution. I was surprised because it's the kind of thing I don't expect LLMs to be good at. It said it used Python's sympy symbolic computation package to arrive at the solution. So, yes, the marriage of fuzzy LLMs with more rigorous tools can have powerful effects.
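For anyone curious what that looks like under the hood, this is roughly the kind of sympy program involved (the equations below are arbitrary placeholders, not the ones from my CAS session):

```python
# Placeholder system of equations, solved symbolically rather than by the LLM itself.
from sympy import symbols, Eq, solve

x, y = symbols("x y")
print(solve([Eq(x + 2*y, 7), Eq(3*x - y, 7)], [x, y]))  # {x: 3, y: 2}
```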
Just like humans... we are not so good at hard number crunching, but we can invent computers that are amazing at it. And with a lot of effort we can make a program that uses a whole lot of number crunching to be ok at predicting text but kind of bad at crunching hard numbers. And then that program can predict how to create and use programs which are good at number crunching.
> That's when A.I. starts advancing itself and needs humans in the loop no more.
You've got to put the environment back in the loop though; it needs a source of discovery and validity feedback for ideas. For math and code it's easy, for self-driving cars doable but not easy, for business ideas - how would we test them without wasting money? It varies field by field: some allow automated testing, others are slow, expensive, and rate-limited to test.
Simulation is the answer. You just need a model that's decent at economics to independently judge the outcome, unless the model itself is smart enough. Then it becomes a self-reinforcing training environment.
Now, depending on how good your simulation is, it may or may not be useful, but still, that's how you do it. Something like https://en.wikipedia.org/wiki/MuZero
That requires a lot of human psychology and advanced hard economic theory (not the fluffy academic kind). With a human-controlled monetary supply and most high-level business requiring illegal and immoral exploitation of law and humans in general, it's not a path machines can realistically go down, or one we'd even want machines treading down.
Think scams and pure resource extraction. They won't consider many impacts outside of bottom line.
A simulated environment suggests the possibility of alignment during training, but real-time, real-world data streams are better.
But the larger point stands: you don't need an environment to explore the abstraction landscape prescribed by systems thinking. You only need the environment at the human interface.
The question is where should AI advance itself? Which direction? There are an infinite number of theorems that can be derived from a set of axioms. Infinite. AI can't prove them all. Somebody needs to tell it what it needs to do, and that is us.
Sorting a finite number of elements in a sequence is a very narrow application of AI, akin to playing chess. Usually very simple approaches like RL work totally fine for problems like these, but auto-regression/diffusion models have to take steps that are not well defined at all, where the next step towards solving the problem is not obvious.
As an example, imagine a robot trying to grab a tomato from a table. Its arm extends 1 meter at most, and the tomato is placed 0.98 meters away. Can it grab the tomato from where it stands, or does it need to move closer and only then try to grab it?
That computation is better done deterministically. Deterministic computation is faster, cheaper and more secure. The robot has to prove that tomato_distance + tomato_size < arm_length. If this constraint is not satisfied, it calls move_closer() and then checks tomato_distance + tomato_size < arm_length again.
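A minimal sketch of that loop in Python, with all names and the tomato size as placeholders:

```python
ARM_LENGTH = 1.0    # meters
TOMATO_SIZE = 0.05  # meters, assumed for illustration

def can_reach(tomato_distance: float) -> bool:
    # The constraint from above: distance plus tomato size must fit within arm reach.
    return tomato_distance + TOMATO_SIZE < ARM_LENGTH

def approach_and_grab(tomato_distance: float, step: float = 0.1) -> float:
    """Stand-in for the move_closer() loop: step closer until the constraint holds."""
    while not can_reach(tomato_distance):
        tomato_distance -= step
    return tomato_distance  # distance at which the grab is finally attempted

print(approach_and_grab(0.98))  # 0.98 m fails the check, so the robot steps closer once
```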
From the paper:
> Our system employs a custom interpreter that parses "LLM-Thoughts" (represented as DSL code snippets) to generate First Order Logic programs, which are then verified by a Z3 theorem prover.
I don't understand your claim about 'Deterministic computation is faster, cheaper and more secure.' That's not true at all.
In fact, for many problems the fastest and simplest known solutions are non-deterministic. And in e.g. cryptography you _need_ non-determinism to get any security at all.
Maybe the number-crunching program that the text-generation program creates will, with enough effort, become good at generating text, and will in turn make another number-crunching computer, and then…
It didn't do well critically, but audience scores on many platforms are 60-70%. It came hot on the heels of The Matrix, has similar themes, but nowhere near as ... everything compared to Matrix. I'd bet the only reason it did so poorly critically is due to the timing of the release.
If you like Matrix, Memento, Truman Show, Black Mirror (San Junipero, Bandersnatch), Inception, Interstellar, 12 Monkeys etc. you may also like it. These are not necessarily thematically aligned but based on vibes they cluster near it for me.
I definitely enjoyed it many years ago as a younger person.
Three movies with overlapping themes came out in mid-1999: The Matrix, The Thirteenth Floor, and eXistenZ (probably in that order of box office revenue).
Parent post is talking about symbolic manipulation, not rote number crunching, which is exactly what we're supposed to be good at and machines are supposed to be bad at.
I’d argue we aren’t solving those inverse kinematics/kinetics problems via “number crunching” - rather, our neuromuscular systems are analog, which I don’t usually call “number crunching” in the sense current computers … compute.
As a psychologist, I completely agree. It absolutely is NOT number crunching. Analog computation is primary and dominant in animals. It has to be, for so many reasons. I continue to be amazed at how much IT people do NOT grasp human and animal IT. And that, I would argue, is why so many IT folks keep talking about our supposedly approaching human intelligence in technology. If they really understood human intelligence the absurdity of that statement would keep them quiet. An elegant, artful puppet is still a puppet, and without the personal history context and consciousness we possess, not to mention a vast complex of analogue computation functionality we rely upon, that puppet will only ever be a clever number-cruncher. We are so much more.
Are our brains "analog"? Or are they in fact "digital"? I would think actually more digital than analog. A synapse either triggers or it does not; there is nothing in between. In this sense it is 0 or 1.
Similarly transistor-based logic is based on such thresholds, when current or voltage reaches a certain level then a state-transition happens.
I really like LLM+sympy for math. I have the LLM write me a sympy program, so I can trust that the symbolic manipulation is done correctly.
The code is also a useful artifact that can be iteratively edited and improved by both the human and LLM, with git history, etc. Running and passing tests/assertions helps to build and maintain confidence that the math remains correct.
I use helper functions to easily render from the sympy code to latex, etc.
A lot of the math behind this quantum eraser experiment was done this way.
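Roughly what those helpers look like; the function name and the integral are my own illustration, not the actual project code:

```python
from sympy import symbols, integrate, latex, sin

def to_latex(expr) -> str:
    """Render a sympy expression as a LaTeX string for notes or a paper."""
    return latex(expr)

x = symbols("x")
result = integrate(x * sin(x), x)                      # symbolic work done by sympy, not the LLM
assert (result.diff(x) - x * sin(x)).simplify() == 0   # the kind of assertion that keeps the math honest
print(to_latex(result))  # prints something like: - x \cos{\left(x \right)} + \sin{\left(x \right)}
```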
The combination of LLMs and formal verification tools is pretty interesting. We've been thinking about this for compliance automation - there are a lot of regulatory requirements that could theoretically be expressed as formal constraints. Curious about the performance though. Z3 can be really slow on complex problems, and if you're chaining that with LLM calls, the latency could get rough for interactive use cases.
I get having it walk you through figuring out a problem with a tool: seems like a good idea and it clearly worked even better than expected. But deliberately coaxing an LLM into doing math correctly instead of a CAS because you’ve got one handy seems like moving apartments with dozens of bus trips rather than taking the bus to a truck rental place, just because you’ve already got a bus pass.
I feel like a better analogy is trying to rent a truck to move to a new apartment and, after repeated failures of the trucks not working, they just hire a moving company for you to get you to leave.
I also switched from PaperWM to niri, and I was reluctant to do it, because I really liked not having to configure several different little apps to get a working desktop environment. GNOME comes out of the box with an app launcher, a basic configuration editor, a screen locker, widgets for controlling audio and network, etc. But ultimately, PaperWM was too quirky. For example, sometimes PaperWM and an app would disagree about what size the app's window should be, and the window would resize itself repeatedly. The vertical sizing never worked very well either.
I'm in a similar situation; I think QuickShell [1] could be a compelling option, particularly premade configs for it like DankMaterialShell [2] (which is intended for Niri).
The reason to use a factory instead of 'new' is that a factory can vary the return type, unlike a plain constructor. You need a factory when, based on the constructor parameters or the system configuration, different classes of object may be instantiated. I really have to disagree with your characterization of the GoF book. The premise is that it's a set of designs that can be applied when specific situations are encountered, though, granted, if you're reading the book before you've actually seen the situation where a particular pattern can be applied, it seems abstract. Certainly, popular conceptions of patterns taken out of context make that problem worse.
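A rough Python sketch of that point (class and parameter names are invented, not from the GoF book): the caller asks the factory for a Store and the concrete class varies with configuration, which a plain constructor can't do.

```python
from abc import ABC, abstractmethod

class Store(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class MemoryStore(Store):
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value

class FileStore(Store):
    def __init__(self, path: str) -> None:
        self._path = path
    def save(self, key: str, value: str) -> None:
        with open(self._path, "a") as f:
            f.write(f"{key}={value}\n")

def make_store(config: dict) -> Store:
    """The factory: returns a different concrete Store depending on configuration."""
    if config.get("backend") == "file":
        return FileStore(config["path"])
    return MemoryStore()

store = make_store({"backend": "memory"})  # swap the config and the concrete type changes
store.save("greeting", "hello")
```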
You could do that more or less in Javascript by creating and initializing different member variables in the constructor depending on a constructor argument but I would hate you for it.
yeah, they say in the introduction even that it is to give a language to common patterns that emerge in C++-style OOP. It gives examples of what a pattern might look like, but isn't an instruction manual on what to do. It's to give a common language - "this is a gateway" as opposed to "this is where stuff goes in", or "front-door", or whatever tribal names/descriptions people come up with.
Outside of maybe NYC, taxi service in the U.S. was totally unreliable before Uber/Lyft. It's not even a matter of price. It's so much easier to get a ride now in most of the country.
I don't think Airbnb really improved hotels, but it did organize and centralize the "vacation rental" market, making it easier to, for example, rent a beach cottage for the weekend.
Existing Taxi services did not improve - they were replaced by a lower quality, more expensive alternative with a lesser economic infusion to local economies.
Hotels and the hospitality industry did not improve at all.
None of those points refutes me or supports the argument I was contesting - that an “Uber or Airbnb of banking” would cause banks to “get their act in order”.