I work in the embedded space. I'd absolutely love to have a tool to build immutable, signed distro images which I can push to devices with a/b style updates. I imagine you can do this with mkosi, but it doesn't quite feel like the intended use case.
Most immutable distros (this included) are developed with the idea that you'd run local commands on the machine, and update a single OS image in place. That gets pretty unwieldy to manage across a fleet of devices.
Right now the industry standard tooling for building truly immutable images for embedded devices is Yocto. It works well, but it's incredibly complicated and has ridiculous build times because it insists on building every single package from source. It's utter madness that building a Linux image for a common ARM64 processor requires me to compile the Rust compiler myself.
You don't have to build ParticleOS images on the machine itself; it's perfectly possible to build them offline somewhere else and have the target machine download them when doing an update with systemd-sysupdate. It's just that we haven't quite gotten around to ironing out all the details there. We're adding support to OBS (the openSUSE Build Service) to build ParticleOS images and will eventually start publishing prebuilt images there, which can then be consumed by systemd-sysupdate.
For the embedded space you'd just build the ParticleOS images on your own build system, publish them somewhere, and consume them with systemd-sysupdate; it doesn't have to be OBS, of course.
But we don't do things like only downloading diffs with systemd-sysupdate yet, so your mileage may vary.
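For reference, the sysupdate side of that is basically a transfer definition dropped into /etc/sysupdate.d/ that matches versioned images on your server against a versioned A/B pair of partitions on the device. Something roughly like this (the URL, image names and partition type are made-up placeholders, not ParticleOS defaults):

    # /etc/sysupdate.d/50-usr.conf (illustrative sketch only)
    [Source]
    Type=url-file
    Path=https://updates.example.com/myos/
    MatchPattern=myos_@v.usr.raw.xz

    [Target]
    Type=partition
    Path=auto
    MatchPattern=myos_@v
    MatchPartitionType=usr

By default sysupdate expects a SHA256SUMS file (plus signature) next to the images for verification, and "systemd-sysupdate update" then pulls the newest matching version into whichever partition slot isn't currently booted.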
That's interesting. I've been looking for an angle on embedded Linux software updates for my deployment tool (currently limited to updating desktop and server apps only).
When you say push, do you literally mean push, or do you mean the devices would still pull updates on their own schedule, where you get to control rollout percentages, channels, etc. centrally? Most devices have to be assumed to be behind NAT, right?
What I'm thinking here is that maybe it'd be useful to have a mode in Conveyor [1] that builds a standalone ARM Linux image which can boot and pull updates for everything including the app, coordinates with the userspace app to decide when it's time to restart, and picks which build to update to based on server-supplied metadata, so you can divide the fleet by tag, etc.
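Purely as a hypothetical shape for that metadata (field names made up on the spot, just to make the idea concrete), the server could answer a device's check-in with something like:

    {
      "channel": "stable",
      "target_build": "2024.11.3",
      "rollout_percent": 25,
      "tags": ["eu", "kiosk"]
    }

and the device would only apply it if it matches the tags, wins the rollout dice roll, and the userspace app says it's idle.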
The problem with immutable images is that you need a complementary config management tool to manage them at scale. That's why we built Etcha[0] (config management for 1000+ devices) and then built an immutable OS (EtchaOS) around it - they're meant to work together, instead of assuming "cloud-init is good enough". cloud-init is terrible at scale, especially outside of the public cloud.
We at darkscience used Buildroot for a simple ZNC SBC[0] distribution which we deployed to an Orange Pi Zero that we sold at 33C3[1].
It was great! I highly recommend!
It didn't take long to get started, and while the compile times were pretty long (I wonder if Nix or Bazel could help here?), it ended up with absolutely microscopic resource requirements.
I've had a reasonable amount of luck with a combination of Buildroot and Bazel. I use Buildroot to assemble a minimal base OS with just the kernel, bootloader and a few system services. The application layer is then built with Bazel and assembled into a set of squashfs images which are mounted as overlay filesystems at boot time. The whole thing is shipped as an SWUpdate file built by the Bazel layer.
Because most of the iteration is happening in the Bazel layers I can generate a full system update in about 15 seconds with everything being fully reproducible.
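In case it helps anyone picture it, the boot-time assembly is roughly this (a hand-wavy sketch with made-up paths, not the literal scripts):

    # mount the read-only app image produced by the Bazel build
    mount -t squashfs -o loop,ro /data/images/app-v42.squashfs /run/layers/app
    # stack it over the read-only base, with a writable scratch layer on top
    mount -t overlay overlay \
        -o lowerdir=/run/layers/app:/run/layers/base,upperdir=/data/overlay/upper,workdir=/data/overlay/work \
        /opt/app

So an update mostly just means shipping replacement squashfs files, which is a big part of why the turnaround stays so fast.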
> Most immutable distros (this included) are developed with the idea that you'd run local commands on the machine, and update a single OS image in place. That gets pretty unwieldy to manage across a fleet of devices.
This is how it always has been on enterprise network devices like switches, routers, etc. It’s pretty trivial to automate updates.
I think that's often the thing that kills my side projects. I'll make reasonable progress, lose interest for a bit, and then when I come back to it all the working memory is gone and so I spend a few hours just getting up to speed again. Better ongoing documentation to allow quicker self-onboarding is what I need!
I've had reasonable success with taking diary-style notes on whatever project I'm working on within that project's page in Obsidian, and then when I need to come back to it, either reading them over or passing them through an LLM to summarise / ask me targeted questions to prod my memory.
In the essay, the "unintentional moderate" is defined as someone who holds all kinds of views, some from the far left, some from the far right, some from the middle - but by chance the average of their views makes them a moderate.
I had to go looking for that, because the graph doesn't show that at all. I think the graph is a bad take on the ideas in Paul Graham's article.
An AI agent will likely be worse, in that you would have to actively haggle with it so it doesn’t upsell you by default, which IMO is harder than navigating around the dark patterns yourself.
An actually useful agent is totally doable with technology even from a decade ago, but you’d by necessity need to host it yourself, with a sizeable amount of DIY and duct tape, since it won’t be allowed to exist as a hosted product. The purveyor of goods and services cannot nudge it into putting useless junk into your shopping cart on impulse, you cannot really upsell it, all the ad impressions are lost on it, and you cannot phish it with ad buttons that look like the UI of your site - it goes in with the sole purpose of making your bookings/arrangements, a quick in-and-out. It is, by its very definition and design, very adversarial to how most companies with an Internet presence run things.
I had a play with the tscircuit CAD tool which they're writing this autorouter for. It's hardware-as-code, leveraging React.
I love the concept of schematics-as-code. But layout-as-code is awful. Maybe AI tools will improve it enough - but unless there's a user-friendly escape hatch that lets you tweak things with a mouse, it's a dead end.
I've had the same problem with auto-generated wiring diagrams and flowcharts. Great in theory, but then the tool generates some ugly thing you can't make sense of and there's no way to change it.
There's definitely tremendous potential for AI in electronic design. But I think the really complicated bit is component selection and library management - you have to somehow keep track of component availability, size and a range of parameters which are hidden on page 7 of a PDF datasheet. Not to mention separate PDF application notes and land patterns.
Compared to all that, connecting up schematic lines and laying out a PCB is fairly straightforward.
Love that you tried the tool and couldn't agree more. We think that new layout paradigms similar to flex and CSS grid need to be invented for circuit layout. This is an active area of research for us!
Open source CAD has improved a lot in the last few years. You can use FreeCAD for modelling simple parts today and it mostly works. That wasn’t the case a few years ago.
There was a time when KiCAD was a buggy mess. And no doubt Blender as well.
Blender is rock-solid and very smoothly usable; as a beginner, you won't find anything missing or buggy. It would take a beginner years to get to the limitations, corner cases and broken things.
KiCAD is solid and very usable, but not totally smooth. The workflow is still far from Blender-like total integration and bliss. But where ten years ago you could find the occasional bug, a beginner won't find any nowadays.
FreeCAD only just last year started shipping releases that don't crash on a null pointer after 2 minutes. Even a beginner with a trivial project will stumble over bugs, limitations, problems and design flaws.
There is a huge difference in quality, and KiCAD will certainly get to Blender levels. But FreeCAD will take forever if the pace continues like this.
FreeCAD is built on top of Open Cascade and I think that’s what’s going to limit them. It was a fast way to get to v1, but there’s only so much that project can do to work around the limitations of the library they built on.
It’s funny that a country as wound up in national pride as the USA can’t see that an existential threat to your country’s sovereignty triggers a deep emotional response.
That’s been my observation too. My completely amateur theory is that a young child’s perception of the world - their ability to contextualise what they’re looking at - changes so much that even if those old memories are still hanging around, they don’t really make sense any more. It’s more like a sliding window than a hard cutoff.
I read a similar discussion here on HN before, and the explanation that it's the way our brain decodes memories that shifts over time really hit me. I have a theory that if that is in fact the case, then things like journaling would help retain memories in a superior way, since we can review them as our encoding changes and refresh them while we can still decode them. Even if we don't remember something as richly, at a bare minimum we'll still be able to go back to a journal to recall it.
The foundation of maintainable software is architecture. I can't be alone in having often spent days puzzling over a seemingly highly complex problem before finally finding a set of abstractions that makes it simple and highly testable.
LLMs are effectively optimisation algorithms. They can find a local minimum, but asking them to radically change the structure of something to find a much simpler solution is not yet possible.
I'm actually pretty excited about LLMs getting better at coding, because in most jobs I've been in, the limiting factor has always been the rate of development rather than the rate of idea production. If LLMs can take a software architecture diagram and fill in all the boxes, we could test our assumptions much more quickly.
Yes, this is how I feel as well. I'm not going to use an LLM to create my architecture for me (though I may use it for advice), because I think of that as the core creative thing that I bring into the project, and the thing I need to fully understand in order to steer it in the right direction.
The AI is great at doing all the implementation grunt work ("how do I format that timestamp again?" "What's a faster vectorized way to do this weird polars transformation?" "Can you write tests to catch regressions for these 5 edge cases which I will then verify?").
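To give a toy example of that category (nothing project-specific, just the shape of the question):

    import polars as pl

    df = pl.DataFrame({"ts": [1700000000, 1700003600]})

    # epoch seconds -> formatted string as a vectorized expression,
    # instead of formatting row by row in a Python loop
    df = df.with_columns(
        pl.from_epoch("ts", time_unit="s").dt.strftime("%Y-%m-%d %H:%M").alias("ts_fmt")
    )
    print(df)

It's the kind of thing I could obviously look up, but having it spat out instantly keeps me in the flow of the actual problem.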
Almost every time I read about someone finding LLMs useful for a programming task, the description of how the LLMs are used sounds like the person is either missing domain knowledge, not using a capable editor, or not familiar with reading docs.
When I find myself missing domain knowledge, my first action is to seek it, not to try random things that may have hidden edge cases I can't foresee. The semantics of every line and every symbol should be clear to me, and I should be able to go into detail about their significance and usage.
Editing code shouldn't be a bottleneck. In The Pragmatic Programmer, one piece of advice is to achieve editor fluency, and even Bram has written about this[0]. Code is very repetitive, and your editor should assist you in reducing the amount of boilerplate you write and in navigating around the codebase. Why? Because that will help you prune the code and get it into better shape, since code is a liability. Generating code is a step in the wrong direction.
There can be bad docs, or the information you're seeking may not be easily retrievable. But most documentation is actually quite decent, and in the worst case you have the source code (or should). There are also different kinds of docs, and when someone complains about them, it's usually because they need a tutorial or a guide to learn the concepts and usage, whereas most systems assume you have the prerequisites and only provide the reference.