It's still not perfect since you're still leaking information about the privacy set implied by the outer ClientHello, but this possibly isn't much worse than the destination IP address you're leaking anyway.
Whenever people complain about the energy usage of LLM training runs I wonder how this stacks up against the energy we waste by pointlessly redownloading/recompiling things (even large things) all the time in CI runs.
While this leaves a lot to be desired as a window manager, it illustrates one of my main gripes about the Wayland ecosystem: by effectively bundling the window manager and display server, it makes it much harder for more niche/experimental window managers to come about and stay alive. Even with things like wlroots, you have to invest a lot more work to get even the basics working that X11 will give you for free.
True; but a counterargument is that the _display protocol_ is not the right abstraction layer for decoupling window management from the display server. There is nothing stopping someone from writing a batteries-included wlroots-like library where the only piece you need to write is the window management and input handling, or even an entire Wayland compositor that farms these pieces out to an embedded scripting runtime.
But even then, I think we have rose-tinted glasses on when it comes to writing an X11 WM that actually works, because X11 does not actually give you much for free. ICCCM is the glue that makes window management work, and it is a complete inversion of the "mechanism, not policy" philosophy that defines the X11 protocol. It also comes in at 60-odd pages in PDF form: https://www.x.org/docs/ICCCM/icccm.pdf
For example, X11 does not specify how copy-and-paste should work between applications; that's all ICCCM.
I'm sorry for not making it more clear, but that was just an example of something left unspecified by the X11 core protocol but instead defined in a standard convention.
An example that matters for window managers would be complex window reparenting policies or input grabs, but that's a little less descriptive of the core concept I was trying to get across.
> Wayland will take 20 more years before it can dethrone X11. And even then we will mostly run X11 apps on XWayland.
And yet RedHat/Fedora and Ubuntu, as well as GNOME, are leading the charge to drop X support in the next release; KDE as of V7. It may take 20 years for Wayland to match X's capabilities, but it looks like the guillotine has already been rolled out.
A more conspiratorial person than I could be led to think that RedHat is actively working against the viability of a free software desktop, but of course that's nonsense, because they're helping the cause by forcing all resources to be focused on one target at the expense of near-term usability. And the XLibre crowd also aren't controlled opposition intended to weaponize the culture war and make people associate X with fascism, that's just nonsense some idiot cooked up to stir shit.
> because they're helping the cause by forcing all resources to be focused on one target
This might work for company-backed projects but not for OSS enthusiasts and power users - they will leave for greener pastures. For example, Linux Mint lives off the manpower that GNOME 3 drove away, Void and Alpine Linux live off the manpower that systemd drove away. There will be some ecosystem that will live off the manpower that Wayland drives away.
The ICCCM, abbreviated I39L, sucks. I39L is a numeronym for "Inter-Client Communication Conventions Manual": the first letter I, the final letter l, and the 39 letters in between, in the style of i18n. Please read it if you don't believe me that it sucks! It really does. However, we must live with it. But how???
> Even with things like wlroots, you have to invest a lot more work to get even the basics working that X11 will give you for free.
Like what?
A few years ago I copied the wlroots example, simplified it to less than 1000 LoC, and then made some of my own modifications and additions, like workspaces. This side project was done in less than a week in my spare time.
YAGNI more experimental/niche window managers. Windows and macOS get by fine on one apiece; in fact their desktop story is better because their WM and toolkit are standardized.
The developers of Wayland (who are identical to the developers of Xorg) aspire to more of a Windows/Mac-like ecosystem for Linux, in which standardization, performance, and support for modern graphics hardware without hacks or workarounds are prioritized over proliferation of niche window managers and toolkits.
Terrible window management is a huge reason I will not use Mac OS or Windows. I immediately lose so much productivity. I am coming up on my 30th year of using Linux, and I can't imagine moving to an OS with such limited window capabilities. No sloppy mouse focus? No always on top? No sticky windows? No marking windows as utility windows to skip alt-tab?
I watch my colleagues on Mac OS and Windows during peer programming, and am flabbergasted as they fumble around trying to find the right window.
I am interacting with my computer's interface for 10+ hours every single day. I do not stare at a single application, but am constantly jumping between windows and tasks. The one-size-fits-all approach is the same as the lowest-common-denominator approach, and it hinders people who need to do real work.
Linux already has GNOME and KDE as solid mainstream platforms (already twice as many as macOS/Windows offer), and it also already has Sway, Hyprland, and Niri. If an idea is worth implementing, it gets implemented even with Wayland.
Windows is the epitome of bad window management. It's actually gotten worse as they've removed functionality over the years and broken other functionality. "Oh, you've got a modal window open? Now you can't even move the window that spawned it!" "Oh, you want to move this window to the top of the screen? Let me maximize that for you! Of course we're not going to let you disable that behavior..."
Microsoft got the Start Button/taskbar bit right in 1998 with the addition of the quicklaunch bar, although they keep trying to screw it up. But their window management has been abysmal since the beginning. If you use a large monitor (so you don't need to maximize everything) it's really painful.
> The developers of Wayland (who are identical to the developers of Xorg) aspire to more of a Windows/Mac-like ecosystem for Linux, in which standardization, performance, and support for modern graphics hardware without hacks or workarounds are prioritized over proliferation of niche window managers and toolkits
Is that why they arranged things to ensure that the Wayland world would always be split into GNOME, KDE, and everything else (in practice, wlroots)?
For me as a developer, reproducible builds are a boon during debugging because I can be sure that I have reproduced the build environment corresponding to an artifact (which is not trivial, particularly for more complex things like whole OS image builds which are common in the embedded world, for example) in the real world precisely when I need to troubleshoot something.
Then I can be sure that I only make the changes I intend to do when building upon this state (instead of, for example, "fixing" something by accident because the link order of something changed which changed the memory layout which hides a bug).
> things like docker have been around doing just that for a while now.
That's just not enough. If you are hunting down tricky bugs, then even extremely minor things like the memory layout of your application might alter its behavior completely: some uninitialized read might give you "0" every time in one build, while crashing everything with unexpected non-zero values in another; performance characteristics might change wildly and even trigger (or avoid) race conditions in builds from the exact same source thanks to cache interactions, etc.
There is a lot of developer preference in what an "ideal" process/toolchain/build environment looks like, but reproducible builds (unlike a lot of things that come down to preference) are an objective, qualitative improvement-- in the exact same way that it is an improvement if every release of your software corresponds to one exact set of source code.
Docker can be used to distribute reproducible environments (container images), but it cannot be used to reproduce environments from source: building the same Dockerfile twice will generally produce different outputs. That is, the build definition and the build artifact are not equivalent, unlike with tools like Nix.
I see reproducible builds more as a contract between the originator of an artifact and yourself today (the two might be the same person at different points in time!) saying "if you follow this process, you'll get a bit-identical artifact to what I have gotten when I followed this process originally".
If that process involves Docker or Nix or whatever - that's fine. The point is that there is some robust way of transforming the source code into the artifact reproducibly. (The fewer moving parts are involved in this process, though, the better, just as a matter of practicality. Locking up the original build machine in a bank vault and having to use it to reproduce the binary is a bit inconvenient.)
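As a toy illustration of what such a robust transformation looks like at the level of a single build step (GNU tar assumed; the file and archive names here are made up), the recurring trick is to pin every source of nondeterminism: file ordering, timestamps, and ownership:

```shell
# Pack a directory deterministically: with ordering, mtimes, and
# ownership pinned, the archive depends only on the file contents.
mkdir -p src
printf 'hello\n' > src/file.txt

tar --sort=name --mtime=@1000000000 \
    --owner=0 --group=0 --numeric-owner \
    -cf a.tar src
tar --sort=name --mtime=@1000000000 \
    --owner=0 --group=0 --numeric-owner \
    -cf b.tar src

# The two archives are bit-identical.
cmp a.tar b.tar
```

Without those flags, directory-read order, file mtimes, and the building user's uid/gid all leak into the output, and two honest runs of the "same" build diverge.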
The point here is that there is a way for me to get to a "known good" starting point and that I can be 100% confident that it is good. Having a bit-reproducible process is the no-further-doubts-possible way of achieving that.
Sure it is possible that I still get an artifact that is equivalent in all the ways that I care about if I run the build in the exact same Docker container even if the binaries don't match (because for example some build step embeds a timestamp somewhere). But at that point I'll have to start investigating if the cause of the difference is innocuous or if there are problems.
Equivalence can only happen in one way, but there's an infinite number of ways to get inequivalence.
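The timestamp case can be made concrete with gzip (GNU gzip assumed; file names are illustrative): by default it embeds the input file's mtime in the output header, so identical input bytes can compress to different archives, and `-n` omits the timestamp:

```shell
printf 'payload\n' > data.txt
gzip -c data.txt > a.gz
touch -t 200001010000 data.txt     # change only the mtime, not the contents
gzip -c data.txt > b.gz
# a.gz and b.gz differ, purely because of the embedded timestamp

gzip -nc data.txt > c.gz
touch -t 201001010000 data.txt
gzip -nc data.txt > d.gz
# c.gz and d.gz are bit-identical
cmp c.gz d.gz
```

This is exactly the "innocuous difference" case: the artifacts are equivalent in every way that matters, but you only know that after investigating the mismatch.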
Buildroot has per-package parallel builds when using BR2_PER_PACKAGE_DIRECTORIES (see https://buildroot.org/downloads/manual/manual.html#top-level...). For some reason it's still marked as experimental in the docs, but it has been solid for me for many years.
The lack of dependency tracking isn't great but other than working around it like you described just using ccache has worked pretty well for me. My Buildroot images at work do full recompiles in under 10 minutes that way.
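For reference, this is a sketch of the relevant knobs in a Buildroot .config (the ccache directory shown is Buildroot's default; adjust to taste):

```
# Give each package its own sandbox dir, enabling top-level parallel builds
BR2_PER_PACKAGE_DIRECTORIES=y
# Cache compiler output so full rebuilds mostly hit the cache
BR2_CCACHE=y
BR2_CCACHE_DIR="$(HOME)/.buildroot-ccache"
```

Note that the per-package parallelism only kicks in when you pass an explicit job count at the top level, e.g. `make -j$(nproc)`; Buildroot's top-level make is otherwise serial.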
Meanwhile, the Yocto projects I've worked on had a ton of chaff that caused partial rebuilds with trivial changes to take longer than that. This probably isn't an inherent Yocto/BitBake thing, but the majority of Yocto projects out there seem to take a very kitchen-sink approach, so it's what you'll end up having to deal with in practice.
This sounds like a perfect application for EROFS[1]. While it comes from an embedded systems background, it has seen some use in container use cases and is moving towards being a general "mountable tar" format. It would also avoid the tedium you have to go through in shrink_btrfs.py, because you can generate the image directly from a directory tree.
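For illustration, generating and mounting such an image is roughly a one-liner each (assuming erofs-utils is installed and a kernel with EROFS support; the paths are made up):

```shell
# Build a compressed, read-only EROFS image straight from a directory tree
mkfs.erofs -zlz4hc tiles.erofs tiles/
# Mount it read-only via a loop device
mount -t erofs -o loop tiles.erofs /mnt/tiles
```

No pre-sizing, copying into a writable filesystem, or shrinking step is needed, since the image is laid out in one pass from the tree.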
I wanted to give repackaging the btrfs image a shot but the download was pretty slow - I assume your server is getting HN-hugged a bit so I didn't want to make it worse and stopped the download.
> I also liked the fact that Btrfs is probably super well tested in the Linux kernel by now.
btrfs has certainly been around for longer, but in my (embedded-systems-only) experience, EROFS has been pretty solid. It's slowly being picked up by Android, so it is definitely seeing a lot of use in the wild (probably surpassing btrfs in number of installations already).
> btrfs.openfreemap.com just a public Cloudflare bucket, no idea why it might be slow.
I'm getting 30 MiB/s (on a gigabit uplink) - not great, not terrible. A .torrent would be nice, but I guess that outside of HN-front-page moments, full-planet downloads by different people won't synchronize enough for this to be useful (and using web seeds is problematic in its own right with small-ish chunks).