That's such a weird characterization of this article, which (in contrast to other writing on this subject) clearly concludes (a) that Nix achieves a very high degree of reproducibility and is continuously improving in this respect, and (b) that Nix is moreover reproducible in a way that most other distros (even distros that do well in some measures of bitwise reproducibility) are not (namely, time traveling: being able to reproduce builds in different environments, even months or years later, because the build environment itself is more reproducible).
The article you linked is very clear that both qualitatively and quantitatively, NixOS has achieved a high degree of reproducibility, and it even explicitly rejects the possibility of assessing absolute reproducibility.
NixOS may not be the absolute leader here (that's probably stagex, or GuixSD if you limit yourself to more practical distros with large package collections), but it is indeed very good.
> NixOS may not be the absolute leader here (that's probably stagex, or GuixSD if you limit yourself to more practical distros with large package collections), but it is indeed very good.
Could you comment on how stagex is? It looks like it might indeed be best in class, but I've hardly heard it mentioned.
The Bootstrappable Builds folks created a way to go from only an MBR of (commented) machine code (plus a ton of source) all the way up to a Linux distro. The stagex folks built on top of that towards OCI containers.
And with even a little bit of imagination, it's easy to think of other possible measures of degrees of reproducibility, e.g.:
• % of deployed systems which consist only of reproducibly built packages
• % of commonly downloaded disk images (install media, live media, VM images, etc.) that consist only of reproducibly built packages
• total # of reproducibly built packages available
• comparative measures of what NixOS is doing right, like: of the packages that are reproducibly built in some distros but not others, how many are built reproducibly in NixOS
• binary bootstrap size (smaller is better, obviously)
It's really not difficult to think of meaningful ways that reproducibility of different distros might be compared, even quantitatively.
Sure, but in terms of the absolute number of packages that are truly reproducible, Nix outnumbers Debian, because Debian only targets reproducibility for a smaller fraction of its total packages & even there it's not at 100%. I haven't been able to find reliable numbers for Fedora on how many packages they have & in particular how many this 99% is targeting.
By any conceivable metric Nix really is ahead of the pack.
Disclaimer: I have no affiliation with Nix, Fedora, Debian etc. I just recognize that Nix has done a lot of hard work in this space & Fedora + Debian jumping onto this is in no small part thanks to the path shown by Nix.
> Disclaimer: I have no affiliation with Nix, Fedora, Debian etc. I just recognize that Nix has done a lot of hard work in this space & Fedora + Debian jumping onto this is in no small part thanks to the path shown by Nix
This is completely the wrong way around.
Debian spearheaded the Reproducible Builds efforts in 2016 with contributions from SUSE, Fedora and Arch. NixOS got onto this as well but saw less progress until the past 4-5 years.
The NixOS effort owes the Debian project all its thanks.
> Arch Linux is 87.7% reproducible with 1794 bad 0 unknown and 12762 good packages.
That's < 15k packages. Nix by comparison has ~100k total packages they are trying to make reproducible and has about 85% of them reproducible. Same goes for Debian - ~37k packages tracked for reproducible builds. One way to lie with percentages is when the absolute numbers are so disparate.
> This is completely the wrong way around. Debian spearheaded the Reproducible Builds efforts in 2016 with contributions from SUSE, Fedora and Arch. NixOS got onto this as well but has seen less progress until the past 4-5 years. The NixOS efforts owes the Debian project all their thanks.
Debian organized the broader effort across Linux distros. However the Nix project was designed from the ground up around reproducibility. It also pioneered architectural approaches that other systems have tried to emulate since. I think you're grossly misunderstanding the role Nix played in this effort.
> That's < 15k packages. Nix by comparison has ~100k total packages they are trying to make reproducible and has about 85% of them reproducible. Same goes for Debian - ~37k packages tracked for reproducible builds. One way to lie with percentages is when the absolute numbers are so disparate.
That's not a lie. That is the package target. The `nixpkgs` repository, in the same vein, packages a huge number of source archives and repackages entire ecosystems into its own repository. This greatly inflates the number of packages. You can't look at the flat numbers.
> However the Nix project was designed from the ground up around reproducibility.
It wasn't.
> It also pioneered architectural approaches that other systems have tried to emulate since.
This has had no bearing, and you are greatly overestimating the technical sophistication of Nix here. It was fundamentally invented in 2002, and things have progressed since then. `rpath` hacking really is not magic.
> I think you're grossly misunderstanding the role Nix played in this effort.
I've been contributing to the Reproducible Builds effort since 2018.
I think people are generally confusing the different meanings of reproducibility in this case. The reproducibility that Nix initially aimed at is: multiple evaluations of the same derivations will lead to the same normalized store .drv. For a long time these evaluations were not completely reproducible, because evaluation could depend on environment variables, etc. But flakes have (completely?) closed this hole. So, reproducibility in Nix means that evaluating the same package set will lead to the same set of build recipes (.drvs).
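For illustration, that first kind of reproducibility looks roughly like this (the hash and version below are placeholders, and `hello` is just an example attribute):

```
$ nix-instantiate '<nixpkgs>' -A hello
/nix/store/<hash>-hello-<version>.drv
$ nix-instantiate '<nixpkgs>' -A hello
/nix/store/<hash>-hello-<version>.drv   # same recipe, same path
```

(Note that resolving '<nixpkgs>' goes through the NIX_PATH environment variable, which is exactly the kind of environment dependence that flakes close.)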
However, this doesn't say much about build artifact reproducibility. A package set could always evaluate to the same .drvs, but if all the source packages choose what to build based on random() > 0.5, then there is no reproducibility of build artifacts at all. This type of reproducibility is spearheaded by Debian and Arch more than Nix.
For development, "localhost" has a convenience bonus: it has special treatment in browsers. Many browser APIs like Service Workers are only available on pages with a valid WebPKI cert, except for localhost.
Yeah, I've been using localhost domains on Linux for a while. Even on machines without systemd-resolved, you can still usually use them if you have the myhostname module in your NSS DNS module list.
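For example, something like this in /etc/nsswitch.conf does the trick on a typical glibc setup (module order varies by distro):

```
# /etc/nsswitch.conf -- nss-myhostname answers for the local hostname
# and for any name ending in .localhost (127.0.0.1 / ::1)
hosts: files myhostname dns
```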
I ended up writing a similar plugin[1] after searching in vain for a way to add temporary DNS entries.
The ability to add host entries via an environment variable turned out to be more useful than I'd expected, though mostly for MITM(proxy) and troubleshooting.
Bring up dates and times if you want to wreak havoc on any AI. :D
The most beloved topic of developers around the world, how to handle dates and times correctly, is still a subject of great misunderstanding. AI and AI agents are no different. LLMs seem to help a little, but only if you know what you are doing, as usually needs to be the case.
Some things won't change so fast; at one point or another, data must match certain building blocks.
Google AI Overview incorrectly identified the day for a given date due to a timezone conversion issue, likely using PST instead of IST. ChatGPT and Perplexity provided more accurate and detailed responses.
One would think the arcana of time zones and the occasional leap second would not interfere with an individual setting egg timers often enough to become a burden.
Except that's not the problem; it's basic comprehension of requests. They aren't getting the wrong time: they try to play music instead, or the phone says "no timers playing" while the Google Home WILL NOT STOP until you lock the phone, etc.
It's basically an embarrassment for a project that's been alive this long, from such a major player.
Supporting existing projects is the oil-sand mining of the promotion world. Low, old-buzzword content, little reward. Implementing new buzzwords, with street-cred-rich frameworks, that's the fracking.
Efficiency, capabilities, or customer satisfaction are irrelevant.
Phones today cannot even reliably handle things like "remind me to pick up tomatoes next time I am at a store".
Google knows perfectly well where I am and wants me to add 'info' to locations and businesses the second I arrive (just got a notification today), but reminders like these are unavailable.
The location based reminders sure worked perfectly fine many years ago, like when I had Nexus phones. It's just getting worse all the time, I don't get it.
And it worked with Samsung Bixby. Gemini, even after getting Advanced, is just terrible for a phone AI. I need to set a lot of alarms and calendar events, I don't need to do crazy photoshop (which Gemini is admittedly good at).
My own hands and cheap alarm clocks, or a piece of paper, have been working reliably for several decades. They also don't stop working when a corporation decides they want to hype something.
One of the best things about Gerrit - besides stacking and turn-by-turn review - is how it emphasizes good commit messages by making them part of the process.
Each commit becomes one "unit of review", and the commit message can be reviewed just like the code changes.
A small C++/Go/... proxy can do the same thing with much, much less overhead. Been there, done that - for something well-defined like this, it is more stable and less work than fighting mitmproxy.
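To give a sense of how small that can be, here is a minimal sketch of a plain TCP forwarder in Go (the listen port and upstream address are placeholders, not the setup from the article):

```go
// Minimal TCP forwarder sketch; addresses below are illustrative placeholders.
package main

import (
	"io"
	"log"
	"net"
)

func forward(client net.Conn, upstreamAddr string) {
	defer client.Close()
	upstream, err := net.Dial("tcp", upstreamAddr)
	if err != nil {
		log.Println("dial upstream:", err)
		return
	}
	defer upstream.Close()
	go io.Copy(upstream, client) // client -> upstream
	io.Copy(client, upstream)    // upstream -> client
}

func main() {
	ln, err := net.Listen("tcp", ":8443") // local listen port (placeholder)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Println("accept:", err)
			continue
		}
		go forward(conn, "upstream.example:443") // upstream address (placeholder)
	}
}
```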
Routing everything through the proxy will degrade performance even with SNI interception.
Same with pfSense - a plain Linux server and a simple iptables rule set would do the job without having to fight against all the pfSense abstraction layers.
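(For example, the interception part can be as small as one nat rule; the client IP and proxy port below are made up:)

```
# Redirect HTTPS from one device (hypothetical IP) to a local proxy on port 8443
iptables -t nat -A PREROUTING -s 192.168.1.50 -p tcp --dport 443 -j REDIRECT --to-ports 8443
```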
Write a .proto file with just enough of the reverse-engineered proto fields to auto-generate code and flip the flag. Cheaper than the Python implementation and easier to update when the proto changes.
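Something like this, where the message and field names/numbers are invented placeholders for whatever the reverse engineering turned up, not the real schema:

```proto
// Hypothetical partial schema: only the flag we want to flip is declared.
syntax = "proto3";

message PlayerConfig {
  bool shorts_enabled = 12;  // placeholder field number
}
```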
Ignoring unknown field tags is an important Protobuf feature - it allows for compatible schema changes without breaking existing deployments.
It would, but it would also decrease the video quality. I'm not opposed to letting my kids watch YouTube, there is a lot of good quality content there, but having some agency in what they pick would be a lot better than the current behavior of short after short after short. Just like snacking on fast food.
Interesting points and some things I've been exploring too:
Video quality does decrease, and sometimes that's a good thing. :)
- Lower video quality means lower resolution = less addictive.
- Decreasing saturated colors reduces children's brain heroin. (Try putting the TV in normal or movie color mode and watch the addictiveness fall off.)
- Lowering the sound helps kids hear less of the addictive background noises, and having to strain their hearing a little more can help them get tired.
- Lowering brightness can help as well.
- The kids' viewing device could be different from the adults' to allow filtering and shaping.
As for content, I agree.
- Recently I heard there's more and more fraudulent content under official channels, with bad content hidden inside the good stuff. This needs to be caught.
- Managing access to Shorts is important, if not limiting them outright.
Do you have a YouTube Premium account that removes ads, by chance?
I never got used to shorts/reels/etc, but it is troubling to see kids addicted to them. I have been thinking that by forcing some pause between videos it would remove some of their addictiveness.
It does. Sometimes I click on an interesting short and then keep swiping to see if anything else is interesting. When the app takes ten seconds to load, I go do something else because there's no real value in the shorts.
I gotta say, I don’t get that perspective. The content is one thing, but YouTube is super reliable for me, streaming or watching. I can easily stream in 4k 60FPS from OBS and YouTube has never had issues ingesting it, though I generally do 1440p because my computer is slow. When watching, I have never had an interruption on my wired Apple TV even for 4k/60FPS.
I do hate the pushing of shorts and the algorithm that seems to have a 3 video memory, but aside from that I’m pretty happy, I don’t get the weird right wing stuff or creepy videos pushed at me or my kids.
For me the content is not the main problem, rather the consistent bloating and enshittification of the player and interface over the years. Nowadays I don't bother anymore and just use mpv and yt-dlp to play the few videos I'm interested in.
I don't even bother using scripts, I just manually paste the URL of the video I want to watch into mpv. It's not slow enough for me to have to deal with the garbage Youtube interface.
You want someone to show you how to write a C++/Go program to forward traffic? There are a lot of tutorials online that can already demonstrate this for you. :)
Can you put together a guide in response showing where the inefficiencies are and how to mitigate them with more simple software?
It sounds like the author was aware of at least parts of your comment. The post is very thorough. They benchmarked using Python and C++, and the final impl doesn’t even decode protobuf. They used various MITM solutions. They are using pfSense for more than just “it’s muh security router”: they are VLANing and VPNing the traffic so they can target only the Apple TV on their network.
Your comment is cheap and dismissive. The author’s post is not. You owe it to the community to put your money where your mouth is.
Not sure what kind of answer you are looking for. I did not criticize the author's post. It was an enjoyable read, and I personally would have given up a long time before going to such impressive lengths. The fact that the app isn't using certificate pinning is really interesting, and the sheer amount of hacker spirit and determination is extremely wholesome.
I am, however, very familiar with this particular engineering challenge (specifically, attempting to build on pfSense and using mitmproxy scripts in production), so I wanted to share my personal experiences to hopefully save someone else some time and frustration while attempting the same thing.
https://github.com/elazarl/goproxy is a pretty nice Go library for writing proxies; I used it once. It supports both HTTPS passthrough and MITM. Here's a trivial example MITMing connections to www.google.com and rejecting requests to https://www.google.com/maps while allowing everything else through:
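Roughly like this, sketched from memory rather than copied verbatim (the listen port :8080 is arbitrary):

```go
package main

import (
	"log"
	"net/http"
	"strings"

	"github.com/elazarl/goproxy"
)

func main() {
	proxy := goproxy.NewProxyHttpServer()

	// MITM only CONNECTs to www.google.com; every other host is tunneled
	// through untouched (passthrough), so the original certificate is served.
	proxy.OnRequest(goproxy.ReqHostIs("www.google.com:443")).HandleConnect(goproxy.AlwaysMitm)

	// On the decrypted requests, reject anything under /maps and let the rest through.
	proxy.OnRequest().DoFunc(
		func(req *http.Request, ctx *goproxy.ProxyCtx) (*http.Request, *http.Response) {
			if strings.HasPrefix(req.Host, "www.google.com") && strings.HasPrefix(req.URL.Path, "/maps") {
				return req, goproxy.NewResponse(req, goproxy.ContentTypeText,
					http.StatusForbidden, "blocked by proxy")
			}
			return req, nil
		})

	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

Quick test with curl:

```
curl -x http://localhost:8080 -k https://www.google.com/maps   # MITM'd, 403 from the proxy
curl -x http://localhost:8080 -k https://www.google.com/       # MITM'd, allowed through
curl -x http://localhost:8080 https://www.apple.com/           # passthrough, real cert
```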
-k is to ignore cert error; note how we don't need it for apple.com due to passthrough.
Remember to use your own cert rather than the hardcoded one in "production" (a trusted network like your home of course, probably a bad idea to expose it on the open Internet).
The researchers found undocumented hardware functionality which allows someone who already has code execution a greater-than-expected degree of low-level access to the ESP32 wifi stack.