What is Silverblue? (fedoramagazine.org)
303 points by anuragsoni on July 14, 2019 | 136 comments



I have been working on a similar idea (well, OK, the concept of an immutable desktop is similar - the tech is completely different) - https://github.com/mikadosoftware/workstation/tree/master/bi...

The article is completely right about this being the future of user OSes - even my half-broken me-ware above has changed how I think about using my laptop. Just knowing that what is underneath me is exactly what I set up is ... reassuring.

Being able to try things out, knowing a reboot gets me back to my last known good point, is ... well, a bit like a video game with save points. It also forces an utter focus on what is data and what is not. And probably the best advantage is the ratchet effect: every security improvement I think of becomes built in and raises my platform one tiny bit higher.

Silverblue is well worth watching - I'd say they really are onto something


Thanks for summarizing the crucial piece, that this is about an Immutable OS. You have to scroll 2 full pages before the Silverblue news release actually gets to that.


> Being able to know I can try things out and a reboot gets me back to my last known good point is

This is exactly what I do with Darch.

https://godarch.com/

It supports Ubuntu/Debian/Arch/VoidLinux.

Here are my recipes: https://github.com/pauldotknopf/darch-recipes

I push my images to Docker Hub, pull them on each of my machines and boot them bare-metal. The same exact bits.


So what is it doing under the hood?

The layers themselves seem to be bash scripts, so presumably they have RUN xxx put in front of them? Booting from bare metal is cool - never tried that. Presumably that won't work on a Mac?


Have you looked at NixOS or Guix?


Sounds similar to what Apple’s doing with Catalina. On https://www.apple.com/macos/catalina-preview/ they say:

Dedicated system volume.

macOS Catalina runs in its own read-only volume, so it’s separate from all other data on your Mac, and nothing can accidentally overwrite your system files. And Gatekeeper ensures that new apps you install have been checked for known security issues before you run them, so you’re always using good software.


That sounds like the way Android partitions work.


Linux, the BSDs ...


This is a logical volume at the FS level rather than a proper partition, which has the advantage of sharing space with other volumes/&c.


I'd like to share a similar project/tool that I developed.

Darch. https://godarch.com/

I essentially use Dockerfiles to build my operating systems. I push them to Docker Hub so that each of my machines have access to them. I can boot them bare-metal, read-only, with a tmpfs overlay. I can apt-get install/remove anything, completely break my system, then reboot and everything is fixed!

Here are my recipes: https://github.com/pauldotknopf/darch-recipes

You can easily give it a test run with a pre-made VM: https://pknopf.com/post/2018-11-09-give-ubuntu-darch-a-quick...

I'd love to hear some feedback. I've been using it personally for the past few years. I wouldn't do it any other way.


Basically: the OS is itself a layered read-only "container", on top of which Flatpak is the recommended way to install applications.
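
For example, installing an application on such a system touches only the Flatpak side and never the read-only OS image (a minimal sketch; Flathub and the VLC app ID are just illustrative):

    # add the Flathub remote (any Flatpak remote works)
    $ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    # install and update apps without touching the OS image
    $ flatpak install flathub org.videolan.VLC
    $ flatpak update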

I wish someone built an OS based on k8s as a service and application orchestrator. We wouldn't have to reinvent all the config files and command-line tools, and we could reuse knowledge between cluster and single-machine administration. Plus, k8s already deliberately abstracted the underlying technologies, so it should be simple to reuse it. We would use the same high-availability concepts as in the cloud, such as stateless services, horizontal scaling of services, etc. We could also reuse Istio and all the standards it is built on to introspect the system. In other words, a microservice-based OS.


No way. I get paid to write YAML hell for k8s at work. I don’t want my desktop to have to work like that. Not all things need to be K8s


I think that idea isn't really viable until we have a serious microkernel contender. We'll see how Hurd goes, but personally, I'd put more stock in Fuchsia. A microkernel with a massive company behind it (which employs many of the best software engineers there are and fully intends to bring it to market) is a seriously cool thought. Maybe we'll see Debian 11/Fuchsia some time (though Arch will still be better).


RancherOS is a bit like your wish.


How fast would Steam games play?



Check out k3os and OpenShift/OKD 4.


You've basically described Qubes OS, with K8S a less OS-y choice than Xen.


In some ways the implementation resembles ChromeOS -- you can install Linux tools and applications in a container, while the base OS is immutable and atomically updated via filesystem diffs.


It's worth a moment to give credit to the long defunct Stateless Linux project:

https://fedoraproject.org/wiki/StatelessLinux

This was imagined a decade ago, but the technology and the market weren't ready then. I am really excited to see it as an actual product.


I'm guessing there are embedded Linux deployments that went this direction even earlier. Read-only rootfs is generally a good idea if you can swing it. You gain the ability to sign the rootfs which is good for security.


Having worked on embedded systems with (nearly) read-only root filesystems:

An even bigger motivator is that flash memory lifetime is determined more by the number of writes per block than by time or reads per block. So to keep your storage both cheap and reliable, it's best to flash it with a single full (compressed) image every time you run a firmware upgrade, and otherwise mount it read-only.


I work on embedded Linux systems, which inspired me to write Darch with the same technology I was using for my embedded systems.

https://godarch.com/

https://pknopf.com/post/2018-11-09-give-ubuntu-darch-a-quick...


Both Linux live CDs and OpenWRT have done this for over a decade with union mounts.


The scope seems to be pretty similar to: https://nixos.org/


But there are also large differences.

Silverblue still uses a single namespace for all libraries (unless you count Flatpaks, which can bring their own dependencies), whereas on NixOS you can have many different versions of the same library in parallel.

In NixOS, the whole system is defined declaratively, including the system configuration, whereas Silverblue uses a mutable /etc.


Also, with NixOS you can install a package and start using it immediately, whereas with Silverblue you have to reboot before you can use a newly installed package or freshly installed updates (or use one of the experimental live-update features, which tend to break booting the system). If you don't change the set of installed packages often, and reboot your system daily to pick up updates, then Silverblue works quite nicely. You can use containers (via podman or docker), and there is a built-in utility, "toolbox", that gives you a persistent mutable container.
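
The package-layering workflow looks roughly like this (a sketch; mpv is just an example package):

    # layer a package onto a *new* deployment; the running system is untouched
    $ rpm-ostree install mpv
    # the new deployment only takes effect after a reboot
    $ systemctl reboot
    # list the current and pending/previous deployments
    $ rpm-ostree status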


In Silverblue, your /usr is really immutable, so when you install a package, you are installing it into a different tree than the one currently running. A reboot switches to that newly constructed tree.


Is there a dev version of Silverblue where you can install/remove packages on the fly, and maybe commit and reboot once you're ready?


`ostree admin unlock` will make the current image writable.


Yeah, that's pretty useful for some quick hacking; it is all lost on a reboot though.


The --hotfix flag would remedy that. Honestly, though, it's all fighting the design of the system; that option is really there for developers to break things.
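
For reference, the two unlock modes (as I understand them):

    # transient overlay on /usr; all changes are discarded on reboot
    $ sudo ostree admin unlock
    # keep the changes across reboots by cloning the current deployment
    $ sudo ostree admin unlock --hotfix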



This seems poorly motivated.

> What are the benefits of an immutable OS?

> One of the main benefits is security. The base operating system is mounted as read-only, and thus cannot be modified by malicious software. The only way to alter the system is through the rpm-ostree utility.

How is this different from the current experience? "Operating system" files already aren't writable by the user. The only way to alter the system is through the "sudo" utility.

> Another benefit is robustness. It’s nearly impossible for a regular user to get the OS to the state when it doesn’t boot or doesn’t work properly after accidentally or unintentionally removing some system library. Try to think about these kind of experiences from your past, and imagine how Silverblue could help you there.

How often does this happen? I've worked with complete Linux newbies who were "forced" to use Linux in a VM daily, and I've never seen this happen.


>The only way to alter the system is through the "sudo" utility.

The sudo utility doesn't really give any guarantees or produce reproducible states. You can mess around with sudo however you like. rpm-ostree transactions are completely reversible.

Silverblue is the equivalent of an accountant keeping a transparent history of your machine's states. Sudo is like grabbing a pencil, an eraser, and a spray can and going to town.
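
To make that concrete, a couple of rpm-ostree commands (a minimal sketch):

    # show what changed between the booted and the pending deployment
    $ rpm-ostree db diff
    # make the previous deployment the default boot entry again
    $ rpm-ostree rollback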


> How often does this happen? I've worked with complete Linux newbies who were "forced" to use Linux in a VM daily, and I've never seen this happen.

Depends on the noob, and just because you never met a situation like that doesn't mean nobody else experiences it (disclosure: it happens all the time).

I was somewhat of a noob; I tried to edit Ubuntu's Yaru theme, messed up, and had to delete the corrupted system files. There was nothing to worry about since I had already made backups of the original files; the distress came when I realized I had accidentally deleted my good backup files too (a bad regex). I wasn't really worried at that moment because I hoped there would be some repository out there from which I could just fetch my files. There was none, neither for the latest Ubuntu version nor any close older version. It turns out distribution packages are so big (sensibly) that it's not that simple to host a remote master repository. That was my "wtf open source" moment. I still somehow managed to fix it without any Stack Overflow or AskUbuntu. Anyway, that's just one case off the top of my head.


Ugh. This is such juvenile criticism of something that you do not understand. You really need to look at the design of most modern(ish) operating systems of the last few years, including Android and ChromeOS, which are already mentioned in the article.


Clearly I don't understand, and neither the article nor you were of any help.


If you want to guard against persistent malware, you don't want your system partition to be writable. You additionally need a chain of trust in the boot process to get to this (trusted and signed) immutable file system. That still leaves the issue of user-downloaded apps/code and how to run them securely in a sandboxed manner. Android does this reasonably well, but it is perhaps too restrictive for the server/general use cases which Silverblue is trying to address.

About the configuration issue: if configurations are transactional, you get transactional properties, and that would be quite significant. If you hose your system with an apt/yum update, you don't have much recourse unless you also took a filesystem snapshot before it (which you can do with zfs/btrfs/lvm-thin etc., and people shoehorn these things in for precisely this reason). They are all different means to approximate the same end, which is transactional package management.
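
A minimal sketch of that shoehorning on btrfs (assuming / is a btrfs subvolume and a /.snapshots directory exists; paths are illustrative):

    # read-only snapshot of the root subvolume before updating
    $ sudo btrfs subvolume snapshot -r / /.snapshots/pre-update
    $ sudo yum update
    # if the update hoses the system, boot a rescue environment and
    # restore the root subvolume from /.snapshots/pre-update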


System files aren't writable by normal users in any Linux distribution.

System recovery from backups is pretty easy and well understood too, so I'm not sure what benefit this would bring.

If you want transactions you can install on btrfs and use apt-btrfs-snapshot to automatically take snapshots. It seems this isn't that well-known though, probably because the problem it solves isn't very serious.

As for Android, it's already a frustrating system on phones, something like that on desktops would be total trash.


> The operating system is delivered in images that are created by utilizing the rpm-ostree project. The main benefits of the system are speed, security, atomic updates and immutability.

The article never mentions speed (or performance) again. Is the OS somehow expected to be faster because it is mounted read-only?


No, I wouldn't expect this system to be faster in a meaningful way. Actually maybe a bit slower since fewer libraries will be cached / memory mapped between applications if they carry their own copies.


I'd love to see stats on how many libraries are really shared, I'd imagine it might be less than you hope (other than the big graphical ones)


On my laptop:

How many libraries are loaded:

    $ sudo cat /proc/[0-9]*/maps | grep '\.so' | grep 'r-xp'  |  tr -s ' ' | cut -d ' ' -f 6  |  wc -l 
    15429
How many unique library names:

    $ sudo cat /proc/[0-9]*/maps | grep '\.so' | grep 'r-xp'  |  tr -s ' ' | cut -d ' ' -f 6  |  sort | uniq | wc -l 
    872

Top 10 most shared libraries:

    $ sudo cat /proc/[0-9]*/maps | grep '\.so' | grep 'r-xp' | tr -s ' ' | cut -d ' ' -f 6 | awk '{count[$0]++}END{for (i in count) print i, count[i]}' | sort -k 2 -n -r | head -n 10
    /lib/x86_64-linux-gnu/libc-2.28.so 299
    /lib/x86_64-linux-gnu/ld-2.28.so 299
    /lib/x86_64-linux-gnu/libdl-2.28.so 262
    /lib/x86_64-linux-gnu/libpthread-2.28.so 237
    /lib/x86_64-linux-gnu/librt-2.28.so 227
    /lib/x86_64-linux-gnu/libuuid.so.1.3.0 205
    /lib/x86_64-linux-gnu/libz.so.1.2.11 200
    /lib/x86_64-linux-gnu/libpcre.so.3.13.3 186
    /lib/x86_64-linux-gnu/libresolv-2.28.so 170
    /lib/x86_64-linux-gnu/libgpg-error.so.0.26.1 169
EDIT: added grep r-xp to count code segments only; the values in the previous edit were overestimated.


When you're running a GUI those libraries also get shared. For example Qt might not be used by as many processes as libc, but it saves hundreds of megabytes of memory having that shared across my email/calendar/terminals/torrent client/windowManager/guiShell/pdfReader/webBrowser.

The GP did mention the 'big graphical ones,' but the amount of memory that gets saved was a bit stunning the first time I saw it.


That's a good idea, to measure the sharing! Fortunately, even if every app gets containerised, not all of those libraries will be duplicated. Specifically, each app that forks on its own will still preserve sharing. For example, the 11 Firefox processes I'm running now would share their libraries whether Firefox is running directly or from Docker.


Flatpak has a notion of runtimes that are shared among applications. All the shared libraries in these runtimes will share their mmapped regions.

On top of that, ostree does deduplication based on file checksum. So if different packages ship the same binary, there will be only one copy on disk, and again, the mmapped regions will be shared among processes.
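
You can see the split between shared runtimes and per-app installs directly:

    # runtimes installed once and shared by many apps
    $ flatpak list --runtime
    # the applications themselves
    $ flatpak list --app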


On Linux, none of these libraries are duplicated; they are loaded once and shared across all processes that need them.


Even when the processes are loading the libraries from different paths, in different filesystems, in different containers? How does it page in data on demand if the first container that loaded the library is killed and its filesystem unloaded?

It's not easy to share libraries across containers, unless they can be built to share a base layer in a stacked union filesystem approach.


>Even when the processes are loading the libraries from different paths,

Well, in my original post (GGGP) I've defined "a library" as "a .so file" so what I can say is that the 872 distinct .so files used on my laptop will be shared among the different processes that use them.

If you assume the same library can be duplicated in two different .so files, then 872 is just an upper bound on the number of distinct libraries and further sharing could be done.

Either way, that is a significant amount of code sharing, which was the original question in this thread.


Why can’t the linker deduplicate libraries as they’re loaded?


Kernel samepage merging already exists.
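
One caveat: KSM only merges pages that applications have explicitly marked with madvise(MADV_MERGEABLE), so it won't transparently dedupe ordinary library mappings. The knobs live in sysfs:

    # 0 = off, 1 = scanning/merging enabled
    $ cat /sys/kernel/mm/ksm/run
    $ echo 1 | sudo tee /sys/kernel/mm/ksm/run
    # how many pages are currently being shared
    $ cat /sys/kernel/mm/ksm/pages_sharing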


Only if they're the same version (with the older built-in package system, all packages are likely built against the same version)...

If the containerised versions of apps all have different versions of dependencies (quite likely IMO as they'll have the freedom to), there won't be any sharing.


Or even different locations (unless they're hard links, or probably COW files). If it's not the same block on disk, it's going to be duplicated in the page cache, whether the contents are the same or not.


If the soname is the same between the duplicate copies and one copy is in $LD_LIBRARY_PATH, the location shouldn't matter.


Do they mean speed of installing updates (compared to an RPM-based OS)?


This is great, especially for atomic update and rollback of the OS. I remember a particularly painful instance of an OS upgrade. I ran the yum update command in the login shell and forgot to do it in a screen session. The login shell got killed after a period of inactivity, in the middle of the OS update. Afterward the OS was beyond repair; I couldn't roll back or move forward. I had to reinstall.

I wish something like Silverblue had existed back then.


I'm a Fedora user and I just gave Silverblue a try. The idea itself is great, but in its current state it's basically unusable for me.

A lot of applications I use are command-line based and are simply not available via Flatpak. You have to install these via rpm-ostree, but that requires a reboot every time you install anything.

Moreover, many GUI applications that are available in the Fedora repos are simply not packaged as Flatpaks, and either require rpm-ostree and a subsequent reboot or adding a third-party repository like Flathub. I really don't want to give up Fedora's mostly excellent repos to rely on some badly packaged, possibly malicious container.

After not being able to find my preferred media player, mpv, I settled for VLC from Flathub. It installed just fine, but video playback was completely broken; VLC installed via rpm-ostree worked.

I also don't understand how you are supposed to install patent-encumbered codecs for Firefox. Usually this is solved by adding the RPM Fusion repos, but with Firefox being installed via a Flatpak from the Fedora repos, this obviously does not work.

I'll probably check this out again in ~2 years and see if it's any better.


For development workflows, command-line applications, and such, I think you're supposed to go the container route via Toolbox. So you will have dnf etc.
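
Roughly like this (a sketch; htop is just an example package):

    # create a mutable Fedora container and get a shell in it
    $ toolbox create
    $ toolbox enter
    # inside the toolbox, dnf works as usual; the host image stays untouched
    $ sudo dnf install htop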


As a Linux user from the 90s, I welcome this change. RPM Hell and its Debian equivalent are real and painful things. When disk space was at a premium, system-wide dynamic linking made sense. Today, it absolutely does not. rpm-ostree is a bit ugly. Snap has the right idea of doing both system services and apps. Fedora should do the same.


>system dynamic linking made sense

I too look forward to having to manually update security patches for each binary in the system.


This is what Nix gets right. Even if static linking is used, if some dependency is updated, all packages that have that dependency in their transitive closure get recompiled.


FreeBSD's pkg does that, but packages are recompiled on the package server and you just download the updated binaries built against those releases. It is very fast.


Sounds a bit inconvenient if something low level gets changed, no? If libc changes you’re in for a day of builds…


Who says you need to compile anything? Nix allows you to download the binaries from its "build cache" if you so choose; the same is true for Guix.


But then, you’re downloading GB of stuff, just like windows update.


The brave new OSTree/Flatpak world needs build systems that know how to do security updates. There's a lot of work in this area in the Dockerverse; maybe it will cross over.


What happens is that half of your security updates never happen, because they depend on individual app providers who have no skin in the game. This is unfixable unless insecure apps aren't installable.


Before Silverblue, Red Hat and Fedora maintained custom build scripts for all packages that apply patches and security updates.

After Silverblue, when they run in Flatpaks, they can still maintain build scripts that achieve the same thing.

The distribution itself can even maintain a common base image for all flatpaks in the official repos, retaining all of the code sharing of existing systems, but with the benefit of a more robust and modular solution when they need to make exceptions. End users will also be able to more reliably use applications that are not supported by the distribution proper.


Would you rather each update of said binary be dependent on the author or some volunteer? That's how you get Debian stable... No thanks.


On the author of the single vulnerable library instead of the hundreds of authors of projects making use of it, yes please.

The fact that things like flatpak, snap, nix all explicitly try to address this problem with platforms/base layers/grafting etc. is very telling. They are all a trade off of your security vs. the dev's convenience, which might be necessary to succeed.


I love Debian stable for this. And then we get MX, Deepin, Ubuntu and Mint...


I prefer the quality control that something like Debian stable provides; developers aren't always interested in packaging their application the right way, or are just ignorant of how to package it for one of many distros, each with its own package format and tools.


While I can appreciate the security advantages of snap packages, I can't help but resent the fact that the output of the mount command is now polluted with dozens of lines unrelated to mounted disks.


Thank you, it's nice to know that I'm not the only one who thinks this. You could use an alias to hide this:

    alias df='df -x squashfs'

Given that squashfs is a read-only filesystem, I don't know why this isn't done by default. No one needs to worry about how much free space is left on an ro volume.

I also go a few steps further and hide udev (-x devtmpfs) and tmpfs (-x tmpfs) mounts as well.
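
Combined into a single alias, that's:

    alias df='df -x squashfs -x devtmpfs -x tmpfs'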


Agreed on that front. That's really my biggest issue with snaps.


Try findmnt(8), like this (the "no" prefix excludes the listed filesystem types):

    findmnt --df --types nosquashfs


The author takes a long time to explain what problem this is going to solve. Only at the bottom of the page did I get a vague idea.


>“Team Silverblue” or “Silverblue” in short doesn’t have any hidden meaning.

Don't "Bill Revues", "Evil Rubles", "Rebels I Luv", "Urb Level I", "I'll Sue Verb", "I Blur Elves", "Be Evil Slur", and "I Serve Bull" qualify as hidden meanings?

(Not to mention "I Beaver's Mullet", "Brutalism Levee", "Album Televiser", "Ever Liable Smut", "Evil Slum Beater", "Melt Bra, Sue Evil", "Be Real Evil Smut", "Evilest Bar Mule", "Leave Stumblier", or "Blames True Evil"...)


Well, they're not hidden anymore.


Also "Must be real evil"


Also look at openSUSE MicroOS, which provides the same core idea (transactional root fs), but with some key advantages like not using rpm-ostree and instead using plain RPMs.


How is that fixing the issue of incompatible configuration changes? This is typically the reason why I see boot or start problems, i.e. I have made some changes to a configuration, then the format or some option changed with a package upgrade, and suddenly I can't boot into the GUI anymore. In contrast, I can't remember when I updated a system and something stopped working because two libraries were incompatible. To me this is really solving a non-issue.


The contents of /etc are rolled back when you boot into the previous deployment.


How do they do things like security updates (e.g. OpenSSL)?

I mean, if the system is immutable, do I have to download and install a completely new image? How often do such updates arrive?

And what does immutable even mean in practice? Do I have to start from a CD image or some special boot mode every time I want to install system updates?


It's based on a piece of technology called "ostree". Fetching updates means downloading new objects from a remote object store, setting up a hardlink farm in a special location on disk (somewhere like /ostree/deploy), and during early boot, it will switchroot into there. So you can have multiple "filesystem roots" (directories on the same rootfs), and the mechanism can atomically swap between them without wasting much disk space (anything shared between them is shared, since it's hardlinks).

The details can be found in the ostree documentation: https://ostree.readthedocs.io/en/latest/manual/atomic-upgrad...
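
You can poke at this yourself (a sketch; what sits under /ostree/deploy varies by distro):

    # list the deployments ostree manages
    $ ostree admin status
    # one subdirectory per OS installation ("osname")
    $ ls /ostree/deploy
    # a hard-link count > 1 shows a file shared between deployments and the object store
    $ stat -c '%h' /usr/bin/bash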


What's to stop malware from modifying the hardlinks and injecting dodgy processes during the next reboot?


The immutability of the OS is not, alone, meant as an in-depth security measure. Don't run malware as root. If you want a fully trusted boot system, you'll have to use features like Secure Boot to verify that the BIOS and bootloader have not been tampered with.

ostree supports signing commits and trees with GPG signatures, and like git, all objects are content-addressed by SHA256 hash, so it is possible to verify that the entire root tree and all objects within it have been signed by some trusted party.
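
For instance, a remote can be added with the signer's key imported, so pulls are GPG-verified (a sketch; the remote name, URL, and key path are placeholders):

    # import the signing key; commits pulled from this remote are then verified against it
    $ sudo ostree remote add --gpg-import=/path/to/signer.gpg myremote https://example.com/repo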


> “Team Silverblue” or “Silverblue” in short doesn’t have any hidden meaning. It was chosen after roughly two months when the project, previously known as Atomic Workstation was rebranded. There were over 150 words or word combinations reviewed in the process. In the end Silverblue was chosen because it had an available domain as well as the social network accounts.

It personally made me think of "Silverlight".


Oooh, this is poor timing for me.

I'm about to get a new laptop for work, I usually use Fedora. Should I gamble on using SilverBlue? I'll have to think long about this one.


IMO, no. I run Silverblue on a machine I can afford to have not work at any given time. It's been rough on occasion, especially around stuff like graphics drivers. If you don't have a disposable machine I think this stack is definitely too early in development.


Many thanks for that.


The concept is interesting, but a read-only rootfs is stupid, really. It's a kind of lock-in.

Of course, read-only is great for security, but if something happens to a critical system component like the bootloader, I prefer to be able to patch/fix it myself rather than wait days/weeks for the distro makers.

Clear Linux uses a similar concept, but it allows write access and handles the whole fs tree and bundle dependencies server-side.


If you know how, there's an escape hatch.

There are basically multiple filesystem trees under the hood (shared via hard links to avoid duplicating file data), and at boot time you'll get one of them. These are known as "OSTree deployments"; they're found in /ostree/deploy, and they're actually mutable - it's just the bind mounts into that location that are mounted RO.

Anything in the bootloader configuration is not part of the deployment (from what I remember), and so it's mutable.

See the docs here: https://ostree.readthedocs.io/en/latest/manual/deployment/


How is this different from traditional embedded Linux system? Most firmwares have a read-only rootfs.


Applying those tools to desktop systems, where users make decisions about what's on the system.


Flatpaking all the things? I'm not sure why there is this push for Linux to have the "download and double-click" install experience of Windows/Mac. Convenient to install, sure, but as a user it's a nightmare to maintain/update.

All people on Linux really need is an xdg-open standard for opening a package manager / running an install command.


They are updated and maintained automatically, even if your distro isn't.


What do you mean by 'maintained' automatically? Many Flatpaks use their own custom-compiled dependencies that are outdated. Take a frequently used dependency that decodes untrusted data: ffmpeg. Many Flatpaks on Flathub use outdated ffmpeg versions. Some examples:

- VLC ships with a slightly older version of ffmpeg (4.1.3) with two known CVEs:

https://github.com/flathub/org.videolan.VLC/blob/f1b27c13b13...

- MakeMKV uses an outdated ffmpeg (4.1.0), which has several known CVEs:

https://github.com/flathub/com.makemkv.MakeMKV/blob/3c44c8bc...

- Openshot uses an outdated ffmpeg (4.0.3) which has several known CVEs:

https://github.com/flathub/org.openshot.OpenShot/blob/ec2077...

This is what you get when every application ships custom dependencies, rather than having a consistent package set.

(I like the idea of Flatpak, but I think it hasn't found its optimum yet in terms of dependency management.)


Blindly updating ffmpeg without the app being tested for it is a recipe for disaster -- ffmpeg has made API breaks in the past, and that meant that when an ffmpeg system update was required, all projects depending on it would need to upgrade to the new API.

So often, a distribution would be held back on an old ffmpeg (perhaps patched with some of the CVE fixes by a distro maintainer who might not be familiar with the codebase) to isolate the churn of upstream.

flatpak lets app maintainers update at their leisure, which actually gets them on a faster update cycle.


Aren't flatpaks sandboxed? Does that help mitigate the risk somewhat?


Theoretically yes (though you are only one kernel vulnerability away from elevated access). However, a lot of Flatpaks take blanket access to the home directory or host filesystem:

https://github.com/search?q=org%3Aflathub+%22--filesystem%3D... https://github.com/search?q=org%3Aflathub+%22--filesystem%3D...

So, many applications use it as a distribution mechanism and not so much for sandboxing. Of course, this is bound to get better over time when applications are modified to support sandboxing better and can use portals.

IMO you need both: isolation through e.g. sandboxing, and timely security updates of applications and all their dependencies. Flatpak currently provides the former for some applications, and the latter is completely dependent on the maintainer of the Flatpak.
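
You can audit and tighten this per application (using the VLC ID from the examples above):

    # show the static permissions an app was built with
    $ flatpak info --show-permissions org.videolan.VLC
    # revoke blanket home-directory access for that app, per-user
    $ flatpak override --user --nofilesystem=home org.videolan.VLC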


Right up until they're not because they pegged the version of some library.

If things were so easily automatically updated and maintained we wouldn't need flatpak.

One benefit is that if some software in the chain is blocking updates, the others can still update.

This may actually improve overall security.

I think I just argued myself out of hating flatpak. :/

Which is cool, because that OSTree switching thing sounds like btrfs snapshot hopping on steroids.


Are flatpaks not all sandboxed? I thought the concept of Flatpak and Snap was that they offered sandboxing in a way that was never implemented for normal repo packages.


Flatpak uses kernel namespaces (like docker) to run software with a bundled set of libraries. From their FAQ:

> Flatpak mostly deployed as a convenient library bundling technology early on, with the sandboxing or containerization being phased in over time for most applications.

I don't really know if sandboxing is worth it for me. Running everything inside Docker-like containers sounds like an absolute nightmare when it comes to troubleshooting. You might think logs and things would be well defined and put in the right place for the OS to pick up, but if things were so well behaved we wouldn't feel the need for sandboxing now, would we?


They are also adding per-application isolation of settings:

https://blogs.gnome.org/mclasen/2019/07/12/settings-in-a-san...

Flatpak is one piece of a broader design to secure Linux workstations. It is also intended to work in conjunction with Wayland and the in-development Pipewire. These lock down video and audio respectively, so that shared resources can't be misused by applications.


> we wouldn't feel the need for sandboxing now, would we?

Applications have vulnerabilities. Sandboxes help as an additional layer of security for trusted applications.

Of course, if applications are trusted and under control, a simpler mechanism like OpenBSD's pledge/unveil may be enough.


I’ve been running many applications as flatpaks for over a year without issue. Troubleshooting is not too bad either imo.


Flatpak should be OK; I think when it's run inside inaccessible containers it won't be. Are the files sandboxed off from the user when running in Flatpak?


No, these are bind mounted. Most Linux programs have a standard configuration directory. The application files that don't change would probably be sandboxed so that they can be easily upgraded.


Not trying to be sarcastic, but opinions like this make me sure that desktop Linux won't fly.

> I'm not sure why there is this push for Linux to have the "download and double-click" install experience of Windows/Mac.


There isn't a desktop anymore; if you're chasing the desktop market, we might as well be trying to make Linux for the minicomputer.


I meant consumer-targeted computers (which includes laptops).


That's a meme propagated by the news, but I'm pretty sure the desktop is still a thing.


...where?

No joke. I seriously don't see desktops running rich applications around anywhere, except the mini-computer / workstation use case.

Generally people are running a glorified thin terminal with a browser or putty connection to a dosbox app. People who actually do things on their own computers generally run laptops now. The exceptions are people with demanding workloads, and they run workstations that can handle it -- more like the mini-computer than the traditional office desktop.

There are public computer terminals in libraries and such, but the only reasons these are not laptops are theft prevention and the need for a large screen, keyboard and mouse.


Laptops run desktop operating systems, and that's what pcr910303 was talking about.

Plus "putty connection to a dosbox app"? I have never heard of anyone doing that, and don't understand why they would. Or maybe you don't mean "dosbox" [1], but "console"/"terminal"?

[1] https://www.dosbox.com


Laptops aren't computers any more; they are thin clients connected to a mainframe somewhere. That we call them laptops and call it the cloud doesn't change the use case.


I think you are almost completely wrong here. I have hundreds of students, most of whom have a laptop running a desktop OS. Practically all of them run normal desktop apps.


I mean dosbox. Look behind the counter next time you're with a human bank teller or checking in at the airport. You'll find a screen running telnet connected to an instance of dosbox running on some central server.


Bank tellers make up an absolutely tiny proportion of desktop OS users, most of whom run normal desktop apps. I have seen tellers connect to terminal apps running on a server, though, so I know what you mean.


Walk into any random company and there will be many desktops. Sure, some applications are web applications, but they will typically also use Microsoft Office and a smattering of more niche applications. We happen to live across from an office tower. People sit and work behind desktops.


Interesting. The offices in the part of the world where I live generally consist of flat surfaces where employees place their company-provided laptops.

The exception is things like receptionists, or other areas where multiple employees share a common terminal. But as mentioned before, those computers are basically used as stationary, large-screen browsers or thin clients for cloud applications. Email? Excel? It's been a decade since I've seen people doing that on desktops in office environments.


Even electrical/mechanical engineering workstations are primarily laptops now. There are some desktops, but the majority are higher-spec laptops.


Obviously laptops count as desktops for the purpose of this discussion, since we are talking about desktop operating systems, which is what laptops run.


This sub thread is about the desktop form factor.


I have not seen a desktop in a corporate environment in over a decade now.


So basically we're going back to the DOS model for operating systems? Sounds good to me.


Oh no. The benefit of Linux is being able to build your own setup (server, desktop). Now with this "solution" the user increasingly has a closed system where every change requires a lot of unnecessary steps to install other software.

I agree that on servers the container runtime makes a lot of sense, but not on desktops, where changes happen every day.


You'll always be able to roll your own. What this is about is like running a live distro w/ "persistence", except from your own hard drive instead of external media. With comparable benefits and drawbacks, I assume - in fact some of the drawbacks of live distros could be avoided, since you could have an "initial setup" (adding users, hardware detection, basic config etc.) the results of which are persisted.


Well, even Tails allows installing some packages when you run it from USB. "You'll always be able to roll your own" - of course I can also build my own Linux with LFS (Linux From Scratch), but how many people do that? But you are right: whether to use Silverblue or not is something every person can decide on their own.


I agree, and I wish others could see this for what it is - a push to make everything so overcomplicated and repository-locked in the name of security that you need endless maintenance and a support contract to run even basic software on your PC, thereby taking F/OSS ad absurdum. When in reality we haven't seen significant end-user F/OSS in almost a decade.


What evidence do you have to suggest this extremely uncharitable interpretation of the Fedora project's aims?


What is the Fedora project's aim with Silverblue?


On workstations and laptops, no, but I work in enterprise and a _lot_ of Linux appliances out in the wild could benefit from this.


Well, the question is whether enterprise software vendors like Citrix, IBM, Dell, HP, and so on will port their software to this new package format. Even today it's difficult to run some SW/HW tools on "Linux," as the vendors support only a small number of Linux distributions at specific versions.


It is the reason I'm slowly moving away from Linux. I've learned Linux; for years I've invested time and money learning everything I could. And it was fun. I even built LFS many times. I know how NOT to break it and how to fix it. And as we have seen with GNOME 3, systemd, Wayland, etc., RH will deliver, and every major distribution will eventually adopt immutable directories and statically linked applications. So my conclusion, after all these years learning and having fun with Linux, is that it was a waste of time. I understand the propaganda: it's good for everyone (who doesn't like dealing with Linux, the OS). And I understand the real reason: distribution developers don't like the tedious work of compiling, linking, and packaging the same software over and over again. But I have the feeling that I wasted my time. If it's to use an immutable, bloated, reboot-all-the-time OS, I'd just use any other OS.

As I posted here: https://news.ycombinator.com/item?id=20425615



