This LWN article makes the matter supremely confusing. The linked mailing list post is way better and clearer and it would be good to get this submission changed to it:
> … the LibreOffice RPMS have recently been orphaned …
> … will contribute some fixes upstream to ensure LibreOffice works better as a Flatpak, which we expect to be the way that most people consume LibreOffice in the long term.
> Any community member is of course free to take over maintenance, both for the RPMS in Fedora and the Fedora LibreOffice Flatpak, but be aware that this is a sizable block of packages and dependencies and a significant amount of work to keep up with.
This is probably how things will be moving. It doesn't make sense for every single distro to maintain customized versions of user level applications now that we have one package that works everywhere.
Flatpak is a non-starter for me. The runtimes required by my office would mean 2 KDE versions and 2 Gnome versions installed as flatpak runtimes in addition to the KDE running on the host, which quintuples the security space I need to monitor.
Sweeping package management problems under a rug doesn't actually make them go away.
No, I trust my distribution to do that for me, and more importantly to update libraries as needed. With Snap and Flatpak I have to trust every single upstream to keep track; if there is a problem, I'm vulnerable until everyone updates.
I'm very interested in some sort of daemon that regularly scans for vulnerable binaries/libraries and produces desktop notifications about this, and/or hosts a web interface to review issues. I do use clamav for malicious files, but bug advisories/vulnerabilities are another area I wish I could make convenient to monitor for personal computing. I've seen things like Splunk reports in enterprise settings, but not for personal 'digital hygiene'.
Anybody have a list of open source solutions here?
Not all. But I'd wager it's not uncommon for a lot of Linux people to periodically check their most used software with the largest attack surfaces: browsers, document viewers/editors, mail clients, decompressors, etc.
Just being sandboxed though isn't a help if you actively want the software to process sensitive data! If you want to read a private document in LibreOffice, you have to be sure your copy of LibreOffice and every library it calls is trustworthy and reasonably secure.
I dunno, if it's sandboxed and doesn't have network access then the only attacks I can think of are either really indirect (embedding an attack in saved files in hopes of hitting another user) or really targeted (altering a specific document or phrase when it's read). Assuming the sandbox works and includes blocking of network access, what attack do you see a word processor performing even when handed sensitive data?
Suppose your word processor is allowed to write to arbitrary files in your home directory. It writes a line into your .profile that curls your sensitive data to its server.
This has a level of indirection, but that just delays things until the next login. With a little more work there are infinite other places you could inject that might run sooner.
> Suppose your word processor is allowed to write to arbitrary files in your home directory. It writes a line into your .profile that curls your sensitive data to its server.
Yeah, that's like the poster child of sandboxing; I am very much supposing that my word processor is not allowed to touch arbitrary files.
Er, yes? I am ignoring the supply chain, because I'm commenting on the sandboxing, which is separate from who's publishing the software. (The only connection I see is that whoever publishes the flatpak is responsible for setting the default sandboxing config, which is important but I don't know how else you'd do it. But then RPMs aren't sandboxed at all, so what does it matter who signs it? We're talking about sandboxing here.)
I don't care about sandboxing, though; the valuable data is what the apps themselves are handling, so it doesn't help.
But at any rate, the multiple runtimes with multiple upstreams are why it's a non-starter from a security PoV; the uselessness of sandboxing is just icing.
I literally don't care about sandboxing. The data the apps themselves handle is the only thing valuable; protecting the rest of the system is pointless since I can reimage essentially for free at any point.
If malicious software takes over my browser it can steal my identity, wreck my finances, and ruin all my business and personal relationships. The fact that it can't add a printer is interesting but not really at the same level of concern.
As far as I know, Flatpak offers a fair amount of control over sandboxing. It's just that it isn't particularly useful for something like a document processing application that you usually want to be able to use on files in various parts of your system. The alternative might be giving it access to one particular directory, and then copying files there when you want to work on them. But that's already pretty cumbersome, something only the paranoid would bother with.
Realistically, I know that I do not have the skills to evaluate complicated applications and their complicated dependencies for security characteristics, so I am 100% reliant on packagers, maintainers, etc. for my security anyway. I'd rather just donate to the people publishing and maintaining the packages and hope that they are doing the job well, than fruitlessly attempt to fuss over it myself.
> It's just that it isn't particularly useful for something like a document processing application that you usually want to be able to use on files in various parts of your system
The good news is there is a solution to this: the trusted system shows a file picker, and then grants the sandboxed application access to the chosen file.
I didn't realize this existed, it seems a lot like what Apple is doing for the photo gallery in recent iOS versions. That's a nice extra safety feature.
I suppose additional security features inspired by mobile systems couldn't hurt either. Like a pop-up whenever something accesses the clipboard.
That's roughly what our workflow does (DFARS requires it, or something like it, for CUI), and this story annoys me because the entire selling point of RHEL (which is what we use) is that you don't have to audit a bunch of random upstreams, because they do it for you. It's why their going all-in on Flatpak was such a surprise to a lot of us.
That's fair. Maybe some large organization will be willing to sponsor their maintenance of RPMs, or of the Flatpaks directly. After spending hours trying to package even simple programs in RPM and DEB, I am very sympathetic to the idea that one cross-OS packaging format is enough. Why shouldn't there be "enterprise audited Flatpak" like anything else?
Sandboxing is also an orthogonal question from packaging. OpenBSD sandboxes applications using pledge; it doesn't require maintaining multiple parallel runtimes to do it.
Do you use Flatpaks? Each application brings with it a desktop runtime (Gnome or KDE, or sometimes just the freedesktop.org runtime that underlies Gnome and KDE), and each of those desktop runtimes is versioned. The problem is these don't get updated on Flathub at the same time, so the vector art editor brings one version of Gnome and the IDE brings a different version of Gnome.
Now, it's possible to build my own flatpaks and keep everything updated, but if I'm doing that I might as well just build the actual software and use the distro's package management system.
> It doesn't make sense for every single distro to maintain customized versions of user level applications now that we have one package that works everywhere.
It does make sense for at least one distro to be offering this, as there are numerous reasons to favor the traditional Linux distro way of doing things. Flatpak is better for a lot of use cases, but it's missing a lot by not having the centralized system of checks and balances a Linux distro provides.
Fedora Flatpak brings back a lot of the advantages of traditional Linux distributions, but unfortunately Fedora/RHEL likely won't be maintaining a Fedora Flatpak version of LibreOffice.
Because there are pros and cons to each type of packaging, native distro packages and Flatpak will co-exist. They already do on my machine.
Distribution packages:
+ necessary for base system
+ small memory requirement
+ small package size
+ all checked by distro
+ work well together
+ one fix fixes it for all
o a lot of work for downstream maintainers
- bugs hard to figure out for upstream
- problematic with closed-source
Flatpak:
+ programmer packages his/her software once and only once
+ supports all Linux distributions
+ uses selected dependencies
+ cgroups and namespaces are in place for sandboxing
+ autonomous offline package handling, e.g. “storeable on a thumbdrive”
- huge maintenance burden, especially security, on programmer
- higher memory usage
- big package size
Usually if I see something entirely new (e.g. Marker some years ago) or something using Qt libraries (e.g. Zeal) instead of my native Gtk-based environment, it is a candidate for Flatpak. Marker is now in Arch's native repos, so I use it from there now, while I kept OpenRA as a Flatpak because I usually need the one provided by upstream.
I think Flatpak is ideal for new stuff and especially closed-source software. Programmers can no longer repeat the error of supporting only some specific outdated distributions. It is a miracle to me how anyone can think that packaging for a specific distro is their job. It is not; the job is documenting how to package the software and allowing redistribution. With Flatpak we let them package it themselves if desired, but just once and for all.
A weak point of Flatpak is that, facepalm, Canonical is doing its very own show with Snap. How often does Canonical need to fail, with Mir, Unity, Upstart and now Snap? The server is closed-source and it is against the community. Even if it were better, it is dead. And no, we don't need two competing formats… please just commit fixes to GNOME instead, e.g. type-ahead find?
And I don’t want to hype Flatpak:
GNOME Software could use less memory itself, we need an easy approach for payment (and recurring payment?), and it stores many small files on disk. And I wonder which security requirements Flathub imposes?
> It is a miracle to me how anyone can think that packaging for a specific distro is their job? It is not. The job is documenting how to package the software and allowing redistribution.
I think the mentality of authors who package their projects for a specific distro comes from their familiarity with Windows, where there is only one distribution (more or less) and packaging for it is naturally the responsibility of software authors. They mistakenly apply their Windows experience to the new task, probably entirely unaware of the faux pas.
Often, knowing that there are a myriad of Linux distros, they mistakenly assume that the community of Linux users expects them to support dozens of distros individually, testing on each one, and they become reasonably upset with that (mistaken) obligation. So they put their foot down and say "I support Ubuntu 13.04 specifically, but if you use another distro then you're on your own!!" Then they go on to complain about fragmentation and say that Linux users need to pick one distro and stick with it. Really, nobody expected them to support any specific Linux distro in the first place, but that is poorly communicated to otherwise experienced programmers who are new to Linux.
I also think it's developers familiar with Windows, who didn't want to start… with that.
I did wonder how even companies like Amazon came up with that approach, for example with their own MP3 downloader back then. So someone else wrote “clamz”. Even Valve was on that train with Steam, which first shipped only for Ubuntu, until Arch and others nudged them and asked them “to allow redistribution”. At least this shows that talking to each other helps :)
And now Valve uses itself Arch ^^
> A weak point of Flatpak is that, facepalm, Canonical is doing its very own show with Snap. How often does Canonical need to fail, with Mir, Unity, Upstart and now Snap? The server is closed-source and it is against the community. Even if it were better, it is dead. And no, we don't need two competing formats… please just commit fixes to GNOME instead, e.g. type-ahead find?
Hey now, Canonical provides the valuable service of researching how not to do something, so others can avoid those mistakes.
The biggest minus of containers for me, including Flatpak, is that even with xdg-desktop-portal workarounds they still fail to actually integrate into the desktop, and make-or-break features that depend on this simply don't work. And because it's a container, it's nigh impossible to debug and find a fix. In the end containers are even more fragile than depending on system libs and fighting future shock.
... the new correct API to use to work around containers and all the problems being containerized causes.
>Portals are the framework for securely accessing resources from outside an application sandbox. They provide a range of common features to applications, including: Determining network status, opening a file with a file chooser, opening URIs, taking screenshots and screencasts [...]
These are all things that being containerized messes up and that require workarounds. xdg-desktop-portal attempts to provide them, but most of the time it fails to do so fully.
I've tried several times to understand how Flatpak works and how to integrate a CMake build with Flatpak, and I'm honestly befuddled. Granted, a quick Google search does show the situation has improved a bit in the past year.
I imagine if you're trying to package a Zig or Clojure application it's going to be a lot of figuring things out on your own
AppImage is also troublesome. I just spent a good while debugging ours.
The TL;DR is that an AppImage amounts to a self-extracting archive, and you need an application where all the binaries and libraries have a relative rpath (via the loader's `$ORIGIN` token). So rather than loading /usr/lib/libz.so, it uses `$ORIGIN/../lib/libz.so`.
That part's not too bad, but what can cause a lot of trouble is that there's a fair amount of stuff that uses dlopen at runtime. So you might find that you think you packaged everything, but then the app loads some module from the host system, the API isn't compatible and it explodes in some confusing fashion.
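To make that runtime-loading trap concrete, here's a minimal Python sketch (ctypes wraps dlopen on Linux) of the kind of by-name loading that never appears in a binary's declared dependencies; the fallback chain of names is a made-up example:

```python
import ctypes
import ctypes.util


def load_first(names):
    """Try each candidate library name in order, mimicking the
    dlopen() fallback chains many applications use at runtime."""
    for name in names:
        # find_library consults the *host* system's linker cache,
        # which is exactly how a bundled app can silently pick up
        # an incompatible host library instead of a vendored one.
        path = ctypes.util.find_library(name)
        if path is None:
            continue
        try:
            return ctypes.CDLL(path)
        except OSError:
            continue
    return None


# libm is resolved by name at runtime: nothing in this program's
# link-time metadata records the dependency, so a packaging tool
# that scans declared dependencies would never see it.
libm = load_first(["m"])
```

Nothing short of tracing every such call site tells you the full set of libraries actually loaded, which is why the "explodes in some confusing fashion" failure mode is so common.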
So yeah, Linux app distribution is a pain no matter what it seems.
And AppImage distribution has the same fundamental security problem as static binaries, making it difficult or impossible to replace downstream dependencies. When a dynamic library needs a security update (e.g. heartbleed in libssl), the shared library can be replaced system-wide. When a static executable or AppImage needs a newer version of a library, you need to update the program entirely, and there is no way to do so across all programs system-wide.
And conversely, when a dependency introduces a critical vulnerability, the AppImage or static build is unaffected.
But on a system where everything is updated at the same time through a package manager, I don't see that there's a material difference, unless you as the upstream app developer decide not to build against the latest security updates of your dependencies; but if that's the case, that's a completely separate issue from dynamic vs. static linkage.
> And conversely, when a dependency introduces a critical vulnerability, the AppImage or static build is unaffected
What's more common in practice ? Widely used libraries introducing critical vulnerabilities in 2023. Or old vulnerabilities being discovered/exploited, and patched?
> unless you as the upstream app developer decide not to build against the latest security updates of your dependencies; but if that's the case, that's a completely separate issue from dynamic vs. static linkage.
That's precisely the problem isn't it? What if "You the upstream app developer" is on a long vacation, or abandoned the project? Multiply this by N where N is the number of different app developers.
Compare with "you as the OS admin". If you're using your system, you update that vulnerable dynamic library, and you're done.
If users of other machines are on vacation or abandoned their machine, that's not your problem. Your system is secure.
So I feel parent's point had everything to do with dynamic vs. static linkage.
If a new version of a dependency introduces a vulnerability, then dynamic libraries allow you as the sysadmin to stick with the secure version.
For package managers, difference is how often the decision to update a library must be made. Dynamic libraries mean that the package maintainer of libFOO must be aware of security issues in libFOO, and update accordingly. Static libraries or AppImages mean that the package maintainer of every user of liBFOO must be aware of security updates in libFOO, which is a much higher burden.
You only need one hole. A newly introduced vulnerability compromises the system as soon as one package updates. A fixed dependency only works when every single program is updated.
Can't you just add an rpath to your main executable (which cmake does by default for release builds that aren't installed, fwiw) or use a launcher that sets LD_LIBRARY_PATH?
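As a sketch of the launcher approach, assuming a bundle layout where the real binary and a lib/ directory sit next to the launcher (`myapp` and the layout here are hypothetical, not from any real AppImage):

```python
import os


def build_env(bundle_root, base_env=None):
    """Return an environment with the bundle's lib/ directory
    prepended to LD_LIBRARY_PATH, so the dynamic loader searches
    the vendored libraries before the host's."""
    env = dict(os.environ if base_env is None else base_env)
    libdir = os.path.join(bundle_root, "lib")
    prev = env.get("LD_LIBRARY_PATH")
    env["LD_LIBRARY_PATH"] = libdir if not prev else libdir + os.pathsep + prev
    return env


def launch(bundle_root, argv):
    """Replace this process with the bundled binary ('myapp' is a
    hypothetical name) under the adjusted environment."""
    exe = os.path.join(bundle_root, "myapp")
    os.execve(exe, [exe, *argv], build_env(bundle_root))
```

Note this covers libraries resolved through the loader's search path, but not paths hard-coded into dlopen calls.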
In general a program that calls `dlopen()` must either ship with the libraries being opened (in the case of OpenSSL, which IIRC only does this with libcrypto or one of its targets in some cases?) or rely on the loader's default search paths to resolve them.
But all that said if you don't want your app to explode on different distros you always need to vendor your dependencies, including transitive dependencies. That's just the sad reality of shipping programs that are dynamically linked.
Yeah, that's kinda the problem. If your dependencies are complex enough you may have a tough time figuring out when you've packaged everything.
Surprises may be lurking anywhere. It might be loading files based on something read from a config file, or concatenating tokens, so you'd have a hard time knowing you got everything for sure without reading the source for every library your application loads, and every library your direct dependencies load.
And then everything may still work until 3 distro releases later something finally becomes binary incompatible.
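One way to at least sanity-check the link-time part is to walk the dependencies the dynamic linker reports. A rough sketch using ldd; note that this only sees declared (DT_NEEDED) dependencies, so anything pulled in later via dlopen is invisible to it, which is exactly the trap described above:

```python
import re
import subprocess


def linked_libs(path):
    """Map each shared library a binary declares to the file the
    dynamic linker resolved it to, as reported by ldd. A value of
    None marks a dependency ldd could not find."""
    out = subprocess.run(["ldd", path], capture_output=True,
                         text=True, check=True).stdout
    libs = {}
    for line in out.splitlines():
        # Typical line: "libz.so.1 => /lib/x86_64.../libz.so.1 (0x...)"
        m = re.match(r"\s*(\S+)\s*=>\s*(\S+)", line)
        if m:
            # "libfoo.so => not found" marks a missing dependency
            libs[m.group(1)] = None if m.group(2) == "not" else m.group(2)
    return libs
```

Running this against a bundled binary, anything that resolves to a host path outside your bundle is a candidate you forgot to vendor; it just can't tell you about the dlopen'd modules.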
To be honest, developers should be doing that work anyway, to figure out which dependencies they really need and which ones can be replaced by lighter ones that don't pull in gigabytes of crap.
And also the situation isn't really different on Windows except that you have to do it from the start because there is no illusion of the base system providing a bunch of random libraries for you.
Red Hat will almost certainly work with upstream to "fix the problem". Red Hat engineering's mantra is "upstream first": the financial cost of maintaining a difference from upstream is a deterrent, which encourages getting fixes merged upstream rather than carrying them downstream.
"It doesn't make sense for every single distro to maintain customized versions of user level applications"
Why does it need to be a customized package? Software is software, and every open-source Linux application can be compiled and placed in /usr/local or whatever other non-system-level directory.
Why is it so hard to compile the latest supported version and package it in a distro-agnostic way?
Well, your excerpts aren't really clearing up all the confusion IMO, but even the original message is somewhat confusing:
>> … will contribute some fixes upstream to ensure LibreOffice works better as a Flatpak, which we expect to be the way that most people consume LibreOffice in the long term.
Here you left out the part that they'll do that only until the older RHEL releases that still have LibreOffice support are EOL:
> We will continue to maintain LibreOffice in currently supported versions of RHEL (RHEL 7, 8 and 9)
> with needed CVEs and similar for the lifetime of those releases (as published on the Red Hat
> website). As part of that, the engineers doing that work will contribute some fixes upstream to
> ensure LibreOffice works better as a Flatpak, which we expect to be the way that most people
> consume LibreOffice in the long term.
I.e., they don't plan to fix anything besides issues in the (only older?) release branches supported by RHEL <= 9, until those are EOL too.
And while they first hint that only the RPM packages are affected and implicitly suggest that distribution via Flatpak is the way forward (yuck), they then contradict that by saying they'd find it OK if a volunteer picks up LibreOffice support in Fedora again, for both RPM and Flatpak. I.e., reading that, it seems they don't plan to actually support the Flatpak distribution either? Or is this a Fedora-specific Flatpak repo?
No, they plan to maintain the RPMs for RHEL 7/8/9 and to contribute fixes for Flatpak, which is how they expect people to consume LO (without Red Hat support) in RHEL 10 and perhaps in future Fedora releases.
I for one am a periodic LO user (for both work stuff and personal sheets like tax returns) on Fedora and will switch next week to Flatpak to start reporting bugs. I already use Flatpak for OBS, Ferdium and VSCode, with no issues except that I need to use ssh access to open VSCode projects on localhost (because of some known sandboxing issues).
> support for both, RPM and Flatpak, up again in Fedora
Fedora has a Flatpak repository that is separate from Flathub. LibreOffice developers post their builds to Flathub, and those are usable from Fedora without involvement from Fedora developers.
I think you may be misinterpreting it. They’re saying “we’re not maintaining LibreOffice RPMs beyond the current supported versions of RHEL, because we think Flatpak is the way forward, but we recognise that there are currently problems with it, so we will put some effort into fixing those, so that us no longer offering RPMs isn’t disastrous”.
https://lwn.net/ml/fedora-devel/20230601183054.12057.45907@m...
Key excerpts:
> … the LibreOffice RPMS have recently been orphaned …
> … will contribute some fixes upstream to ensure LibreOffice works better as a Flatpak, which we expect to be the way that most people consume LibreOffice in the long term.
> Any community member is of course free to take over maintenance, both for the RPMS in Fedora and the Fedora LibreOffice Flatpak, but be aware that this is a sizable block of packages and dependencies and a significant amount of work to keep up with.