Hacker News
Canonical’s Snap: The Good, the Bad, and the Ugly (thenewstack.io)
55 points by slyall on July 11, 2016 | 41 comments



The technical criticisms here are not really directed at Snap, but at the trade-offs inherent to the sandboxed-application packaging model in general.

Speaking as a day-to-day desktop Linux user, a working sandboxed-app model would be a BIG improvement over rpms and debs. I would LOVE to be able to install any version of any application, or multiple versions, independently of OS and library versions, without having to compile from source code, grapple with conflicting dependencies, etc.

Are there any F/LOSS alternatives to Snap that are as widely used and tested? If there aren't any, Snap is really our "least worst" alternative.


As a Linux user of many years, I very much like traditional package management. I think the effort should be put into reliability and hardware handling. I recently migrated from 14.04 LTS to 16.04. I am using a NAS drive. After my do-release-upgrade -d, the internet was no longer working because of a systemd circular-dependency problem. I had to learn how to create systemd configuration files to describe remote filesystem mounts.
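
For reference, a remote filesystem mount ends up as a .mount unit along these lines - server, export, and mount point below are just placeholders, and the file name has to match the Where= path:

  # /etc/systemd/system/mnt-nas.mount
  [Unit]
  Description=NAS share

  [Mount]
  What=nas.local:/export/share
  Where=/mnt/nas
  Type=nfs
  Options=_netdev

  [Install]
  WantedBy=multi-user.target

A systemctl daemon-reload followed by systemctl start mnt-nas.mount then brings it up.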

Click-to-focus on Chrome no longer worked (http://askubuntu.com/questions/760654/cick-to-focus-is-not-w...).

When my computer enters sleep mode, I can wake it by pressing Enter. The next time it enters sleep mode, I cannot wake it up anymore.

I have fixed the first two issues and worked around the third. I dream of Ubuntu becoming more widely used, but this kind of issue is a blocker.


I agree that upgrading should get easier from release to release, but I am not surprised by the bugs you encountered. On the plus side, the first 'really stable' Xenial point release is due soon [1], though I can't say that guarantees your bugs will be squashed even then - still, it is important to log them [2] to make sure they get some attention.

1: https://launchpad.net/ubuntu/+milestone/ubuntu-16.04.1
2: https://help.ubuntu.com/community/ReportingBugs


My two other recent issues were my SSH DSA key, which had to be regenerated as RSA, and a libnss3 update that broke Popcorn Time. The non-working wake-on-LAN is a many-years-old bug in my network driver where the fix was never accepted.


The concept of a sandboxed app is to run monolithic, non-Unix-philosophy, "everything and the kitchen sink" applications that are not integrated with the system.

In that way there's not going to be much FLOSS interest. No one in FLOSS-land wants to admin a box with seven different statically linked SSL libraries, all of different ages and with different exploitable vulnerabilities, all relying on the app vendor as a single point of failure and the only source of non-FLOSS binary packages. FLOSS devs could "improve" the sandbox system by having the OS provide one patched, up-to-date, secure SSL library for all applications to dynamically link against. Being FOSS, this is not a legal problem, and the existence of an OS vendor means someone is "in charge" of it and it'll just work for everyone. It doesn't work so well with commercial closed-source non-FOSS vendors. Oh well. Sucks to be them. I don't miss them.

Or, how does the "do one thing, do it well, and interoperate with all the other finely honed tools" FOSS philosophy match a sandboxed app architecture? It seems the opposite: the purpose of the sandbox is to create an NIH monolithic app. If I wanted to run Windows, I'd just run Windows, not emulate its architecture on a Unix-like system. It seems like a lot of work to turn Linux into Windows... why not just use Windows and not mess up Linux?

The pool of FLOSS sandbox devs would be restricted to FLOSS devs who want to shoot FLOSS in the foot to make life easier for non-FLOSS devs and harder for everyone else, and who disagree with core FLOSS beliefs about design and architecture. Other than that lack of motivation, no problem.


There is allegedly a standard, xdg-app, from freedesktop.org. Flatpak, mentioned in the article, is AFAIK the only existing implementation of the standard, and seems to be associated with Gnome and loosely associated with Fedora.


Xdg-app was renamed to Flatpak. It is a "standard" in the same way Snappy is: there's probably a document describing the technical workings. Flatpak (though I like it) is too new for anyone to call it a de facto standard.

For Flatpak, the scope is limited to just desktop apps. To break out of the sandbox it will provide 'portals', which work a bit like those on Mac OS X / Android / iOS: the goal is to limit permissions by default. The user interaction around portals should be great; you don't really notice the difference. Portal support will only come to GTK+ as of 3.22 (September 2016), see https://blogs.gnome.org/mclasen/2016/07/08/portals-using-gtk.... Until Qt/GTK+ support these portals, Flatpak does NOT fully sandbox, otherwise you'd have problems using the app. The sandboxing is done using standard kernel functionality (namespaces, etc).
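
To give an idea of the permission model: a flatpak-builder manifest whitelists what the app may touch via finish-args, and everything not listed stays behind the sandbox. A rough sketch (app id, runtime version and paths are made up, module list omitted):

  {
    "app-id": "org.example.App",
    "runtime": "org.gnome.Platform",
    "runtime-version": "3.20",
    "sdk": "org.gnome.Sdk",
    "command": "example-app",
    "finish-args": [
      "--socket=wayland",
      "--socket=x11",
      "--share=network",
      "--filesystem=xdg-documents"
    ]
  }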

Snappy requires AppArmor. Without that, no sandboxing. There's a promise to also support SELinux, though I'm not sure what timeframe they intend. Snappy on distributions without AppArmor (e.g. Fedora) will NOT sandbox the app! Snappy IIRC is not limited to just desktop apps, which IMO limits the sandbox options.
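
For comparison, the confinement level of a snap is declared in its snapcraft.yaml, and 'strict' is what engages those AppArmor/seccomp policies; a minimal sketch (names are made up):

  name: example-app
  version: "0.1"
  summary: Example app
  description: Example snap
  confinement: strict   # 'devmode' turns enforcement off while testing

  apps:
    example-app:
      command: example-app

  parts:
    example-app:
      plugin: nil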

AppImage can do sandboxing using other software (Firejail), which again relies on AppArmor. It seems the AppArmor profile comes with the app, which IMO defeats the security: it would be nice not to have to trust the app you just installed.

For all of the above: any effective sandboxing requires Wayland/Mir, not X11. I noticed someone said you might be able to make X11 secure by putting an app inside a security-filtering program like Xephyr/Xnest, but IMO that's a blue-sky idea (tell me once it works and one of the sandbox solutions ships fully integrated support).


Not blue sky at all. The reason any "app" (ugh) can read the keyboard is that they all share the root window (not related to the root account).

Set each up as having their own root window and the problem vanishes.

Some will complain that this solution screws up GPU graphics. But that's only the case because devs could not be assed to hash out a proper channel through X, and instead went straight for the hardware. Never mind that there is a patched Xephyr with OpenGL support out there.

BTW, another option is xpra.

https://xpra.org/
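
Both are easy to try, assuming Xephyr and xpra are installed (the app name is a placeholder):

  # nested X server: the app gets its own root window and can't snoop on :0
  Xephyr :2 -screen 1280x800 &
  DISPLAY=:2 some-untrusted-app

  # or with xpra, which also proxies the window back onto your desktop
  xpra start :100 --start-child=some-untrusted-app
  xpra attach :100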


For all its naming shenanigans, xdg-app is a Gnome development. And frankly, these days Fedora is the testbed for all things Gnome and Freedesktop. This is because the people involved are more often than not RH employees with direct access to RH, and thus Fedora, resources.


Not exactly, but close: https://nixos.org/nix/
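
For example, installs are per-user and every build lives under its own hash in /nix/store, so versions never conflict and upgrades can be rolled back:

  nix-env -iA nixpkgs.hello   # install into your user profile
  ls /nix/store | grep hello  # each version gets its own store path
  nix-env --rollback          # undo the last profile change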



AppFS ( http://appfs.rkeene.org/ ) allows you to do all of those things, except that it does not involve sandboxing in any way. Sandboxing is orthogonal to the issue of presenting the files on disk for the application. Something could do sandboxing on top of AppFS and gain all the deduplication and run-all-versions-all-the-time benefits of AppFS.


There was once something called "GoboLinux" that mounted static packages and used a nest of links to connect everything together. I like that approach. Shame it didn't progress.

I imagine something similar might come along using Docker, but really we just need FHS reform.


GoboLinux still exists.

What it does is put the output of a "make install" or similar into its own name/version branch in the directory tree, and then use symlinks to place those files within a traditional FHS layout for backwards compatibility.
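
Roughly like this (version numbers and the exact link directories vary between releases):

  /Programs/Foo/1.2/bin/foo         <- the real file from "make install"
  /Programs/Foo/1.2/lib/libfoo.so
  /System/Links/Executables/foo     -> /Programs/Foo/1.2/bin/foo
  /usr/bin/foo                      -> /System/Links/Executables/foo  (legacy FHS view)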

As with any small distro it has a problem with manpower, especially as certain parties involved with big-name distros are hell-bent on rebuilding everything around their vision of the Linux desktop.

I think the current maintainer is aiming at getting a new release out "soon", and was looking into providing a tool that could build an ad hoc "jail" for a program and its specific tree of dependencies.


Hmm, perhaps Docker, or whatever Flatpak uses for containers/isolation.


Flatpak uses cgroups managed by systemd, and GoboLinux does not use systemd (I suspect systemd would freak out majorly at how GoboLinux does things).


>Are there any F/LOSS alternatives to Snap that are as widely used and tested?

While it's not released yet, I'm thinking WebAssembly and browser APIs should eventually cover this use case - I expect browsers to expose more sandboxed versions of native APIs, and the browser will become the new OS for the desktop.

People will probably start experimenting and packaging their own browser builds with extra APIs that don't get released into web browsers right away - like Electron, except instead of running Node it would just expose more stuff to the browser JS/WASM.


> While it's not released yet, I'm thinking WebAssembly and browser APIs should eventually cover this use case - I expect browsers to expose more sandboxed versions of native APIs, and the browser will become the new OS for the desktop.

Yes, let's throw away all the great toolkits (Qt, Cocoa, etc.) that we have, which have developed into an extremely stable base over the past 20 years with look & feel consistency between applications, and replace them with some experimental browser tech and constantly reinvented web frameworks. </sarcasm>

On a more serious note: I think sandboxing applications built with existing toolkits is a problem that is far easier to solve than turning browsers into general-purpose operating systems with consistent UIs.

In fact, Snap, Flatpak, etc. already implement many of the necessary bits, and it's been done before on UNIX (sandboxed macOS applications). Of course, Linux has to switch to Wayland for GUI isolation, but that'll be true for any solution.


>Yes, let's throw away all the great toolkits (Qt, Cocoa, etc.) that we have, which have developed into an extremely stable base over the past 20 years with look & feel consistency between applications, and replace them with some experimental browser tech and constantly reinvented web frameworks. </sarcasm>

I don't think you understand how WASM works. Qt can be compiled to run on GLES 2.0 (WebGL), and WASM is just a sandboxed bytecode that maps closely to CPU instructions and the C memory model, meaning you could run Qt apps inside a WASM/browser sandbox; and given the small size of WASM binaries compared to asm.js, that's feasible even considering the size of Qt.

A browser with WASM is just a sandbox on top of native APIs and some chrome - it's essentially the app model for the desktop with low-friction distribution - you have back/forward, easy switching/kill, resource sandboxing, etc., just like you do on your phone.


> I don't think you understand how WASM works.

I do. And most of this you have already been able to do for a while without WebAssembly, either natively using e.g. NaCl, or compiled to JavaScript/asm.js (Emscripten). Running Qt in the browser is not some future thing, it is already possible.

The problem is that if you go that route, you lose a lot of platform integration. Moreover, you throw in another layer for what is practically a solved problem (the Linux kernel has all the necessary bits for sandboxing, which Chrome et al. also use for sandboxing).

> it's essentially the app model for the desktop with low-friction distribution

As someone who distributes a Qt application on Windows, Linux, and OS X, I don't need this. I just need to be able to distribute one app bundle that works on all Linux distributions. OS X and Windows are already a walk in the park. And by having a native app, I get proper integration now (e.g. full screen, Dock, and file association support on the Mac).

> just like you do on your phone.

My phone uses native apps, thank you.

(Though it is in the process of switching to LLVM bitcode.)


>Running Qt in the browser is not some future thing, it is already possible.

It's possible, it's just not feasible - NaCl is Chrome-only and has some limitations around distribution, the toolchain is very specific (a Google-provided port of Clang/LLVM), and asm.js/Emscripten sucks both in parse/compile time and in download size.

>Moreover, you throw in another layer for what is practically a solved problem (the Linux kernel has all the necessary bits for sandboxing, which Chrome et al. also use for sandboxing).

What you get is a cross-platform layer. You can then add platform-specific extensions and code around those as optional features. The only systems that let you do something similar right now are full-blown VMs like the JVM/.NET, and this is lower level, potentially allowing more languages to target it.

Like you said, the browser will just wrap the native interfaces to provide compatibility - low-friction, cross-platform app development/deployment.


> Of course, Linux has to switch to Wayland for GUI isolation

It took Keith Packard a few hours to fix this problem in Xorg.



This unfortunately reads like a hit piece with no information about Snap at all. It's as if the author did not even install Snap, which raises questions about the premise and purpose of the article. Where is the good, the bad, and the ugly?

For instance, branding Ubuntu initiatives like Snap or Mir as 'NIH' while leaving out Red Hat-supported ones like XDG, Flatpak, systemd, etc. is itself an act of politics and manipulation.

This leaves little room for informed discussion. Linux is heavily politicised with tons of vested interests and thousands of folks and projects sponsored by or in the employment of various players.

Maybe it's time to stop tiptoeing around the politics and let it out in the open, so there can be more informed discussion and less of the simplistic, PR-driven fantasy narratives like 'Fedora is independent but Ubuntu is not' that this article pushes.


This piece is only about politics, which is the usual drama: most players have an NIH attitude and we'll end up with three different standards. No one is helped except the news outlets, who can proceed to construct a nice dramatic 'he said, she said' story.

It would be more interesting to see an in-depth technical comparison of Linux app container implementations, so that an outsider can make an informed choice.


Politics is quite important to get right, and politics is different from NIH. The article notes two important things:

1) Canonical wanting to do big announcements. This causes problems if you want to work with communities. They'll try to line up support, but one person per distribution isn't the same as working with the entire community. So when the announcement implies that there's broad support, the communities feel like it is a big lie. That destroys the willingness to cooperate.

2) Canonical likes to keep control, e.g. the CLA. Canonical gets special rights nobody else will have. Companies don't like this at all; it changes working together into helping a competitor. It doesn't make business sense. Canonical tries to smooth this over by e.g. saying you keep the copyright, but that just ignores the actual issue. This CLA has been an issue for various Canonical projects for many years, and a "we might change this" I don't trust. Similarly, Canonical rolled this out while the code only supports the Canonical proprietary app store. Someone else has now made an open/free source (not sure which) store, but at the moment the experience pretty much assumes just one app store.

Those two things together make people refrain from spending time on it (it doesn't matter much whether they're paid or not).

Note: I'm biased and I like Flatpak as well as AppImage. I don't see the problem in having multiple standards. Canonical is IMO very good at delivering a great Ubuntu experience. They're (again IMO) terrible at doing cross-distribution work, for the reasons explained above.


I too am leaning toward Flatpak. I expect that it, like other products from Red Hat (systemd, PulseAudio), will eventually win out in the end.


Canonical wants to be in control because they are thinking about trademarks, etc. Also, they are a small player in the pool compared to certain other companies, and thus have to fight harder to be heard.

Btw, all you need to distribute snaps is a web server afaik.


I think it's more that there are 3 different factions.

The modernists (Red Hat, SUSE, Arch, ...)

Canonical (who are also modernists, but Canonical and the other modernists don't work together)

The "don't change anything" faction (Gentoo, Slackware, ...)

> It would be more interesting to see an in-depth technical comparison of Linux app container implementations, so that an outsider can make an informed choice.

If one takes Mir/Wayland as an example, the differences will probably be minuscule, so social factors will be way more important than technical ones.


I dunno about "don't change anything" with regards to Gentoo.

Frankly, both Gentoo and Slackware are in the end about letting the admin/user have the final say, even if this means blowing their virtual foot off with a virtual shotgun.

You can see this in how Gentoo provides all manner of USE flags that ebuilds take into account, and how Slackware has no dependency enforcement.

The impression I have is that Canonical and certain DE people got into a row over trademarks and UX.

Canonical wants to present a certain UX when people use their distro, while the DE people want a fixed experience for their DE (to the point of reshaping whole distros to their demands).

Those two goals clash.


NIH?


Not Invented Here


That didn't really give any details about what is good, bad or ugly about Snap.


This is again more political bullshit. I know exactly what is going to happen. We are again going to have 3 different ways to package containerized applications and someone is going to come along and create another `fpm` but this time for containerized applications. I'm going to go back to pretending that `deb` and `rpm` don't exist and my packaging format is `fpm`.


https://xkcd.com/927/ "How Standards Proliferate" :P


"Snap is the equivalent of static linking". So why not statically link? Encouraging software to work with static linking would be helpful for unikernels also.

With delta compression, I think my weekly updates on Arch are at least 200MiB, not even counting particularly large packages. Over a month, nearly half the system is replaced. For static linking (or equivalent) to be practical for those of us with limited bandwidth connections there would need to be some excellent cross-binary delta compression. In some situations it is useful, but for a general purpose system I think shared libraries make more sense.

OpenPandora's PND system is another example of prior art. http://pandorawiki.org/Introduction_to_PNDs


> CoreOS and Red Hat are sponsors of The New Stack.


Is it correct that I'll need to log in with an Ubuntu One account to install a Snap package?


Of course not. That wouldn't make any sense. Do you need to log into any account to install an rpm or deb package?


You do not need to log in to anything to install a snap.
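
i.e. installing from the store is just (the package name is only an example):

  sudo snap install hello-world
  snap list

An Ubuntu One login only comes into play for things like buying paid snaps or publishing your own.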


systemd, a superior solution?

Guh.

If we did not have systemd and had kept it simple, stupid, there would be no need for Snap: you compile the application statically (à la Windows) and you run it in a chroot (or, as they call it now, a namespace).
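
A minimal sketch of that idea with stock tools (paths and names are placeholders):

  gcc -static -o myapp myapp.c       # static binary, no library deps

  mkdir -p /srv/myapp-root/bin       # bare root tree for the app
  cp myapp /srv/myapp-root/bin/

  # own mount and PID namespaces, chrooted into that tree
  sudo unshare --mount --pid --fork chroot /srv/myapp-root /bin/myapp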

Ksss, I am wrong, I see it.

But then, when you build a multi-tier architecture, how do you make the pieces communicate? Well, easy: web services.

But how do you know where they are? How do you print? How do you register PlayMediaKey as a way to start your application? How do you make sure the «right application» can set the time (ntp/system preferences) while other applications are kept away?

Guh ....

A registry? Autodiscovery? You need a bus? An IPC mechanism?

And what of the mess that is Xorg? How do I prevent applications from seeing or intercepting information they should not see?

Most engineers already have trouble with a basic group/permission model, and OS X/AD have proven that a more complex scheme based on a complex namespace often produces poorer results in practice... how do we secure access?

Oh! You still have the "one application in the foreground has all rights" model... the kind of tablet model à la iOS. But then we lose the advantage of multitasking... like listening to music while writing or playing games.

So we're back to square one: being confined BUT able to cooperate.

Java and its JVM were one way to answer this problem: let's forget all of this and do the control at the application layer, in the JVM.

But then, how do you talk to hardware?

Java or not, an application needs to know about the hardware, but we have no fixed addresses for HW devices (fixed major/minor nodes are long gone, and PCI/USB vendor IDs have not pointed to a single HW target for a long time).

So maybe we could have virtual HW that talks to the real HW with a translation layer in the middle? (VMware, Xen...)

Oh! But I want to make sure I develop for a single target!

Well, good luck unifying game development on a single image while making sure the screensaver doesn't override it, while not preventing security applications from doing emergency locks... and still having the right level of acceleration...

And so the world of confinement keeps chasing its own tail, always asking itself the same questions.

For the old-timers: let's just accept that easy image deployment used to be like on the Amiga/C64/Amstrad et al.: a single piece of HW, with a single application. You jump to the origin address and execute the ASM in single-task mode. Problem solved.

But the more we expect things to be easy to code through cooperation (wouldn't it be ridiculous to put printer/HD/SD drivers in a text application, or network/audio/video drivers in a browser?), the more our apps depend on an ecosystem that is a moving target.

Basically, there can be no easy confinement (application snapshots) without a unified abstraction of HW and services, as well as their properties, while the complexity of the OS keeps growing (more HW, more protocols, more "required dependencies" à la systemd).

Basically, confined image deployment on heterogeneous systems/HW is a chimera unless you confine yourself to very basic tasks that require no dependencies (like single-machine number crunching in FORTRAN using stdin/stdout).

We are simply in dependency hell. And I don't think there is any silver bullet for this situation. The problem grows more than linearly in complexity, as a cross product of all the potential differences in the configuration space.

Let's forget about chimeras.

The complexity is dooming the cost model of application development. As the HW/OS landscape splinters in different directions with legacy systems still needing updates, development has to face a world of constant progress/obsolescence, where devs are told to forget about past platforms in order to stay hip, while customers need to keep their legacy systems in production.

Sometimes there is no solution to a problem.



