Abcdesktop – a cloud native desktopless system (abcdesktop.io)
207 points by o- on April 25, 2022 | 77 comments



This seems like a cool concept, but I didn't try it out because the demo requires FULL READ-WRITE access to my GitHub account, its organizations, and private details! Surely reading basic user info would suffice.


I paused for the same reason.

I logged in using my Gmail. I assume a sizable number of people here have a throwaway Gmail. It didn’t need access to anything in the mailbox like with GitHub.

Neat project. Cool for a proof of concept, but not solving any needs I have.


The demo.abcdesktop.io OAuth configuration has changed. The demo doesn't require FULL READ-WRITE access to your GitHub account anymore.

It only uses personal profile information (read-only), just to display your user name in the top bar. The OAuth 2.0 scope has changed from 'user' to 'read:user'. You can use GitHub OAuth safely; the original scope was a misunderstanding of the GitHub OAuth documentation.
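For reference, the only difference is the scope parameter in GitHub's standard OAuth authorize URL (CLIENT_ID below is a placeholder, not the demo's actual client id):

    # 'user' grants read/write access to profile data; 'read:user' is read-only
    https://github.com/login/oauth/authorize?client_id=CLIENT_ID&scope=user
    https://github.com/login/oauth/authorize?client_id=CLIENT_ID&scope=read:user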


Same here. I'm thinking about installing it to try it out, but I'm not going to give them full access to my profile.


Nope. Hard pass.


Good point. Since the Travis incident I don’t give any app write access to anything.


Same.


Why do I keep getting this distinct impression that people use containers to keep reinventing OS processes, but "in the cloud!"?

Next thing, you'll see announced a platform for reusable container services that can be dynamically linked from other containers - to avoid including them multiple times in your application containers that share the same version, and we'll have come full circle.

Is there a subtle detail about packaging software for distribution as isolated executables that I'm missing?


It's the same reason people keep inventing VMs. Every process is also a VM (and can only do what the hypervisor... kernel... allows), but for many kinds of workloads it's a more convenient abstraction to run a new kernel and its processes in a hardware-emulating VM within that VM, despite the overhead of emulating hardware.

Containers are a middle-ground, and a useful abstraction -- where you can have a group of processes that can share some resources and be accounted for together, but without the need to emulate hardware or a second hypervisor (kernel).

For lots of workloads, though, processes would be enough. Some more could be done with additional work on kernels to help group processes together (Linux cgroups, Solaris contracts and zone facilities) and more container-aware schedulers (not sure what Linux has; Solaris has a two-layer scheduler for its containers).

I was heavily into containers in the 90s and 2000s, and then VMs when they became more viable with Xen 2, but I've been moving things back to just processes.


And then, one day, single-system-image will be back in fashion and processes will migrate across the cluster - like in the good old days of Openmosix, but maybe with Kubernetes-like robustness...


If I give someone a sample docker-compose file, they can immediately run my service regardless of OS. If I distribute manually, I need to provide instructions for setting up the proper dev environment and packages for several common OSes and distros (brew maybe, apt, rpm, etc.).

Speaking personally, I know how to write a proper Dockerfile (and it's a skill that's learnable in a couple of hours). I have no idea how to distribute packages through other formats and avoid footguns.
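Concretely, assuming the recipient already has Docker and the compose plugin installed, their entire "install and run" step is roughly:

    # start everything defined in the compose file, in the background
    docker compose up -d        # or: docker-compose up -d (older standalone tool)
    # follow the combined logs; stop and clean up when done
    docker compose logs -f
    docker compose down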


There's a lot more to running a service than getting it running easily. Longevity/stability through applying updates should be engineered too.

The problem is when using Docker, you can't tell what the dependencies are for your app. Telling me I need Node or Go or PHP, and/or which database or Redis, etc., gives me an instant feel for how the deployment, as well as security updates, needs to be handled. Docker is just a black box by comparison... at least to me. All my attempts at running Docker solutions on small VMs have ended badly (ports opened publicly, rampant disk use, poor log file management, lack of security updates...). Seriously, I wish devs would at least list the tech stacks they're using in their apps in the readme.

However, I do grok that people who've embraced the ecosystem like it that way... but not all techs do.

For me, the best projects are those that lay it all out, warts and all, and then have a separate repo that manages the Docker state.


> The problem is when using Docker, you can't tell what the dependencies are for your app.

That's all in the Dockerfile, though... it's a simple and standard way to show how to install something. It's just like a makefile, but one that always works regardless of the environment you're in, whatever dependencies you already have on your system, and in most cases whether it has been maintained or not.

> ports opened publicly

... and that wouldn't have happened if you used something else? How so? Unlike other solutions, where each application chooses how its ports are configured, with Docker you actually need to be aware of the port and explicitly publish it. Sure, you might not have known that by default it binds to 0.0.0.0, but that would be true of almost any application (and that's when you're even aware of the ports at all; having an unknown port open is a basic CTF challenge).
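For what it's worth, the 0.0.0.0 default only applies when you publish a port, and you can bind to loopback explicitly (nginx here is just a stand-in image):

    # reachable from any interface on the host (the default for -p PORT:PORT)
    docker run -d -p 8080:80 nginx
    # reachable only from the host itself
    docker run -d -p 127.0.0.1:8080:80 nginx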

> rampant disk use

That's a good point. I agree that storage can be quite annoying, but at the same time, you handle it the same way you would with any other software: by knowing where the storage goes and why.

> poor log file management

I love how logging is handled with Docker: there's a single log output and that's it. It's the source of truth for logging, easy peasy. You want your logs pushed to another system? Connect it to the Docker logging system and that's it.
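For example (container, image, and host names below are placeholders):

    # one place to read logs, whatever runs inside the container
    docker logs -f mycontainer
    # or ship them elsewhere via a logging driver, e.g. syslog
    docker run -d --log-driver=syslog \
        --log-opt syslog-address=udp://loghost:514 myimage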

> lack of security updates

Could you expand on that one? You are responsible for keeping whatever you use updated, whether it's a Docker container or an application; one or the other doesn't change that. Maybe you mean that it's so easy to make a Docker image that it's just as easy to stop updating it, making it less secure for whoever uses it? Sure... that would make it less secure, but any application can be abandoned; it's your responsibility to make sure whatever you use will be maintained.

> Seriously, I wish devs would at least list the tech stacks they're using in their apps in the readme.

I haven't seen many Docker images that don't do that. I have seen a few that don't go into depth on how to set up the environment, but the tech stack is mostly a given.

Any good examples of that?


The dependencies in your Dockerfile are as opaque as you wish. I've seen a lot of publicly available images built "FROM base" or similar, hidden somewhere in their maybe-public CI. There is nothing in a Dockerfile that necessarily shows your dependencies.

The same goes for log files. Some images contain everything, and the instructions are to mount volumes for their poorly written application's log output. Just because Docker-the-application does stdout doesn't mean that the processes running in a container do.

For security updates you need to be sure that the maintainer of the image is watching not only for updates to the application, but also to all explicit and transitive dependencies. In a classic setup with a package manager and explicit dependencies, those are the responsibility of other teams/individuals.

That said, Docker and friends have very much changed the landscape for the better, and I would have a hard time going back to maintaining lowest-common-denominator Java versions and Python virtualenvs, and tracking down incompatible shared libraries.


> If I give someone a sample docker-compose file, they can immediately run my service regardless of OS

_if they're already bought into the docker ecosystem_, this is true. if not, then they first have to go read up on docker: figure out how to install it (OS-specific), enable the docker system services (i think systemd more or less standardizes this step), configure a user that has permissions to manage docker deployments (also frequently OS-specific), etc.
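for concreteness, those first-run steps on a typical systemd-based distro look roughly like this (Debian/Ubuntu package name shown; other OSes differ):

    # install and enable the daemon
    sudo apt-get install docker.io
    sudo systemctl enable --now docker
    # let the current user manage docker without sudo (requires logging back in)
    sudo usermod -aG docker "$USER"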

not saying docker is or isn't a worthy tradeoff between balancing distribution work between the code authors and the OS packagers and the users. just don't blind yourself that it is another thing that users have to learn before they can use your stuff.


Docker's managed VM strategy for Windows and macOS is really powerful; I'm not sure why other applications don't do that.


> Next thing, you'll see announced a platform for reusable container services that can be dynamically linked from other containers - to avoid including them multiple times in your application containers that share the same version, and we'll have come full circle.

That unfortunately already exists: Kubernetes namespaces and Helm charts.


No, you're basically right.

To the extent that containers solve a problem, it's mostly a problem with package management, and with UI for process isolation/permissions/hardening.


No, you're not wrong. It's the constant churn of "our current isolation between users/processes/apps isn't as good as we hoped, so let's just virtualise the whole OS" that has been driving multiprocessing since the 70s.


> Is there a subtle detail about packaging software for distribution as isolated executables that I'm missing?

In practice it's a nightmare, particularly for legacy software that demands a ton of dynamic linking to system-installed libraries.


Yeah, but that's a problem with dynamically linked dependencies. If you statically link all your dependencies into a single executable, doesn't it work conceptually the same as a Docker container with all your tech stack contained in a single package?


Even a statically linked program can have dependencies on the system it’s running on, like system fonts and root certificates. Containers ensure a standard environment that accounts for those differences, in a language-agnostic way.


Yep, but when you're dealing with something that runs your desktop or other software inside it, most of that software is dynamically linked tools and utilities.


Whether it's a nightmare or not depends on what you practise.

Dependency-heavy environments such as Node or Python make it a nightmare. But large, stable base-library environments with selectively chosen dependencies are usually easy to manage, e.g. .NET.


> Next thing, you'll see announced a platform for reusable container services that can be dynamically linked from other containers

DLL --> JCL ... Joinable Container Libraries


> Next thing, you'll see announced a platform for reusable container services that can be dynamically linked from other containers

I think that wouldn’t be a bad idea. It’s good for RAM and disk space usage while still isolating processes. Would be useful for IoT devices without much memory or space that still have to run docker.


No, there are blatantly obvious things about packaging software that you are missing.

- You will need a copy of each platform running in order to build the binary; it's X times as much work to package for X platforms.
- The code you chose might not compile well on all platforms without code changes.
- Dependency conflicts can be a pain.

Oh boy, I could go on.


You can dynamically link container services from other containers. Look up "docker layers".


You can dynamically link one service from another container. Choose wisely.


This looks cool, nice work. I was briefly trying to work on a "cloud desktop" system at one point but found it too much trouble to get it working properly.

> Because docker containers are lightweight and run without the extra load of an operating system, you can run many graphical applications on a single kernel.

This is an odd thing to say. Can't I already run many graphical applications on a single kernel even without docker containers? And wouldn't that be even more lightweight?


> This is an odd thing to say. Can't I already run many graphical applications on a single kernel even without docker containers? And wouldn't that be even more lightweight?

I think what's written between the lines is that they're comparing it to full OS virtualization. The need for that results from the desire to isolate multiple users from each other.


I applaud the work that went into this, but unless I'm missing something, they've recreated a terminal server, whereas NoMachine has delivered desktops for ages and works over a connection as poor as dial-up.


Is this using Guacamole for its VNC client? In college [2012] I ran Guacamole on my home server for RDP remoting into my desktop PC from a Chromebook, and the performance was excellent. Curious to see what the experience is like with more newfangled Cloud Native stuff.


"abcdesktop.io = NoVNC + X11 + Docker"

It's using NoVNC.


Guacamole is impressive, I agree.


Webtop is attempting something similar to this using Guacamole rather than NoVNC. https://hub.docker.com/r/linuxserver/webtop

I know very little about how the two compare.


stop encouraging users to pipe curl to the shell, darn it.


Is that bad?


The most probable issue related to this kind of behaviour is pastejacking.

It's possible for the server to detect that you're actually using curl (with the help of the user agent or other methods) and also that you're piping it to an interpreter.

Knowing that, the server could send you a malicious payload that wouldn't be apparent if you had only downloaded the file.

Some people think this isn't the real issue, that (the lack of) code signing is the real problem. I don't disagree with that, but really, people should look at the code they're going to execute, whenever possible.

And when I say "whenever possible", I sure believe a few lines of shell script deserve to be inspected, even if you lose the 0.5 seconds of automation the pipe provided. I mean, we're not talking about millions of lines of kernel code here.
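The habit costs almost nothing (the URL here is a placeholder):

    # download, read, then run - instead of curl ... | sh
    curl -fsSL https://example.com/install.sh -o install.sh
    less install.sh
    sh install.sh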


How can the server tell whether you're just curling or curling and piping that output to an interpreter?


Weird timing magic, if I recall correctly: https://news.ycombinator.com/item?id=17636032


No worse than downloading a pre-built binary off the internet and running it.


If you follow that to its logical conclusion, then "everything is terrible and nothing is secure" (which is probably the reality).

My personal gripe with piping curl output to sh is about expectations. If I have some binary I'm running, I have _some_ expectations about what it will do; same for a Makefile, an RPM, etc. None of those things are guaranteed to do what they're supposed to, but I have some idea of what _should_ happen.

Unless I read through the script, I have much less of an idea of what it's going to do. It says it will install my program, but is it going to pull in dependencies from my package manager? Download and compile something, shove binaries in my $PATH, edit dotfiles in $HOME?


Makefiles, RPMs, and shell scripts all essentially do the same thing and have the same power to violate your system.


To reiterate, for me it's not really about the potential for abuse, it's more about expectations. You can abuse anything (though I'd also argue that signed packages from companies like Red Hat and Canonical are usually held to a higher standard than a random shell script plucked from the internet).

If someone places a shoe box on your doorstep, you might expect shoes to be inside. Of course, there's a non-zero chance it's full of bees. But shoes are a reasonable guess.

If someone places a cardboard box on your doorstep with no writing on it whatsoever, there's no expectation about what's in the box. (Unless you hear the bees buzzing from a distance.)


Except it's been demonstrated you can detect a pipe to shell server-side.


Is there any technical reason for not making it work with user authentication when people download, configure, and run their own instances?


Yes, it is for security reasons on the demo.abcdesktop.io instance. abcdesktop.io supports LDAP, LDAPS, and Active Directory services, and Basic, NTLM, and Kerberos authentication.


Since to be usable this needs a modern WebVM (improperly named a "browser" for legacy reasons), and so we need a local desktop merely used as a bootloader for the WebVM, what's the point of having remote tools, which means someone else's computer (or at least another computer) + a desktop + a dependency on the network just to work?

Honestly, I think it's about time to tell management that there's nothing costly or complicated about maintaining desktops with custom deploys (not just the default install from some vendor) and supporting them, instead of investing in servers with far more resources just to centralize for the sake of centralizing...

Sorry for being rude; I used Apache Guacamole a while ago and I still ask myself why it was even thought of in the first place...


Looks neat. Seems to have a lot of overlap with Kasm Workspaces (https://www.kasmweb.com/) which is another cool project.


Running a web browser already requires a substantial desktop system.


or a lightweight tablet

or old laptop/desktop

or maybe even a phone


For a cloud native desktopless system, I was thinking it should not be remote-desktop based. Is there a cloud desktop that is a bit more like NeWS? I mean text rendered client-side as HTML/CSS, and images and video viewed via img/video tags. A sprinkle of JS to make it smooth and dynamic. Normal windows and maybe even a taskbar would be splendid. Possibly X or Wayland integration, but as separate windows in such a system. Is there something like this?



Thanks. That is not exactly what I had in mind, as far as I understand what those projects do. It seems that they run entirely in the browser; the server just serves HTML/CSS/JS.

I was thinking about a desktop that would be rendered in the browser, but where applications would actually run on the remote server. So if you logged in from a different computer you would have the exact same desktop open. A bit like X, with the difference that text rendering would be done straight in the browser without canvas.

But that's a good entry point.


I got a 404 when attempting to view the demo, as the link says one URL but actually sends my browser to https://www.abcdesktop.io/demo (instead of the demo subdomain)... not the most inspiring first impression, I must admit.


Looks cool. FYI, the "demo" link appears to be returning 404 Not Found


The correct link is https://demo.abcdesktop.io/, the way it's written, but it seems that the hyperlink bound to the written link is different (https://www.abcdesktop.io/demo).


The 'bad' URL is only the first hyperlink (at the beginning of the 'Quick online preview' paragraph). The second time the URL is shown (in the small paragraph), the hyperlink is correct.


Not sure how the timing works, but that might be an unfortunate choice of name. Or maybe there's no real audience overlap, so it's ok.


Care to elaborate?


I think they're referring to the pop song, "abcdefu" (https://en.wikipedia.org/wiki/ABCDEFU)


On mobile it says:

ABC + docker = Vnc + x1

Docker wraps to the next line. Very confusing


There are two instances of the string "deskop" in the first paragraph that should probably be "desktop".


Thank you for this issue; "deskop" means "desktop" and it has been fixed.


What's the advantage compared with Guacamole and/or SPICE?


is... anyone able to get to this thing? has it been Slashdotted?


What is the performance like? I've found that VNC is garbage even on a fast local network, let alone over the internet. I've yet to find something that works anywhere close to as well as Microsoft RDP, which feels like sitting in front of a local machine as long as you don't try watching HD video or something.


Spice [1] is better than VNC. Shells.com [2] uses this protocol.

[1] https://www.spice-space.org/index.html

[2] https://shells.com/


NoMachine works fine for me even on a relatively crappy connection.


A good VNC client should give you responsiveness on par with RDP. VNC tunneled through a WebSocket to a client running in a browser window may push it into the annoying zone, though.


What’s a good vnc client and server that actually pulls this off? I’ve never had any luck with it either. At least not on Ubuntu.


It's a little specialist, and doesn't have Linux server/host functionality (and IIRC Mac is still in beta), but if you want to remotely connect to a Windows 10+ computer - Parsec works really well. I've used it at work to share access to a beefy desktop for running CAD (SolidWorks) and multimedia (mostly Adobe Premiere Pro/CC) software, without having to be at the computer - we use it from the local network, and from across the country, with fairly satisfying performance.


If it's from Linux to Linux, I'll use xpra: https://xpra.org/


TigerVNC


From the abcdesktopio GitHub repository: this project uses TigerVNC release 1.12.0. It starts the Xvnc command and then the websockify Python tool to expose the TCP socket as a WebSocket.
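Roughly this pattern (the options are illustrative, not the project's exact invocation):

    # TigerVNC X server on display :1 (VNC listens on TCP 5901)
    Xvnc :1 -geometry 1280x800 &
    # bridge that TCP socket to a WebSocket for a browser client such as noVNC
    websockify 6080 localhost:5901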


VNC can have good performance over a gigabit LAN using raw encoding. The issues occur when you add compression (hextile, tight, etc.), which does a good job cutting bandwidth but adds a ton of latency, which is the main reason your sessions feel slow. RDP does a great job of keeping the latency down.
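For example, TigerVNC's viewer lets you pick the trade-off explicitly (parameter name as in TigerVNC; other clients differ):

    # prefer raw encoding on a fast LAN instead of a compressed encoding like Tight
    vncviewer -PreferredEncoding raw host:1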

Note that I have never had a good experience using noVNC (the web-based solution used by this service) even over fast LAN - I think it's because of the additional overhead of the web browser.

I personally use SPICE for VDI which performs decently over LAN, but is not as good as RDP over WAN with constrained bandwidth.



