You could probably tweak something like Barrier for the KVM/cursor move, and use Xpra to move the window, sure. No idea how much or how little work it'd be, but it doesn't seem like it should be all that complex.
I tried Xpra for remote applications some four to five years ago when I migrated from Windows to Linux and needed a Remote Desktop (type) alternative.
I stuck with it for a while ahead of numerous other options, until I found NoMachine - which doesn't do remote applications but does do full remote desktop, and comes closer to a local-machine 'feel' than anything other than Windows Remote Desktop.
I (ironically?) dislike Microsoft just that little bit extra for making Remote Desktop so damn good whilst progressively destroying the Windows experience.
I would like to try Xpra again, but I've got a growing list of "I'd like to try that" items, and even the top priorities only get small bites taken out of them per week or month - and my current workflow is pretty good.
NoMachine absolutely does do remote applications. I use it for that every day. Instead of "Create a new virtual desktop", choose "Create a new custom session"; then under Application pick "Run the following command" (with the program you want to run), and under Options pick "Run the command in a floating window".
For remote desktop use cases, I’ve found RustDesk to be pretty good, especially if you can self host the relay: https://github.com/rustdesk/rustdesk
I especially enjoyed being able to use AV1 for the encoding (better quality even at lower bitrates), switch resolutions easily, and get pretty quick response times.
I prefer MeshCentral. Not only does it work better, it has way more features. For example, you can browse the remote system's files and use a terminal without having to stream the graphical environment. It also has some Intel ME integration which I haven't really looked into.
RustDesk is easier to get started with though, especially if you're coming from TeamViewer. However I've always been a little wary of RustDesk. The dev is (was? Haven't kept up) anonymous and seems to be some Chinese company. Even ignoring the China ties, I wouldn't trust an anonymous dev with sensitive software like this.
You may want to try x2go. It uses the older NX protocol version 3, while NoMachine is at version 4. It's good enough for my use case, and supports remote applications just fine: this is how I use it.
Perhaps it would make you feel a little better to learn that RDP was created from Citrix’s ICA protocol during some cross licensing between the two companies.
I worked on using hardware acceleration to replace parts of the ICA client application’s raster functionality for “thin client” devices a lifetime ago.
NICE DCV is a good alternative if you don't mind paying. I've found the image quality, latency and multi-monitor support to be better than NoMachine. You can get a permanent licence for $180.
NICE DCV is very solid but its licensing and links to AWS pushed me to write a replacement (it's just NVENC over UDP) one bored weekend... will have to OSS it some day.
There's also RustDesk that looks great but I can't say I've used it yet.
For now, yes. There are only 4 people really working on it. But the most difficult part was the stream itself, getting enough performance for video and gaming at a decent fps, and I can say that I was able to play Civ VI over it; so at this point it's a matter of cleaning up the way it works.
I'm amazed by the quantity of different Linux remote desktop solutions covered in this thread. Is the fragmentation a good sign of a healthy ecosystem?
Most people don't realize that the stock config uses Xvnc instead of xorgxrdp as a back-end and never change that, or try xorgxrdp-glamor to get GPU acceleration.
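For reference, the back-end is picked via the session entries in `/etc/xrdp/xrdp.ini`; from memory, the Xorg (xorgxrdp) entry looks roughly like this, and moving it above the `[Xvnc]` entry makes it the default selection (verify against your distro's file):

```
[Xorg]
name=Xorg
lib=libxup.so
username=ask
password=ask
ip=127.0.0.1
port=-1
code=20
```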
I'm using the same as you - Debian 12. The only snag I've noticed is that it's pretty easy to make a session crash, at which point you lose the entire desktop.
Once you have GPU server-side encoding and client-side GPU decoding all working correctly with NVENC end to end, there is nothing like it in terms of speed and performance for remote work. With reasonable ping latency (20-30ms) and a quality link, a user on a cheap gaming laptop connected to a decent beefy server with GPU encoding working can perceive the remote browser window as faster than their local browser (a rough sketch of the flags is below).
This is going to be very unhelpful for most but I use nixpkgs and end up applying some build tweaks to make sure all GPU capabilities are properly supported.
That said, I know the newest version is 6, which Ubuntu/Debian users should be able to get by adding the xpra apt sources, and it might all just work out of the box. I should check.
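If it helps anyone, a minimal sketch of the flags involved (names from the xpra man page; actual NVENC and hardware-decode support depends on how your build was compiled):

```sh
# server: prefer NVENC for encoding video regions
xpra start :100 --video-encoders=nvenc --start=firefox

# client: attach over ssh; hardware decoding is used if the build has it
xpra attach ssh://user@server/100
```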
If you're only working on Linux, you almost surely want waypipe [0] instead of xpra. If you need support for other platforms, though, xpra is still a pretty good solution.
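For example, waypipe's basic usage just wraps ssh (foot here is an arbitrary Wayland client):

```sh
waypipe ssh user@host foot
```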
Yep. Despite what Redhat would have folks believe, xorg works fine and will continue to work fine for the foreseeable future.
Plus, if one wants things like functional screen readers, suitable-for-video-games Vulkan frame pacing [0], and many other things that you'd think would be table stakes for a project that's been running for at least 15 years, one's only choice on Linux is xorg.
[0] The only reason the Steam Deck isn't a disaster is because Valve has been carrying a patch for a "Make frame pacing not garbage" extension that the Wayland people have been refusing to merge in for the past two+ years.
The Steam Deck uses gamescope which, apart from defaulting to XWayland as the actual API it provides to applications, uses a special Vulkan extension to essentially remove the compositor from the pipeline (other than things like the performance overlay), letting DXVK and others render as close to direct-to-scanout as possible.
I use xpra to run apps in VMs but seamlessly render them on my desktop. Allows me to have a Qubes-type workflow without using Qubes. Probably not quite as secure, but you can disable features for untrusted servers.
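In case it's useful, a hedged sketch of locking down an attach (flag names per `man xpra`; adjust to your version):

```sh
xpra attach ssh://user@vm/100 \
    --clipboard=no --file-transfer=no --open-files=no
```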
X can't run untrusted applications; it trusts everything.
Any X application can spy on everything you're doing in other windows, send them messages, inject other windows into them, delete them, and post UI events to them. Basically, with very few exceptions, any X application has total control over your account.
xpra has other benefits (lower network bandwidth usage, being able to reconnect after network outages, being able to move an app from one display device to another) but that's not what the grandparent comment is talking about
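To make the spying claim concrete, here's a quick demo with stock tools (device and window ids are placeholders; note that some apps ignore synthetic events):

```sh
xinput list                              # enumerate input devices
xinput test <keyboard-id>                # log every keystroke, system-wide
xdotool getactivewindow getwindowname    # inspect other clients' windows
xdotool key --window <window-id> ctrl+q  # inject events into another window
```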
Seems like a remote desktop solution is always going to be better at this—I'd imagine there's a long tail of weird website behavior trying to sync browser state.
I also haven't checked my Windows machine that it syncs with recently, so there might be issues on the latest ff etc etc. I started doing this with SeaMonkey, whose profile is simpler conceptually since it's still built on top of an old firefox legacy esr.
I'm half-heartedly trying to RTFM of 'xpra', but haven't found the 'run_scaled' script yet. If it's not too much trouble, can you please reply with the commands you use to scale an X11 program?
(I'm on Devuan Daedalus 5.0, ~= Debian Bookworm 12.0.)
(I currently use 'xzoom' to scale X11 programs, but it's a little kludgy.)
EDIT: Single quotes for all program names. Bookworm, not Bookwork.
Xpra itself ships this script, but Debian's version is quite old. You need at least 4.1, and Debian Bookworm seems to have 3.1. Xpra has its own apt repo you can probably use.
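Once you have a new enough xpra, usage is just (per the script's help):

```sh
run_scaled --scale=2 xterm
```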
Perhaps your users don't have the same abilities with ssh?
The reason why xpra switched to paramiko as the default ssh implementation is that it makes it possible to integrate with the GUI (cross-platform, too), so that asking for passphrases or passwords can be handled by the xpra process itself (it may already have the password, or it may delegate to pinentry or gpg-agent or PuTTY's agent or whatever).
This also means that ssh errors can be handled much more gracefully - natively in the code.
With openssh launched as a subprocess, the user interface is non-existent, and when the ssh process fails, all the xpra process sees is a dead process with a non-zero exit code - which is much more difficult to handle gracefully.
I find it really annoying that it seems to default to paramiko for ssh support. I already have all of my ssh setup via openssh (e.g. jumphosts, identity files, connection multiplexing), and I have to pass "--ssh=ssh" to get any of that to work.
Hey, thanks! I'm not sure that makes it less complicated (it means the same xpra command line will work properly on one machine but not another) but it definitely looks like a solution!
If you don't want to change the global defaults (in `/etc/xpra`), or your personal config (`~/.config/xpra`), you should be able to stick `ssh=ssh` in a `$HOST.xpra` session file and the launcher should honour it when you open the file / double-click on it.
(add `autoconnect=true` and the launcher won't be seen unless the connection fails)
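A hypothetical `myserver.xpra` session file might look like this (key names from the launcher docs; verify against your version):

```
mode=ssh
host=myserver.example
username=me
ssh=ssh
autoconnect=true
```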
I am running linux machines and I have one headless machine. I need to keep a browser running in that machine and occasionally check on it (see what it's doing, and possibly fix things in the browser). Would Xpra let me do that from a remote machine?
With Xpra you have a headless X session on the remote machine and you "detach" and "attach" to it from the client. So whatever X applications you leave running will be right where you left them.
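Roughly (exact syntax per the xpra docs):

```sh
# on the headless machine: a persistent session running a browser
xpra start :100 --start=firefox

# from your desktop: attach, poke at the browser, then detach;
# firefox keeps running on the server for the next attach
xpra attach ssh://user@headless/100
```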
I use this instead of Chrome Remote Desktop and really appreciate the seamless window integration. It does, however, take some work to get it working smoothly.
That depends on how you authenticate. If you pass your SSH password on the command line then anyone on your machine doing a `ps` at the right time could see your password.
I find using a ssh-agent to load password protected SSH keys works best.
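i.e. something like:

```sh
eval "$(ssh-agent -s)"              # start an agent for this shell
ssh-add ~/.ssh/id_ed25519           # prompts for the key passphrase once
xpra attach ssh://user@server/100   # no password on the command line
```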
I have not used Xpra, so I do not know if it has any problems with this, but a priori I do not see why an application like Xpra would need to be aware of the screen resolution. That should be handled by the X clients and servers.
I have used only 4k monitors for more than 10 years in Linux, and the only applications with which I frequently had problems were various programs written in Java by morons. These were usually proprietary applications, sometimes quite expensive, but they lacked such elementary customization features as allowing the user to change the font, or at least its size, while also failing to use the configuration settings of the graphical desktop, i.e. the value set for the monitor DPI, like all non-Java programs do.
The way Xpra works is that it's both a server and a client in its own right. So it needs to be aware of the resolution for the final client in the chain to be able to learn about it from the other side, i.e. for applications to know that they need to adjust themselves. What this results in is that the client you're running through Xpra will usually just assume it's on a normal 96 DPI display from yesteryear. I think you can do some stuff to tell Xpra to report a higher DPI, but that'll also make the application gigantic on a low-DPI display - something that's pretty typical for X11, since it doesn't properly support mixed-DPI environments (i.e. a high-DPI laptop screen plus a low-DPI external monitor).
X11 supports mixed DPI with the RANDR extension. Applications can use the per-monitor information provided by RANDR to get the DPI information for each output, and use this to calculate font and widget sizes. Everything looks nice as long as you don't have a window spanning multiple screens. The problem is that while Qt has supported this functionality since 5.9, GTK seemingly has no interest in implementing it and most other toolkits or raw Xlib programs don't support it either.
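You can see the per-output physical size (from which DPI falls out) with a plain RANDR query:

```sh
xrandr --query | grep -w connected
# e.g.: DP-1 connected 3840x2160+0+0 ... 600mm x 340mm
```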
Last I checked (somewhat less than ten years ago), it did do things with DPI. Note that if you were using it with a sorry distro-packaged version it might have been broken because proper DPI support requires a patched X11 dummy video driver or something along those lines, and if you had an unpatched one DPI stuff didn't work.
I don't think the existence of Wayland means X11 is deprecated. Lots of people (including myself) would prefer the perfectly working X11 to the feature-incomplete, backward-incompatible "modern" Wayland.
Notably, it doesn't export Wayland to applications running underneath unless you enable an experimental flag, and half the reason for its existence is that it can be entirely bypassed by the related Vulkan layer extension.
For me, the year of Linux on the Desktop was '94. It's been my main desktop at home ever since, and for work as well the majority of the time - but with some obnoxious detours.
Not advocating for 2024 (or 2025) but I still think someday it will, having seen multiple usability and feature improvements. (Not only talking about wayland here)
It's deprecated by the people who wrote it, and AFAIK no-one else has taken up the task of maintaining it. Doesn't mean you can't use it (I still do, thanks to said breakage), but it's not exactly thriving.
It's still seeing regular releases. It's split into modules now, but the xorg-server module last had a release in April, I think, with multiple contributors, and at least two people are issuing release announcements.
Maybe I'll consider Wayland again in a few years (though, who knows, by then maybe I'll have fallen for the temptation to write my own X server too...), but for now Xorg works, is still receiving fixes, and doesn't require me to change anything else in my workflow for no good reason.
Even funnier, I have a bunch of rando computers and servers, some with friends and family with different distros...and at any given time, I'm not sure which I'm using.
(Which I suppose means that Wayland has matured a lot..finally, but still)
At the risk of threadjacking, I'd still love to have a real conversation about how and why this Wayland thing happened (and is still arguably happening) so badly.
Specifically, how -- again, in LINUX-LAND -- a whole bunch of people decided, "nah, we're going to go ahead and break the HELL OUT OF backwards compatibility this time, even though we pretty much never do this."
Linux actually does this a lot. Jwz called it "Cascade of Attention-Deficit Teenagers" more than 20 years ago, describing GNOME.
Look at audio. PulseAudio is mostly pointless and over-engineered. Breaks a lot. You could say the same about ALSA even. FreeBSD meanwhile is still using the OSS API that Linux was on in the late 90s...
PulseAudio has already been abandoned, the new thing is Pipewire.
The specific issue with OSS was that it was never part of Linux and then with OSS 4 they made it proprietary, which killed it off entirely as far as Linux users are concerned. (FreeBSD cloned it instead.)
Oh yeah. I should have noticed that because on the Debian machine I set up for my daughter, audio suddenly broke, and it started working again when I removed pipewire.
No joke, if you remove the new thing stuff magically starts working again.
I had to restart Pipewire just this morning when audio randomly stopped working. It's funny, as this is the first time that happened, and then an hour later I'm reading in this thread about Pipewire acting funny on other people's computers.
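For what it's worth, on a systemd setup the usual fix is (service names vary a bit by distro):

```sh
systemctl --user restart pipewire pipewire-pulse wireplumber
```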
In my experience, unlike PulseAudio, Pipewire is way more stable and usually works fine with less involvement.
... except when you have accidentally ended up with a half-Pipewire, half-PulseAudio setup due to half-forgotten instructions that were no longer applicable.
I ended up solving a similar case for someone else, where ultimately different programs were fighting over control of the sound devices themselves.
I have to say that Pipewire under NixOS has been easier to deal with than both PulseAudio and ALSA on crappy internal sound devices (i.e. ones requiring dmix and the like). It's nearly as plug-and-play as ALSA was on a "proper" sound card (like when I got a Dell Precision to work that somehow had a Creative Audigy with 256-channel sound, so no need for dmix at all...).
Open Sound System (OSS), rather than Open Source Software, if I'm reading you right. I only recall because I've been toying with sound software for decades, trying to get various synths and music composition applications working. Apologies if that didn't address your confusion.
Iirc there was a version in the kernel tree, then "upstream" if you could even call it that developed it further (OSS 4.0). I think there was a company behind it and a nonfree license. Obviously that was a no-go for inclusion into mainline Linux.
But ALSA was/is also enormously complicated, with a huge library, user-mode plugins and whatnot. It reminds me of a common problem where people think the API and the implementation are one and the same, not realizing that you can swap in a different implementation while keeping the simple API. People would say you need ALSA because it does software mixing, but the lack of software mixing wasn't an OSS API problem; it was a problem with how it was implemented.
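dmix is exactly that: mixing bolted on in the implementation layer while applications keep using the same PCM API. A sketch of enabling it in `~/.asoundrc` (modern ALSA usually does this by default on cards without hardware mixing):

```
pcm.!default {
    type plug
    slave.pcm "dmix"
}
```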
ALSA started replacing OSS on Linux before software mixing became as critical as it ended up being, because people still often had proper sound cards with multi-channel hardware mixers.
The assumption that you could punt software mixing to hardware, or live with the limitation of only one program accessing the audio at a time, took a hard hit when Intel HDA pretty much decimated the presence of such hardware on PCs (AC'97 was much less pervasive for various reasons).
Ha, I'm showing my age. You're right, my brain is still at "Linux around the year 2000."
That being said, it's still the same problem.
And I think that's why I find it odd that I haven't heard the argument more plainly stated like:
"Oh look -- right around when Linux became mainstream, it began to get worse in precisely the same ways proprietary stuff was already bad. Probably a result of more business involvement, but how can we counter it."
Setting aside dogmatic complaints about failing to adhere to the purity tests of a hokey old religion from the seventies, in what way is Linux actually worse? Also, assuming you're referring to it getting worse in the usual ways *nix grognards like to complain about, how is any of that "in the same ways proprietary stuff is bad"? If anything, Windows for instance leans even more heavily on backwards compatibility than old Linux, not less.
I know I'm going to lose points for pushing back on the pervasive HN RH-bashing, but this doesn't hold up. Wayland is being pushed by Redhat to make GNOME the only stable option? Except that Wayland is perfectly fine (many would argue better than it is under GNOME!) on Sway, KDE, etc. KDE has more bugs than GNOME, but that's independent of Wayland, it's just because KDE's design philosophy is "hardcode every feature and option imaginable" and that leads to it being impossible to QA. Anyway, this is just conspiracy theory bullshit. I swear to god Red Hat is the Soros of Linux for a certain type of guy
Watching them a long time, too many coincidences. Looks like fire-and-motion, make yourself the standard then make it hard to deviate or to keep up. If it’s not intentional, it’s incredibly damn convenient.
Why do any of these standards make it harder for other distributions and desktop environments to keep up with them? wlroots exists, and in many people's minds is much better than GNOME with Wayland. This is really strange thinking.
Also, it isn't them that are making them the standard. It's independent distributions choosing to use what they produce (and not all of them do, either). Presumably, the maintainers and packagers who make those choices would be aware of these technical considerations, and capable of rejecting Red Hat's tech in favor of whatever hoary stack you prefer if it made it harder for them to "keep up." That seems to be something that is perfectly possible to do while still producing a usable distro, and it seems like something Linux distributions are quite good at: ignoring what corporate operating systems are doing and forging their own path. Maybe it's because the technologies you are labeling as Red Hat technologies actually offer substantial improvements and push forward the cutting edge of the Linux desktop in a meaningful way, bringing it closer to the capabilities of a modern operating system?
That happens a lot in the Linux world, since people design software in an extremely limited way in the name of simplicity and the "Unix philosophy", which limits its composability.
So interfaces became limited to what was available at the moment and to the limited set of Unix software abstractions. Those abstractions are also made extremely use-case specific, even though there are opportunities to unify them, like Android did with Binder and Windows did with COM and .NET.
Of course, the technical difficulty of designing efficient, long-term-surviving interfaces also plays a role. There are too many hobbyists in the Linux desktop world and too few professionals who also have to work with hobbyists. Many contributions to the Linux desktop happen while people are studying, and then they leave. This creates disincentives to design long-term systems, because they take too long and are a slog to design and keep up to date.
I think a couple of people hired to work on Linux graphics did not like dealing with legacy code (who does?) and somehow convinced their incompetent managers that everything had to be rewritten.^1 This went just like most "let's just rewrite it and it will be much simpler and better" IT projects. Now, 15 years or so later, we have a fundamentally inferior replacement (with probably nicer code, though) which still misses essential features and plays catch-up, alongside a serious lack of investment in the maintenance of X, and also a lack of investment in user-visible features in apps (I think GNOME's PDF viewer is getting its third rewrite, but still can't play embedded videos in presentation mode).
A lot of the user community was fooled into believing that Wayland would make graphics and gaming much better any day now,^2 which was never really based on sound technical arguments.
^1 This was roughly the time when everybody wanted to build new phone or tablet operating systems to capitalize on the emerging mobile market. So maintaining stability for the desktop wasn't a priority.
^2 With grotesque nonsense claims about X, such as that the old drawing APIs slow apps down even though they go unused, etc. Those "arguments" were endlessly repeated and you can still see them even here; anybody who has ever implemented a GUI application on Linux should realize that this cannot be true.
And they chose a really interesting architecture where everybody recodes a half-baked version of the work-in-progress protocol, so each desktop environment/window manager has different compatibility characteristics; and where tons of actual implementations mix the graphics server with the desktop environment/window manager, so crashes are extra fun.
There's a few related answers to your question, so I'm going to meander a bit and hope you can follow along.
First off, backwards compatibility wasn't actually broken. X11 apps work fine under Xwayland. All the weird bikeshedding that happens in Wayland isn't as important because we have working compatibility bridges and all the old stuff still works.
Second, "don't break userspace" is specifically a Linux kernel policy. The only other organization in FOSS that has such a slavish devotion to backwards compatibility is WINE[0]. Desktop environments are perfectly fine with, at the very least, breaking ABIs, because you can just recompile the ocean. I suspect this is the Free Software equivalent of "firing shots to keep the rent down" - i.e. making the software neighborhood undesirable for proprietary software vendors who will have to deal with these annoying and arguably pointless transitions every couple of years.
The reason why we needed to get off X11 is very simple: X11 is an extremely poor fit for modern hardware. The protocol supports simple drawing commands and image display, that's about it. Modern user interfaces want applications that draw onto GPU layers, composite them, and then present a final image to a compositor to be displayed to the screen. You almost can build that on X11 (modulo some frame tearing), but it's a pain in the ass and requires adopting a lot of extra protocols plus XGL which breaks network transparency[1].
The motto of X11 is "mechanism, not policy". The way this is accomplished is by presenting the entire desktop as a tree structure of windows that any client can mutate. Any widget toolkit can then build the experience it wants on top of that tree structure. The problem is that this also makes writing keyloggers and RATs trivial. X11 doesn't sandbox applications, so even if you lock down a process every other way, they can still do horrible things to other X clients and Xorg won't stop them.
Wayland fixes this by tightly restricting what clients are allowed to do to the desktop. Applications get to present their own windows, sure, but they can't touch other windows unless a specific extension is provided for their use case and the compositor allows the application to use it. In other words, Wayland is "policy, not mechanism". The downside is that now we have to codify all the slightly-different ways each widget toolkit, window manager, and desktop environment has done things under X11. This has resulted in an explosion of extensions, many of which overlap because they were made by different DEs. Wayland can of course create standardized versions of these extensions, but each standardization is an opportunity for bikeshedding.
This leads into my favorite way to tell if an application is Wayland or X11: launch Xeyes. If the eyes track your mouse over the application's windows, it's X11. Wayland doesn't have a protocol for mouse tracking, so Xwayland can't report where the mouse cursor is, unless it's on top of a Wayland window that it's already getting events for - namely, the app you're wondering about.
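That is:

```sh
xeyes &
# eyes track your cursor over a window  -> that window is X11
# eyes freeze while you hover over it   -> it's Wayland-native
```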
Ok, I suppose that is a backwards compatibility break.
[0] This also means the most stable UI toolkit on Linux is actually USER.dll.
[1] In fact, I suspect this is why Wayland was so willing to casually toss that out. GPUs and network transparency are allergic to one another.
I don't think that's the issue with Wayland. X11 has fundamental design flaws, Wayland happened because of it.
I believe the question is why Wayland happened so badly. Sure, nVidia shenanigans are contributing to bad fame, but arguably that's not the issue with Wayland. However, Wayland fixes some of X11 design flaws but ignores others - or even introduces new ones.
But, for example, HiDPI is a mess in X11 - for (I assume) pretty obvious reasons of not having HiDPI back in the day. Weirdly, Wayland does nothing to make things right - instead it just gives up on the DPI concept altogether as if physical dimensions simply aren't a thing[1]. While this sort of solves some of the issues X11 had (although, scaling works with X11 too), it's just wrong.
Then there are a bunch of other controversial design decisions (like client-side decorations - as if consistency wasn't a problem already) that are more nuanced.
___
1) Or so I believe - I could be wrong. I just wanted to configure DPI for my monitors hoping that it would help me to have things sized correctly, and have read somewhere that it's not a thing in Wayland and all they have is scale factors.
Wayland happened so badly compared to e.g. the Systemd transition or the Pipewire transition because it was developed by Xorg developers who had spent years dealing with its complexity and countless regressions caused by bugfixes breaking obscure behavior that clients depended on. As a result, they swung too far in the other direction. They came out with a minimal protocol where all the heavy lifting was done in each compositor rather than having a single common implementation. Most importantly, the base protocol was completely unsuitable for nontrivial programs. Features such as screen sharing, remote desktop, non-integer scaling, screen tearing, or even placing a window in a certain position were not supported.
Because Wayland's base protocol was unable to replace X11, virtually nobody adopted it when it was initially released in 2008. The subsequent 16 years have been taken up by a grueling process of contributors from various desktop environments proposing protocol extensions to make Wayland a workable solution for their users. This has resulted in endless bikeshedding and requires compromise between different stakeholders who have fundamentally different viewpoints on what the Linux desktop should even be (GNOME vs everyone else). As an example, the process for allowing a client to position its own windows has been ongoing for over two years, and has led to multiple proposed protocol extensions with hundreds of comments on each.
The transition would have been much smoother if Wayland had the same "mechanism over policy" philosophy as X11 and allowed for software to easily be ported to it, but long-term that could have caused the same issues that the Xorg maintainers were facing when they created Wayland.
Was Pipewire a shitshow? PulseAudio was, and it had a bunch of issues (idk about design issues - I haven't really dug into the details, but it certainly had a fair number of unpleasant bugs), but I think the Pipewire transition was fairly uneventful, in a good way.
And systemd was (and still is, I guess) controversial, and there are certainly some design decisions that are questionable. But even in its early days it at least worked for a number of basic scenarios, and the majority of issues I had or heard about were either about its limitations (I remember having issues with journald) or because folks didn't want to change from their preferred rc system, which already worked well for them, while systemd was radically different. So I think it was less of a shitshow than Wayland. Or maybe I'm just forgetting things - it was a long while ago.
The story that X11 somehow forces programs to use outdated drawing primitives was never really true, for as long as I can remember. Compositing on X11 exists.
In principle, it is an extensible generic remote buffer management framework. As such, it could be extended indefinitely without ever breaking backwards or forwards compatibility, and it is also a good fit for modern hardware that essentially deals with remote buffers on the GPU. As someone doing HPC programming on GPUs, I can say that network transparency and GPUs are definitely not at all allergic to one another.
Does it work for anybody? When I tried it a few weeks ago, it would freeze on any dialog boxes launched by an application, then would unfreeze if you managed to close the dialog without being able to see it.
I wish rust people spent their time writing software instead of going around telling other people to do the job for them. They only manage to get others annoyed with this attitude.
Is this either rust or wayland? Nope. Memes are fun though.
Though I have no problem with anything being rewritten in rust. It's usually just as fast, and more importantly, freaking modern. Look at `lsd` or `ripgrep`