For those on Windows (and with an Intel WiFi card) there is a hidden but extremely useful feature: "Project" can be used to turn any other Windows PC/laptop/tablet into a second screen. Just enable "Cast to this PC" in Settings, then press Win+P on the host PC and the second machine instantly becomes a secondary display. To Windows it appears as a normal display, without any hackery involved.
My small company spends thousands per year on TeamViewer licenses, just because RDP cannot attach to running desktop sessions (on machines using Nvidia graphics cards). Do you think this is something that could change in the future, or is it a fundamental property of RDP?
I really dislike our reliance on TeamViewer, especially now in covid times when we're relying even more on remote access.
Can you use a RealVNC variant instead? I just replaced TeamViewer with UltraVNC single click for my remote support needs, it’s free, it’s branded with my logo and my clients find it easier to use.
I'm not sure; I think I've tried it, but I don't remember what the result was. Note that our goal is decently fast (20fps+) 3D graphics.
I've also tried Moonlight (an open implementation of NVIDIA GameStream), which seems to have the performance, but the client wasn't very usable for this use case.
Oh, yeah it's probably not going to handle that too well. It's not really designed for full motion video. I tried running a game over it and only got a couple of FPS (although that's over an ancient USB WiFi dongle so modern performance might be better).
Still great for general productivity type stuff though!
RDP/termsrv by design creates a new session for incoming connections. However, with Windows Virtual Desktop (WVD), Windows Containers, and Windows AppGuard, we definitely have a way of launching remote applications in an existing session.
Have you perhaps looked at WVD or RemoteApp? If you'd like to talk more professionally, you can email me at {first_name}.{last_name}@microsoft.com (my name is available on github.com/zeusk).
This might sound weird, but Steam allows you to stream your desktop for games and provides full control as well as an audio and video stream. Works on any device.
Wonder if it could help. Being targeted at games, it has great performance.
WebRTC is the way of the future for open source remote desktops. I don't know exactly why, but I have a strong feeling that Google is using WebRTC for Chrome Remote Desktop.
I have trouble with RDP and a 1060, though I've not tried in many months so it might be something that has since been fixed.
The issue seemed to be switching between remote and local access, i.e. using the machine locally, then from remote, then coming back to local. It would usually survive a couple of cycles of this, then just show a black screen after login. Once in that state, tricks to restart the graphics driver did not work, and RDP stopped working too, though the rest of the machine was up, as services like IIS and file shares kept responding just fine.
My hacky solution is to run a VM for most day-to-day work on that machine and remote into that, only using the bare metal when I need the fancy gfx card or the little bit of extra oomph gained by not having a virtualization layer in the mix.
Yeah, I work at a games studio; we are fully remote at the moment due to covid, and disconnecting/reconnecting RDP usually crashes whatever is running on the machine at the time with a "GPU removed" error. Also, only DirectX applications work through RDP, nothing OpenGL/Vulkan based. We have some Citrix licences for that.
Are you talking about WiDi? I tried to use that on my 3rd-gen i7 laptop. It was an awful experience on Windows 7. Then I switched to Linux and never went back.
If I remember correctly, this feature is based on Miracast which means that it also works for most WiFi-enabled TVs.
Very useful for presentations, if the Intel drivers and TV manufacturer code have mercy on you that day and work without crashing, artefacts or random disconnects.
Does that ever work reliably? I've tried it on a number of different systems, and half the time it just does nothing, and when it does, the screen updating is quite choppy to the point of being unusable.
Similar here; my 5-year-old laptop doesn't support it. I'm willing to accept lower performance if a non-accelerated software route could be made to work.
It feels like trying to get troubleshooting help for Windows is shouting into the void, so I, along with many others I think, have long since just given up.
I, for instance, bought "Assure Software Support" at $99 a year to try to get support for an unrelated (USB-C dock) issue. There seemed to be no way to actually use it, so I just gave up, resentful that I'd paid Microsoft a support fee.
This. There doesn't seem to be a price point which is effective for getting both help and getting information back inside Borgs (of all kinds) to actually effect change.
I had a problem at work with Microsoft X.509 certificate management. I even found the reasonably capable, actual decision-making person who did X.509 at a conference, and he was pretty clear: nothing I said to him there, or in any other forum, was going to change the pace inside the company on the problem at hand. (He did point out it affected the DoD, so I was somewhat assured it was going to get fixed.)
OTOH, when it was pointed out how badly Microsoft's TCP implementation behaved, they did make changes. So "it depends".
I’ve submitted plenty of detailed bug reports with zero response or just form replies, so I don’t waste my time shouting into the void anymore. Having an actual response that isn’t just “we appreciate your report and will have someone look into it” might help.
There should be a better way of reporting a bug on Windows than posting it on a public forum full of spam that is only accessible from a Metro app which does not come with LTSC.
To add: it's GPU accelerated, with low to no CPU usage as well on recent drivers, unlike the other 99% of solutions, so no more whiny fans.
It works reliably only with recent hardware; if you see a few performance profile options to choose from, that's an indication of HW accel.
The fact that this part isn't linked to from the "Cast to wireless display" section of settings is absurd. Win10 control panel is a mess. I actually made a real effort to find the setting. Your post was the only thing to point me in the right direction.
I could be mistaken, but I figured I'd point this out since it was my experience: the host PC must have wireless (context: I had a PC with only an ethernet connection, no wireless card).
It will take advantage of the ethernet connection, making for a more responsive experience, but however it is designed, it requires wireless even if both computers are connected via ethernet.
I connected a usb wireless dongle and it did the trick for me.
It’s best used for desktop apps and things like PowerPoint. It starts to lose frames and have tearing when doing video. I doubt it would be acceptable for any gaming.
It works well enough, but it’s not going to satisfy people who are concerned with things like benchmarks and FPS.
Yep, it does - the underlying technology is based on Miracast and works only with Wifi adapters made by Intel because it's point-to-point. Sadly it doesn't work through an AP connection at all.
Oh, and I played a bit with multiseat yesterday. If you want, every VNC user can have their own mouse/keyboard, with independent focus.
Though that currently takes a bit of fiddling (`swaymsg seat seat_name attach "virtual pointer"` + one wayvnc instance per pointer), wayvnc reportedly has a branch where it's done automatically.
For those lucky enough to have a VGA port: I recently did a setup with an iPad 3 as a second display for a laptop [0], using VGA and a few resistors to create a dummy display.
Yeah, "add virtual ___" is one of the hardest and most complicated things in Windows, probably worse on macOS, and definitely where FOSS OSes shine.
For Windows "software display" driver implementations, off the top of my head there are DisplayLink, the Fresco Logic FL2K driver, and a homebrew one by q61.org[1], but none are open source.
Of those, I’ve had the longest (since 2007 and then 2009) and most reasonably reliable experience with the DisplayLink family, to the point that I look for that and avoid alternatives. For example, USB parts from Startech, Targus, Belkin, Kensington, and Plugable leverage DisplayLink.
Thanks! I didn't find a way to create a dummy display with native Win10 tools. There was a way to do it in previous minor releases, but with recent Win10 updates Microsoft has removed or disabled it. So I went for a hardware solution instead, as I didn't want to install any 3rd party software. It's quite a reliable solution; you just need to be careful not to move the laptop too much :)
macOS is a pain as well. There was a fairly well documented way (in the open source world) of writing a stub driver to create a virtual display. Apple axed it without notifying anyone when their own "Use an iPad as a second screen" support came out. All the third party apps that provided that function broke overnight, notably Duet Display.
Considering Duet works again, I wonder if the mechanism they use was quietly shared with them or if it was publicly shared.
I see that there are HDMI and DVI-D dummy adapter editions, but they're a lot more expensive. I wonder if you could just get a cheap mass-produced HDMI-to-VGA or DVI-to-VGA adapter and then push in resistors to save a few bucks.
With DVI-I, the resistor trick should work (it's just VGA), though it's not quite as easy as just bending a few one-cent resistors into place.
for anyone that's interested in this general concept, also take a look at 'barrier', which is the continuation and fork of synergy. use one keyboard and mouse to drive multiple independent PCs at one desk, roll the mouse/keyboard off the edge of one screen and onto the other.
sort of an inverse KVM. and without the ridiculous cost/licensing issues of the official synergy.
An often overlooked feature of synergy is the ability to share clipboards between machines. You can copy a URL from your email and paste it into a browser on another box. Particularly useful if the reason you're doing synergy is to do cross-platform testing and validation.
The thing that made me stop using it was that they punted security to be someone else's problem, so you had to set up some ssh tunnel and be sure to run it only over that. It's not so bad on Linux or OS X, but that's quite a bit of extra work on Windows.
Does barrier take care of authentication or session encryption?
If I had to guess, no; barrier is much like VNC in that it's expected that you have an ssh wrapper set up with public/private key authentication.
I've only used it between macOS and Linux machines, so that's easy. Here's the idea behind the very tiny shell script I use for VNC-over-SSH to a remote machine: the VNC daemon on the remote machine only listens on its own localhost, and I use ssh to form the tunnel, then use the VNC client on my workstation to connect to localhost:5902 to access it.
echo "localhost port 5902 for the VNC client to remotehostname.net"
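Spelled out, the whole thing is barely more than one ssh invocation. A sketch, where `user` and `remotehostname.net` are placeholders for your own account and host:

```shell
#!/bin/sh
# VNC-over-SSH tunnel sketch. Assumes the remote VNC daemon listens
# only on its own localhost:5902; "user" and the hostname are placeholders.
TUNNEL_CMD="ssh -N -L 5902:localhost:5902 user@remotehostname.net"
echo "localhost port 5902 for the VNC client to remotehostname.net"
echo "run: $TUNNEL_CMD"
```

`-N` tells ssh not to run a remote command (tunnel only), and `-L 5902:localhost:5902` forwards local port 5902 to what the remote machine sees as its own localhost:5902; the VNC client on the workstation then connects to localhost:5902.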
I actually think this is better, because for a very small open source project like barrier, possibly developed by a single person, the time and effort needed to be ABSOLUTELY CERTAIN you've used the crypto libraries correctly is a lot of work and worry.
Whereas if you use ssh you can be fairly certain that it's been battle tested by a huge number of people who have a lot more time and resources than yourself.
> It's not so bad on Linux or OS X, but that's quite a bit of extra work on Windows.
Nowadays, we have wireguard, so you can create a secure little network to run this sort of thing over much more easily.
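For reference, a minimal point-to-point WireGuard setup really is just a few lines per machine. The keys and addresses below are placeholders (generate real keys with `wg genkey` / `wg pubkey`):

```
# /etc/wireguard/wg0.conf on machine A (mirror it on machine B)
[Interface]
PrivateKey = <machine-A-private-key>
Address    = 10.7.0.1/24
ListenPort = 51820

[Peer]
PublicKey  = <machine-B-public-key>
AllowedIPs = 10.7.0.2/32
```

Bring it up with `wg-quick up wg0` on both ends (the peer that initiates also needs an `Endpoint = <other-machine-ip>:51820` line in its `[Peer]` section) and point barrier, VNC, or whatever else at the 10.7.0.x addresses.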
Running Tailscale (https://tailscale.com/) on each machine you're using, and then using the Tailscale private IPs with synergy, should be both secure and painless across those three platforms.
If you're on the same layer 2 broadcast network segment (typically some machines in a home office plugged into the same dumb switch, or all on the same VLAN), the time/effort to do this with ssh is a lot less than using wireguard to talk between two machines that are literally plugged into the same switch.
The typical use case for barrier is something like two desktop PCs, each outputting to two displays but with no mice or keyboards, plus one laptop in the center, where you want to use your laptop's keyboard and trackpad to run everything.
The comment above was about how running ssh forwarding correctly on windows is involved and has awful UX, which is true in my experience too.
Tailscale has much better UX, so it solves that problem.
In addition, wireguard is just as simple to setup as ssh (again in my experience), can operate over local LAN too, and some people have found it to have better performance than ssh forwarding (such as https://news.ycombinator.com/item?id=21162273).
I often have problems with barrier. Sometimes it'll just refuse to stop when I press "stop"; it'll continue replicating the mouse to the other computer.
Other times, it'll refuse to start working again, requiring me to restart one or both computers, with no way to know which one is messed up.
If I could guarantee one of the non-subscription ones would work properly, I'd probably pay. But since Barrier is apparently a fork of Synergy, I don't trust it. And most of the others have weird monetization, or I've heard bad things about their customer service... So I don't trust any of them.
In the end, I'm still using Barrier and just dealing with its problems.
I had quite a few problems with it but I think they mostly went away when I stopped closing the barrier window. It can't properly handle picking up the running session when you open up a new window so when you close the window the service becomes an orphan. Then it will try to start a new session on top of the other which can result in confusing behaviour.
I have a 2016 iMac which has an amazing screen, but sadly Apple removed Target Display Mode in 2012, which makes it useless for me. It has been collecting dust, since I can't stand macOS except when I really have to use it (mostly some sound production things at work).
I can't fully vouch for this setup but there are some dirt cheap HDMI-to-USB devices on Amazon (got mine for $15) now. You can run HDMI out from your main machine to the USB of your iMac and then capture the input with OBS and run it full screen. A little hacky but it should work.
I've been using one to pipe an old Android phone's camera to a virtual cam on my laptop.
Anyway, the advantage would be you can plug directly into it and get full performance (no lag, full bandwidth) because you've basically converted it into a monitor.
Because I have a Linux workstation and I want to have 2 screens. The iMac screen is waaay better than my main screen, which is why I want to use it. In daylight, those extra 150 cd/m2 really help
Not sure if this helps the dual-screen desire, but to make more use of it, couldn't you install Linux directly on the iMac? I did that (installed Pop!_OS) recently with a MacBook Air 2012, and it works flawlessly as far as I can see: all special keyboard buttons, sound, sleep/wake, battery, trackpad (minus gestures), etc.
Boy would this comment have saved me a lot of time had it been posted on the original discussion a couple of days back!
I did eventually find that project, but only after having written what is essentially the non-VNC half (creating and managing evdi displays) of it from scratch...
How's the latency on that? I've been looking for a good "dock" solution for my phone, but as its Type-C port is only USB 2, it doesn't support video out while charging.
Pretty good. I used it for a couple of years doing mobile development from a Windows machine and it was very responsive. The setup was a bit odd (licensing issues), and I ended up switching to scrcpy, which is free and also very good.
Came here to say the same. If everybody included just a single high level picture like that, you’d give newcomers a real head start in understanding the code.
That is exactly what I think when I see any project on GitHub without such a diagram. I would contribute to more projects if they simply had a brief diagram like the one I made :)
Thank you all! Tons of great ideas; now I'm scratching my head thinking about how to fit them into a simple API. Here is what I think a virtual display API should look like: https://github.com/pavlobu/deskreen/tree/master/drivers
Btw, deskreen has been shared enough times now to become popular on "TypeScript LibHunt" too https://www.libhunt.com/lang/typescript (disclosure: LibHunt founder).
Definitely seems like a helpful project. The stars on GitHub have jumped from 1 to 1,900+ in just one week! Enjoy the ride and thanks for open-sourcing your work.
Since Android 10 has a desktop mode, it would be nice to have virtual displays there as well. On rooted phones it shouldn't be hard to get such a driver into the kernel, and Android distros like LineageOS would probably be more than happy to incorporate such a solution.
Another thing is Linux: drivers for virtual screens already exist there; check out Xvfb and xorg-video-dummy.
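For the xorg-video-dummy route, a minimal config fragment looks roughly like this (the driver comes from the `xserver-xorg-video-dummy` package on Debian/Ubuntu; the resolution, VideoRam, and sync ranges are just examples):

```
Section "Device"
    Identifier "DummyDevice"
    Driver     "dummy"
    VideoRam   256000
EndSection

Section "Monitor"
    Identifier  "DummyMonitor"
    HorizSync   30.0-70.0
    VertRefresh 50.0-75.0
EndSection

Section "Screen"
    Identifier "DummyScreen"
    Device     "DummyDevice"
    Monitor    "DummyMonitor"
    SubSection "Display"
        Modes "1920x1080"
    EndSubSection
EndSection
```

Xvfb is even simpler: `Xvfb :1 -screen 0 1920x1080x24` gives you a headless display `:1` that a VNC server such as x11vnc can then export.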
It usually requires internet access: due to how WebRTC works, signaling messages have to be exchanged somewhere. Deskreen solves this problem by running a local signaling server, so you don't need internet access.
So, combined with @gambiting saying something similar exists on Windows, I'm struggling to see the utility of Deskreen given that the two main OSes already have such functionality built in.
To me, this project seems much more useful than these platform specific apps. ANY screen with a browser means much more compatibility than iPad only for the second screen.
Is this the correct tool if you are trying to use a cheap tablet as a wireless display for a raspberry pi?
Edit: I don't think this would be a good solution. It mentions getting a qr code from the host machine and approving the connection inside the application. These things would be really hard to do on a pi with no screen over an ssh connection.
It might be super interesting if someone could make a native Android/iOS app for this since mobile web browsers can't do "real" full screen. Would be useful for turning spare tablets into displays and also perhaps for quickly checking color calibrations of photo adjustments on mobile displays.
When I'm coding, I generally have code on one, and the other has a browser with documentation, notes, issue tracker, or just used for searches etc. Usually that monitor is split and also has chat visible.
When I'm testing/debugging, one monitor might be the web app I'm working on, the other is the code I'm stepping through. Or I'll be tailing a log file or watching something in a database while using the app.
I do lots of client/server stuff, so sometimes I have a monitor showing the server-side (web UI and/or logs) and the other showing client-side (cli, logs, and/or local UI). The key useful thing is seeing the client do an action and send something to the server, and seeing the server instantly react: that's not possible when you can't see everything.
In some cases, I'm debugging remotely, and a monitor will be partly or completely dedicated to ssh/rdp/vnc to the remote system(s), with the other used for browser or cli app I'm testing. Usually chat, documentation, source, issue tracker is mixed in there too.
There's a couple of things I work on that have long-running builds (10+ minutes) or integration tests that take a bit over 30 minutes. Both are long enough that I'll do something else while waiting, and suddenly 2 hours pass before I remember, so I like keeping the build status page visible somewhere to avoid that.
Even for non-work/coding sometimes I just have YouTube or Netflix open (either 1/4 window or full-screen) while I'm browsing the web on another.
To me, working on my laptop is doable, but feels like having one hand tied behind my back when compared to working with multiple monitors and a proper keyboard and mouse.
Vertical oriented 1080p: chat windows, sometimes replaced by multiple command line windows.
Horizontal 4K: Emacs, Visual Studio Code, Browser with repositories/jira, command line windows.
Horizontal 4K: Browser with documentation, secondary VS Code windows, Outlook, Teams.
Macbook Pro screen: Finder, calculator, more command line windows.
Most of my "main" work takes place in the 2 horizontal 4K screens. The other 2 screens are "secondary" information. Having that much screen real estate allows me to more easily collect and arrange the information I need to do my job.
If I could change anything, it would be getting a 24 inch 4K screen to replace the vertical one, and having a higher refresh rate than 60Hz on all my screens.
I hope that one day 8K screens become affordable, because I'd love even sharper text. Reading on the vertical 1080p screen seems fuzzy compared to the 4K screens. (Which is definitely a "first world problem"!!)
Sometimes I feel like I'm in a different world than everyone else. I'm really happy with my 1080p devices, and upgrading them to a higher resolution would feel like a waste of resources to me (I don't have especially bad eyesight, but on 1080p things are already small enough for my taste).
(Of course I'm talking only about computer monitor, for very big TV screens or for screens that I have 10cm from my nose it's a different story)
WFH with an old Apple 20" screen as my only display: 1680 x 1050. With multiple workspaces and a tiling window manager, I find it completely usable. At the office I had two screens, but I don't really miss them.
Just got (as in, 2-3 hours ago) a new 4k display (32") to add to my 2x22" setup. My eyes are already happier. I'm using windows with scale set to 150% (for the 4k display) and 100% for the other 2.
For me personally, it is only really noticeable when I shift my eyes from one of the middle 4K screens to the 1080p screen.
I am a little picky about fonts and their display though. The higher resolution the screen, the better. I have my Emacs configuration using a bunch of different fonts and sizes/weights/etc for my Org-mode, Terraform, TypeScript, and other editing. For example, when a todo item is put into "in progress" the heading is slightly larger and bolder, and when that item is complete, marking it "done" changes the text to italic, extra-light, and grey to reduce its visibility.
Time spent messing around with fonts is definitely an expression of ADHD and active procrastination, but I do get a pleasing effect from it! :)
I sometimes have to rough it with only a single external screen! Once I had to work on only the laptop by itself. The Horror!
;)
But, more seriously since this is HN: While the extra screens are great, as is the mouse and mechanical keyboard, it's not much of a hassle at all to work in a different location. I think the extra screens, desk space, peripherals, all go towards making me more comfortable rather than efficient. Which for me, rocking the oh-so-annoying ADHD along with nerve and back pain, means I can focus on work for that much longer in one session.
I should be working from home at least until mid 2021, hopefully much longer or even permanently. A good working environment can really improve your mental health at a time when, in the USA especially, things are incredibly stressful.
Hello from another ADHD sufferer. Interestingly though I fall on the other side of the remote work issue - I cannot wait to return to the office. My brain is quite stubborn about categorizing spaces; my computer room at home is where I do some side project programming and a fair amount of gaming. Having to try to recategorize it as a working space has proven impossible, and I don't really have any other space in this house, with my wife also working remotely.
Besides, more generally the drive to the office helps put me in work mode. The whole office building is a place where work happens. Then the drive home helps put me in leisure mode. Without those neatly coded space/time contexts I have been seriously struggling.
Everybody that keeps talking about how the world is going remote has been giving me a fair amount of anxiety. If I lose my office I'm not sure how I'm going to function. Besides, my coworkers and I miss seeing each other and being able to work together in person. We don't do much pair programming, but when we do, trying to do so over screen share has proven to be significantly less productive.
But then again, I am lucky to have an amazing team of people who I genuinely enjoy being around, and who are all very respectful of quiet time to work when needed. I'm also definitely an introvert but maybe less so than the average dev.. I miss social contact!
Because I can't see what's going on on a virtual desktop out of the corner of my eye? Because I can't easily play a game on one virtual desktop and have a browser open to glance at a guide, or a discord server or ...
I don't know why you wouldn't want two screens, personally. I've worked with three before and found them useful. Virtual desktops never felt natural.
The closest I came to adopting them really was during the "compiz" era, using the desktop cube effect. That metaphor seemed to play into my brain's spatial awareness quite well. But still not quite as well as a separate screen.
> The closest I came to adopting them really was during the "compiz" era, using the desktop cube effect. That metaphor seemed to play into my brain's spatial awareness quite well. But still not quite as well as a separate screen.
This makes me wonder how your brain can handle multiple browser tabs ;)
For me personally (not who you responded to), honestly not very well. I'm always losing track of which tabs are which. When the tabs are simple content it's mostly okay; mentally it's just like having a tabbed notebook. But if the tabs are interaction- or app-centric, my brain begins to slowly jam up.
My ADHD almost certainly takes a lot of the blame for that.
Honestly I have limits with those too. Any more than fit across the screen and I start to lose track. I find it mind boggling when people say they have dozens and dozens of tabs open at a time.
Definitely. I typically have browsers open in two or three different workspaces, for different purposes and with different profiles. But within any browser session, I never have more than half-a-dozen open tabs. I can't really mentally manage more than that.
I think that having multiple screens might use less cognitive resources than having virtual workspaces, since it is very natural to move your head / mouse whereas the workspace manager is a more high level construct.
When I do purchasing for my ecommerce business I keep a spreadsheet on one screen and a browser on the other. Occasionally I do it from my laptop with one screen, and it takes longer and I make more mistakes, because the spreadsheet is huge and there is too much cognitive overhead when I have to keep flipping between windows/virtual desktops.
Also when coding. I keep docs and messaging apps on one screen and I can keep the other screen clean with only code. It really is faster and easier with two screens.
I've seen this question a few times and have been looking for hard sells, especially because I spent quite a few workdays during summer in a hammock without a second screen.
I've found about two hard sells so far:
Online presentations and demos to people. For example, showing some slides to some people, with some jumps to code or terminals. In this case, it's extremely valuable to have the call with webcams open on a second monitor so I can keep tabs on the expressions of the audience in order to adjust tempo. Switching the contents of the presentation screen / shared screen without warning is jarring and confusing to the audience. And commonly used communication programs do not support sharing one virtual desktop easily, only screens and/or windows.
And sometimes it is valuable to be able to display more information at once. During larger outages, for example, it helps to keep key metrics of the system on one screen while poking the system on the other. This can be done with virtual desktops, yes, but it has less overall cognitive overhead for me to just have a stable screen with the monitoring data and/or logs on it instead of constantly flipping between desktops.
And again, during screen sharing, people hate it and get confused if I flip the main screen around too much between monitoring data and shells. So having a work screen to share with people on a call and another screen for information helps reduce that.
A lot of other use cases are convenient with more screen space, but good virtual screens with hotkeys work almost as well I've found.
Virtual desktops are useful, but often a physical screen is better. For example, it's much more convenient to glance back and forth between code and logs/stack traces.
This is why cursors on text terminals would blink.
But I'm not sure making a mouse pointer blink is the right thing to do.
The whole Windows-Icons-Menus-Pointer environment was developed when 640x400 was a high resolution and 22" monitors were considered huge. So it was easy to pick out the pointer at rest, but I can see how you can get lost on what is basically a 4K-resolution TV.
Guess I'll jump in too :). I have two 27" 4k monitors, the one directly in front holds work, the one off to the left has docs, chat, email, etc. I also use virtual desktops and I could arrange all this stuff with those, but having to switch back and forth is a minor cognitive hurdle I choose to avoid.
I tend to run both. The second screen lets me have source/editor on one and reference material on the other. My daily driver is a three-screen setup (laptop display and two external monitors). There's usually some combination of logs, app UI, source, and reference material across all three, so I can switch between them without having to think about which virtual display they're on. In support situations, I'll have a screen share running on one, chat on another, and likely reference material on a third. You could do it with a single huge display, but it would need to be huge and have a way to subdivide it into reasonable slices for easy window snapping. Plus, you aren't going to get that in anything that can be used as a laptop...
The web browser's inspector is my main use at the moment. The M1 MBP display is simply too small to have the inspector and the browser comfortably on screen simultaneously, and it's not possible to flick between virtual desktops (or application windows) without losing context (some targeting stuff is hover-based, for example). Plus it's annoying to tweak a CSS style while flicking back and forth all the time instead of seeing your change reflected as you make it.
I grew up on programming on a Mac SE, so layering feels natural to me versus needing to have everything on screen at once. That said, a 26” 4K monitor “above” (with the resolution increased a bit) my 15” MacBook Pro has been a good setup in WFH mode.
It's more screen real estate. Where it's useful depends on what you are doing.
I often have one screen for "actual" work and the other screen for chat/mail/... then checking for something going on there isn't a full context switch, but a short look.
Sometimes in development a second screen can be useful to have a screen full of different code files and second screen for documentation or the app or logs or whatever.
During video chat I have the faces of the others on one screen and notes and other things on the other, or during social videoconfs during covid, I have faces on one screen and a (board) game we are playing on the second...
It always depends on what you are doing. There are many things one can do.
Is this a TeamViewer/AnyDesk level of interactivity or just a Twitch level of interactivity from 2nd screen? As in 2nd screen can be used to control the main source screen or just to watch what happens there?
Works the same as an additional monitor, which is what it is. The dummy plug creates the second monitor through native support in the OS; Deskreen connects your additional device to it via a web browser. I installed it on Ubuntu and had it working in two minutes, using my iPad Pro 12.9" as a second monitor ;-) Looks and works well, with decent latency.
indeed, i tried it myself now. but there was something in the wishlist of features about remote control. it should be possible to capture mouse clicks and most keyboard input in the browser and send it back. don't know how much effort that is though
I've tried this the past three times it's made it to the front page. Using both an iPhone and an iPad with Safari, I was unable to even connect to the server. It acts like it's loading, then fails.
I really want something like spacedesk (another one of these) but without the HDMI dummy plug; spacedesk is glitchy, but it doesn't need a plug and can support many devices.
Hi. It's open source, so expectations can be lower than for commercial projects; it's made on pure enthusiasm. We are figuring out how to hack the OSes to get rid of the dummy display plugs (just look at the repo README.md). So stay tuned, we will be updating on that in future releases. Cheers