Deskreen – Turn any device with a web browser to a second computer screen (github.com/pavlobu)
723 points by maydemir on Jan 24, 2021 | hide | past | favorite | 199 comments



For those on Windows (and with an Intel WiFi card) there is a hidden but extremely useful feature: "Project" can be used to turn any other Windows PC/laptop/tablet into a second screen. Just enable "Cast to this PC" in Settings, then on the host PC press Win+P from anywhere and the second machine becomes a secondary display instantly. To Windows it appears as a normal display, with no hackery involved.


My team at Microsoft works on the underlying code. I'm not sure if it is strictly Miracast, but it definitely makes use of Indirect Display.

We use the Indirect Display driver model for lots of other things, such as RDP, graphics over USB, etc.


My small company spends thousands per year on TeamViewer licenses, just because RDP cannot attach to running desktop sessions (when using Nvidia graphics cards). Do you think this is something that could change in the future, or is it a fundamental property of RDP?

I really dislike our reliance on TeamViewer, especially now in covid times when we're relying even more on remote access.


Can you use a RealVNC variant instead? I just replaced TeamViewer with UltraVNC single click for my remote support needs, it’s free, it’s branded with my logo and my clients find it easier to use.


I'm not sure, I think I've tried it but I don't remember what the result was. Note that our goal is decently fast (20fps+) 3d graphics.

I've also tried Moonlight (an Nvidia GameStream implementation), which seems to have the performance, but the client wasn't very usable for this use case.


You can try Parsec as well. It wasn't created for this purpose, but it works very well for productivity. In my experience it maintains 60fps.


Oh, yeah it's probably not going to handle that too well. It's not really designed for full motion video. I tried running a game over it and only got a couple of FPS (although that's over an ancient USB WiFi dongle so modern performance might be better).

Still great for general productivity type stuff though!


What issues did you have with Moonlight?


Sorry for the late reply.

RDP/termsrv by design create a new session for incoming connections. However, with Windows Virtual Desktop (WVD), Windows Containers and Windows AppGuard, we definitely have a way of launching remote applications in an existing session.

Have you perhaps looked at WVD or RemoteApp? If you'd like to talk more professionally, you can email me at {first_name}.{last_name}@microsoft.com (my name is available on github.com/zeusk).


This might sound weird, but Steam allows you to stream your desktop for games and provides full control as well as stream of audio and video. Works on any device. Wonder if it could help. Being targeted at games, it has great performance.


Why don't you use FreeRDP, X2Go or rdesktop - all open source???


Those are clients.


WebRTC is the way of the future for open source remote desktops. I don't know exactly why, but I have a strong feeling that Google is using WebRTC for Chrome Remote Desktop.


Would you mind clarifying the problem you are facing?

I am able to RDP freely onto desktops with rtx 2070/gtx 960 cards no problem.

Is this more something specific involving some degraded performance/reduction in features?


I have trouble with RDP and a 1060, though I've not tried in many months so it might be something that has since been fixed.

The issue seemed to be switching between remote and local access, i.e. using the machine locally, then from remote, then coming back to local. It would usually survive a couple of cycles of this, then just be a black screen after login. Once in that state, tricks to restart the graphics driver did not work, though the rest of the machine was up, as services like IIS and file shares kept responding just fine.

My hacky solution is to run a VM for most day-to-day work on that machine and remote into that, only using the bare metal when I need the fancy gfx card or otherwise the little bit of extra oomph gained by not having a virtualization layer in the mix.


Yeah I work at a games studio, we are fully remote at the moment due to covid, and disconnecting/reconnecting RDP usually crashes whatever is running on the machine at the time, due to "GPU removed" error. Also only DirectX applications work through RDP, nothing OpenGL/Vulkan based. We have some Citrix licences for that.


Are you talking about WiDi? I tried to use that on my 3rd-gen i7 laptop. It was an awful experience on Windows 7. Then I switched to Linux and never went back.


If I remember correctly, this feature is based on Miracast which means that it also works for most WiFi-enabled TVs.

Very useful for presentations, if the Intel drivers and TV manufacturer code have mercy on you that day and work without crashing, artefacts or random disconnects.


Also this requires support from the WiFi chipset. Most laptops likely have support, but not all desktop WiFi dongles.

I understand it’s not the use case, but it would be cool if this also worked over ethernet.


Sounds marginally better than the third party implementations of AirPlay that I’ve used.


Does that ever work reliably? I've tried it on a number of different systems, and half the time it just does nothing, and when it does, the screen updating is quite choppy to the point of being unusable.


I remain annoyed that an Intel driver update took away Miracast from one of my laptops. It worked fine and then it was gone.


Similar here; my 5-year-old laptop doesn't support it. I'm willing to accept lower performance if a non-accelerated software route could be made to work.


I have tried this many times over the years on plenty of different computers and have never once got it to work a single time.


Next time it doesn't work for you, please file a report with Win+F (Feedback hub) and we can take a look at what's failing underneath.


It feels like trying to get troubleshooting help for Windows is shouting into the void, so I, like many others I suspect, have long since given up.

I for instance bought "Assure Software Support" at $99 a year to try to get support for an unrelated (usb-c dock) issue. And there seemed to be no way to actually use it, and I just gave up, resentful that I paid Microsoft a support fee.


This. There doesn't seem to be a price point which is effective for getting both help, and information back inside Borgs (of all kinds) to actually effect change.

I had a problem at work with Microsoft X.509 certificate management. I even found the reasonably capable, actual decision-making person who did X.509 at a conference, and he was pretty clear: nothing I said to him there, or in any other forum was going to change pace inside the company on the problem at hand. (he did point out it affected the DoD so I was somewhat assured it was going to get fixed)

OTOH, when it was pointed out how badly Microsoft's TCP implementation behaved, they did make changes. So "it depends".


With feedback hub, the reports along with any diagnostics get routed to engineering teams assigned to the feature.

The only downside imo is that we (engineers) cannot directly respond to end-user feedback.


I’ve submitted plenty of detailed bug reports with zero response or just form replies, so I don’t waste my time shouting into the void anymore. Having an actual response that isn’t just “we appreciate your report and will have someone look into it” might help.


To the user, though, this feels precisely like shouting into the void.


There should be a better way of reporting a bug on Windows than posting it on a public forum full of spam that is only accessible from a Metro app which does not come with LTSC.


Why did you hijack the shortcut for search for this, anyway?


Agreed, I've never gotten it to work, or when I have, it very quickly failed, and disconnected thereafter.


To add: it's GPU-accelerated, with low to no CPU usage on recent drivers, unlike 99% of other solutions, so no more whiny fans. It reliably works with recent hardware only; if you see a few performance profile options to choose from, that is an indication of HW acceleration.


Huh, I never knew about this.

Go to Settings > Projecting to this PC to enable it. You may need to install the Wireless Display optional feature first.


The fact that this part isn't linked to from the "Cast to wireless display" section of settings is absurd. Win10 control panel is a mess. I actually made a real effort to find the setting. Your post was the only thing to point me in the right direction.


I could be mistaken, but I figured I'd point this out since it was my experience: the host PC must have wireless (context: I had a PC with only an ethernet connection, no wireless card).

It will take advantage of the ethernet connection, making for a more responsive experience, but however it is designed, it requires wireless even if both computers are connected to ethernet.

I connected a USB wireless dongle and that did the trick for me.


Intel WiFi specifically? What's the special bit that makes it work?


It probably includes a license to some other software required for this to work


WOW this is huge, thanks so much!


How laggy is it over Wi-Fi?


It’s best used for desktop apps and things like PowerPoint. It starts to lose frames and have tearing when doing video. I doubt it would be acceptable for any gaming.

It works well enough, but it’s not going to satisfy people who are concerned with things like benchmarks and FPS.


Nothing to measure it with, but I never think about the lag when I do it over hotel WiFi.


I believe this exclusively uses WiFi direct and skips the AP completely, but I can't completely confirm with a quick google.


Yep, it does - the underlying technology is based on Miracast, which is point-to-point, and works only with WiFi adapters that support it (like Intel's). Sadly it doesn't work through an AP connection at all.


It works simultaneously here; Intel adapters and most phones support multi-role.


Yes, I just mean it doesn't work over your WiFi connection, so you can't connect another device elsewhere on the network, it has to be in range.


For those interested in something similar with sway, you can:

- Add a headless display with `swaymsg create_output SOME_NAME`

- Configure its resolution and position like you do for other outputs

- Start a VNC server on that output with `wayvnc --output=SOME_NAME`

- Start a VNC client somewhere else you want to use your second screen (there are some that are browser-based)

Unfortunately it is somewhat undocumented: https://github.com/swaywm/sway/issues/5553 but you can find some info on the net: https://www.reddit.com/r/swaywm/comments/k1zl41/thank_you_de...

I saw in that readme that they wanted ways to get rid of dummy adapters. The above is one such possible way for sway.
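Put together, the steps above can be sketched as a short script. The output name, geometry, and port are examples, and depending on your sway version `create_output` may name the output for you (e.g. HEADLESS-1):

```shell
# 1. create a headless output
swaymsg create_output HEADLESS-1

# 2. give it a mode and place it to the right of a 1920px-wide main screen
swaymsg output HEADLESS-1 resolution 1920x1080 position 1920 0

# 3. serve that output over VNC (wayvnc listens on port 5900 by default)
wayvnc --output=HEADLESS-1
```

Then point any VNC client (including a browser-based one) at that machine's port 5900.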


Oh, and I played a bit with multiseat yesterday. If you want, every VNC user can have their own mouse/keyboard, with independent focus.

Though that currently takes a bit of fiddling (`swaymsg seat seat_name attach "virtual pointer"` + one wayvnc instance per pointer), wayvnc reportedly has a branch where it's done automatically.
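A rough sketch of that fiddling, assuming the seat name, output name, and port are placeholders and that your wayvnc build has the `--seat` option (check `wayvnc --help`):

```shell
# attach a virtual pointer to its own seat so the VNC user
# gets an independent cursor and keyboard focus
swaymsg seat guest attach "virtual pointer"

# one wayvnc instance per pointer, bound to that seat on its own port
wayvnc --seat=guest --output=HEADLESS-1 0.0.0.0 5901
```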


Thanks for the tip, that's definitely something I'll try out.


> with sway

This made me think, "Sway. Sway. Sway. Sway in the Morning."

Not sure how many hip hop fans there are here...


I know we'll be downvoted to death for this.

Sway is so deeply rooted in the game and remains a soldier for the culture to this day. Always a beam of positivity.


Are we being downvoted because it's not related?


I'm so much older that it made me think of Dean Martin.


Same! Made me smile :)


For those lucky enough to have a VGA port: I recently did a setup with an iPad 3 as a second display for a laptop [0], using VGA and a few resistors to create a dummy display.

[0] https://blog.zenlot.xyz/post/ipad_second_display/


Cool hack, but is that seriously the easiest way to ask Windows for a second display? That is nuts!


Yeah, "add a virtual display" is one of the hardest and most complicated things to do in Windows, probably worse on macOS, and definitely an area where FOSS OSes shine.

For Windows "software display" driver implementations, off the top of my head there are DisplayLink, the Fresco Logic FL2K driver, and a homebrew one by q61.org[1], but none are open source.

1: http://q61.org/en/chibimo/build/


Of those, I’ve had the longest (since 2007 and then 2009) and most reasonably reliable experience with the DisplayLink family, to the point that I look for that and avoid alternatives. For example, USB parts from Startech, Targus, Belkin, Kensington, and Plugable leverage DisplayLink.


Thanks! I didn't find a way to create a dummy display with native Win10 tools. There was a way to do it in previous minor releases, but with recent Win10 updates Microsoft has removed or disabled it. So I went for a hardware solution instead, as I didn't want to install any 3rd party software. It's quite a reliable solution; you just need to be careful if moving the laptop too much :)


You can buy dummy headless display dongles very cheaply, e.g.: https://www.amazon.co.uk/Headless-Display-Emulator-Headless-...


But why do you need hardware for this? Surely this is a purely software problem.



Yea, they want to get rid of that but need to know more about how to handle it in the lower levels of the OS (mentioned on the github page).


macOS is a pain as well. There was a fairly well documented way (in the open source world) of writing a stub driver to create a virtual display. Apple axed it without notifying anyone when their own "Use an iPad as a second screen" support came out. All the third party apps that provided that function broke overnight, notably Duet Display.

Considering Duet works again, I wonder if the mechanism they use was quietly shared with them or if it was publicly shared.


Had to do this years ago for headless bitcoin miners with multiple GPUs.


I see that there are HDMI and DVI-D dummy adapter editions, but they're a lot more expensive. I wonder if you could just get a cheap mass-produced HDMI-to-VGA or DVI-to-VGA adapter and then push in resistors to save a few bucks.

With DVI-I, the resistor trick should work directly (it's just VGA), though it's not as easy as just folding a few 1-cent resistors around.

https://www.geeks3d.com/20091230/vga-hack-how-to-make-a-vga-...


For anyone interested in this general concept, also take a look at 'barrier', which is the continuation and fork of Synergy. Use one keyboard and mouse to drive multiple independent PCs at one desk: roll the mouse/keyboard off the edge of one screen and onto the other.

Sort of an inverse KVM, and without the ridiculous cost/licensing issues of the official Synergy.

https://github.com/debauchee/barrier


Synergy dates back to Silicon Graphics hardware.

An often overlooked feature of synergy is the ability to share clipboards between machines. You can copy a URL from your email and paste it into a browser on another box. Particularly useful if the reason you're doing synergy is to do cross-platform testing and validation.

The thing that made me stop using it was that they punted security to be someone else's problem, so you had to set up some ssh tunnel and be sure to run it only over that. It's not so bad on Linux or OS X, but that's quite a bit of extra work on Windows.

Does barrier take care of authentication or session encryption?


If I had to guess, no, barrier is much like VNC in that it's expected you have a ssh wrapper set up with public/private key authentication.

I've only used it between macOS and Linux machines, so that's easy. Here's an example of the very tiny shell script that I use for VNC-over-SSH to a remote machine.

The VNC daemon on the remote machine only listens on its own localhost; I use ssh to form the tunnel, then point the VNC client on my workstation at localhost:5902 to access it.

  # forward local port 5902 to the VNC server on the remote host's port 5901
  ssh -v -L 5902:127.0.0.1:5901 -C -N -f -l myusername -i ~/.ssh/my_ssh_id remotehostname.net
  echo "localhost port 5902 for the VNC client to remotehostname.net"

I actually think this is better, because for a very small open source project like barrier, which might literally be developed by one person, the workload and time/effort to be ABSOLUTELY CERTAIN you've implemented the crypto libraries correctly is a lot of work and worry.

Whereas if you use ssh you can be fairly certain that it's been battle tested by a huge number of people who have a lot more time and resources than yourself.


> It's not so bad on Linux or OS X, but that's quite a bit of extra work on Windows.

Nowadays, we have wireguard, so you can create a secure little network to run this sort of thing over much more easily.

Running tailscale (https://tailscale.com/) on each machine you're using, and then using their tailscale private ips with synergy, should be both secure and work painlessly across those three platforms
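A minimal sketch of that setup, assuming barrier's stock `barrierc` client binary, and with the 100.x address as a placeholder for whatever `tailscale ip` actually prints on the server machine:

```shell
# on every machine: join the same tailnet
sudo tailscale up

# on the machine with the keyboard/mouse (the barrier server): note its address
tailscale ip -4

# on each client machine: point barrier at the server's tailnet address
barrierc 100.101.102.103
```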


If you're on the same layer 2 broadcast network segment (typically some machines in a home office plugged into the same dumb switch, or all on the same VLAN), the time/effort to do this with ssh is a lot less than using wireguard to talk between two machines that are literally plugged into the same switch.

Since the typical use case for barrier is to have something like two desktop PCs, each outputting to two displays but with no mice or keyboards, and one laptop in the center, where you want to use your laptop's keyboard and trackpad to run everything.


The comment above was about how running ssh forwarding correctly on windows is involved and has awful UX, which is true in my experience too.

Tailscale has much better UX, so it solves that problem.

In addition, wireguard is just as simple to setup as ssh (again in my experience), can operate over local LAN too, and some people have found it to have better performance than ssh forwarding (such as https://news.ycombinator.com/item?id=21162273).


I often have problems with barrier. Sometimes, it'll just refuse to stop when press "stop". It'll continue replicating the mouse to the other computer.

Other times, it'll refuse to start working again, requiring me to restart one or both computers, with no way to know which is messed up.

If I could guarantee one of the non-subscription ones would work properly, I'd probably pay. But since Barrier is apparently a fork of Synergy, I don't trust it. And most of the others have weird monetization, or I've heard bad things about their customer service... So I don't trust any of them.

In the end, I'm still using Barrier and just dealing with its problems.


I had quite a few problems with it but I think they mostly went away when I stopped closing the barrier window. It can't properly handle picking up the running session when you open up a new window so when you close the window the service becomes an orphan. Then it will try to start a new session on top of the other which can result in confusing behaviour.


I never close the window, just minimize it. Thanks for the info, though.


Sound - is there any good tool for transferring that over local network?

The tool I've tried gave poor results with lots of latency.


Barrier doesn't work well if your displays don't have similar DPI - the mouse cursor will slow down or speed up when you move to another display.


This is also compatible with different operating systems, I remember running this with OS X and Windows 10. Pretty wild.


I now regret paying for Synergy!



Note that it is definitely possible to use this without those horrible dongles and even if you don't have any spare video ports at all.

Unfortunately, on both Windows and Mac, open-source solutions don't exist because no single developer wants to deal with driver signing.

On Linux, you can do this a number of ways:

- the evdi kernel module can create fake video outputs (I'm currently working on a CLI for it - stay tuned)

- xrandr can force-enable disconnected video outputs (I believe even if you've used up all the CRTCs)

- xrandr with Intel drivers can create virtual screens

- swaywm has a config option to make virtual screens
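The xrandr routes above, sketched roughly. Output names like VIRTUAL1 and eDP-1 vary per machine, and VIRTUAL1 typically only appears when the Intel driver's VirtualHeads option is enabled:

```shell
# generate a modeline for the size you want and register it;
# sed strips the leading "Modeline" and tr drops the quotes gtf prints
MODE=$(gtf 1680 1050 60 | sed -n 's/^ *Modeline *//p' | tr -d '"')
xrandr --newmode $MODE   # the first word of $MODE is the mode name

# attach the mode to the virtual (or disconnected) output and enable it
xrandr --addmode VIRTUAL1 1680x1050_60.00
xrandr --output VIRTUAL1 --mode 1680x1050_60.00 --right-of eDP-1
```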


Thanks for sharing! Is driver signing that painful?


Well to start, it costs around $500 a year [0].

This does seem to be changing [1], but I haven't looked into what the new procedure is/will be.

[0] https://www.digicert.com/code-signing/driver-signing-certifi...

[1] https://docs.microsoft.com/en-us/windows-hardware/drivers/in...


I have a 2016 iMac which has an amazing screen, but sadly Apple removed Target Display Mode in 2012, which makes it useless for me. It has been collecting dust, since I can't stand macOS except when I really have to use it (mostly some sound production things at work).

I am very much looking forward to trying this.


I can't fully vouch for this setup, but there are some dirt-cheap HDMI-to-USB capture devices on Amazon now (got mine for $15). You can run HDMI out from your main machine into the iMac's USB, capture the input with OBS, and run it full screen. A little hacky, but it should work.

I've been using one to pipe an old Android phone's camera to a virtual cam on my laptop.


Latency could be a slight issue though.

The display underneath should be connected via LVDS, so it should be relatively easy to give it native inputs.


This is amazing. Thank you.



The hardcore option (within my ability to conceive of, possibly beyond my ability to execute):

https://www.ifixit.com/Answers/View/117066/Can+I+Mod+an+earl...

This will involve some kind of DisplayPort-to-LVDS board (or HDMI/DVI/VGA-to-LVDS). LVDS is the type of input on LCD panels.

If "amazing screen" means retina, one question is whether there is even a board available that can output high enough resolution for good results.

This page seems to have good info: https://jared.geek.nz/2015/apr/driving-fpdlink-displays

Anyway, the advantage would be you can plug directly into it and get full performance (no lag, full bandwidth) because you've basically converted it into a monitor.


You could also try Shells [1] to breathe life into an old computer.

[1] https://shells.com


Why not dual boot Windows or Linux? VMs work fine too; I've been using them for a decade now.


Because I have a Linux workstation and I want to have 2 screens. The iMac screen is waaay better than my main screen, which is why I want to use it. In daylight, those extra 150 cd/m2 really help


Not sure if this helps the dual-screen desire, but to make more use of it, couldn't you install linux directly on the iMac? I did that (install pop os) recently with a Macbook Air 2012, and it works flawlessly as far as I can see. All special keyboard buttons, sound, sleep/wake, battery, trackpad (minus gestures), etc. etc.


If you want to go further down the rabbit hole, here is someone tinkering to get Target Display Mode and Linux to work together:

https://floe.butterbrot.org/matrix/hacking/tdm/


On Linux, I've used https://github.com/rhofour/evdi-vnc to make a secondary screen that you can connect to with any VNC client.


Boy would this comment have saved me a lot of time had it been posted on the original discussion a couple of days back!

I did eventually find that project, but only after having written what is essentially the non-VNC half (creating and managing evdi displays) of it from scratch...


See also Vysor: https://github.com/koush/vysor.io by the creator of ClockworkMod.

It streams Android and iOS UI to desktops with actionable controls.


How's the latency on that? I've been looking for a good "dock" solution for my phone, but as its USB-C port is only USB 2, it doesn't support video out while charging.


Pretty good. I used it for a couple of years doing mobile development from a Windows machine and it was very responsive. The setup was a bit odd (licensing issues) and I ended up switching to scrcpy which is free and also very good:

https://github.com/Genymobile/scrcpy
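For reference, basic scrcpy use is a single command over adb; a couple of common knobs shown below (flag spellings may differ across scrcpy versions, so check `scrcpy --help`):

```shell
# mirror a USB-connected Android device (USB debugging must be enabled)
scrcpy

# cap resolution and frame rate to cut latency, and keep the device awake
scrcpy -m 1280 --max-fps 30 --stay-awake
```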


Thank you very, very much for putting an architecture and state diagram in the README.md. Nice work!


Came here to say the same. If everybody included just a single high level picture like that, you’d give newcomers a real head start in understanding the code.


That is exactly what I think when I see any project on GitHub without such a diagram. I would contribute to more projects if they simply had a brief diagram like the one I made :)


Welcome! :)


Thank you all! Tons of great ideas; now I'm scratching my head thinking about how to fit them into a simple API. Here is what I think a virtual display API should look like: https://github.com/pavlobu/deskreen/tree/master/drivers


Btw, deskreen has been shared enough times now to become popular on "TypeScript LibHunt" too https://www.libhunt.com/lang/typescript (disclosure: LibHunt founder).

Definitely seems like a helpful project. The stars on GitHub have jumped from 1 to 1,900+ in just one week! Enjoy the ride and thanks for open-sourcing your work.


Hey, that is great news! Thanks for sharing it! :D


Since Android 10 has a desktop mode, it would be nice to have virtual displays there as well. On rooted phones it shouldn't be hard to get such a driver into the kernel. Android distros like LineageOS would probably be more than happy to incorporate such a solution.

Another thing is Linux: drivers for virtual screens already exist; check xvfb and xorg-video-dummy.
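A minimal sketch of the Xvfb route (display number, geometry, and port are examples, and x11vnc is just one of several ways to expose the virtual screen):

```shell
# start an off-screen X server
Xvfb :1 -screen 0 1920x1080x24 &

# run something on the virtual display
DISPLAY=:1 xterm &

# share it over VNC so another device (or a browser VNC client) can show it
x11vnc -display :1 -rfbport 5901 -forever
```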


Does this add any sort of touch capacity to the OS if the input device has touch screen?

In other words, can I use this app to draw from a tablet to something like Gimp on my main computer?


If you're on Linux, Weylus will let you use your tablet as an input device for Gimp.


Hi, this may be added in the future releases depending on how we collaborate on making new cool features. Cheers


No, it is just for display.


You don't need Deskreen, you can extend your Windows desktop to a second screen on any device easily by using: Zonescreen (https://zoneos.com/zonescreen/) or TightVNC with DFMirage Driver (https://www.tightvnc.com/download.php). For an in-browser VNC client there are: noVNC (https://novnc.com/) or Apache Guacamole (https://guacamole.apache.org/) or ThinVNC (https://sourceforge.net/projects/thinvnc/). As you can see, there are already many solutions for this use case. I also have made a VNC-like remote screen using Kindle 3 (https://github.com/niutech/kindle-vnc).


hey, I have this awkward feeling since my original post got less hype on Hacker News than the current one :) my original post: https://news.ycombinator.com/item?id=25820533


Don't let it stress 'ya. Happens all the time here :)


Haha no problem. I'll not post during working days here anymore :D only weekends from now on


I think it’s not only timing; it’s how HN works. Usually you need someone with “good karma” to submit and upvote it. Otherwise, good luck.


Alright, will keep a note on that ^



I did not expect this kind of performance. Neat!


Hey thanks! Enjoy!


“Dummy Display Plug” sounds like something from Neon Genesis Evangelion. :)


Baka! You're the dummy, Shinji! Now get in the robot!


You could also use the built-in browser screen share (instead of an Electron app).


It usually requires internet access: due to how WebRTC works, signals need to be exchanged through a server somewhere on the internet. Deskreen solves this problem by running a local signaling server, so you don't need internet access.


I corrected some spelling mistakes.

https://github.com/pavlobu/deskreen/pull/34


Anyone knows how to have the same functionality using X window system?


Looks like Weylus is what you're looking for:

https://github.com/H-M-H/Weylus


https://xpra.org/trac/wiki/Clients/HTML5 this seems like a start. Never tried it though.


For those on Mac, there is Sidecar to use your iPad as secondary display for Mac (https://support.apple.com/en-us/HT210380).

So, combined with @gambiting saying something similar exists on Windows, I'm struggling to see the utility of Deskreen, given the two main OSes already have such functionality built in.

I guess cross-platform might be the only use ?


To me, this project seems much more useful than these platform specific apps. ANY screen with a browser means much more compatibility than iPad only for the second screen.


Sidecar is only possible if your Mac and iPad are fairly new models. Also, it doesn't allow one to use e.g. another MacBook or iMac as a screen.


I use YAM display[1] with a 2012 macbook and an ipad mini from 2015, which works great!

I don't think you can use another mac as a second screen though.

[1]: https://www.yamdisplay.com/


Also check out Luna Display if you want to use another Mac as a display: https://astropad.com/product/lunadisplay/


What about my Mac and my Android tablet?


Is this the correct tool if you are trying to use a cheap tablet as a wireless display for a raspberry pi?

Edit: I don't think this would be a good solution. It mentions getting a qr code from the host machine and approving the connection inside the application. These things would be really hard to do on a pi with no screen over an ssh connection.


It might be super interesting if someone could make a native Android/iOS app for this since mobile web browsers can't do "real" full screen. Would be useful for turning spare tablets into displays and also perhaps for quickly checking color calibrations of photo adjustments on mobile displays.


What are people using their second screens for?

And why not use a virtual workspace manager instead?


When I'm coding, I generally have code on one, and the other has a browser with documentation, notes, issue tracker, or just used for searches etc. Usually that monitor is split and also has chat visible.

When I'm testing/debugging, one monitor might be the web app I'm working on, the other is the code I'm stepping through. Or I'll be tailing a log file or watching something in a database while using the app.

I do lots of client/server stuff, so sometimes I have a monitor showing the server-side (web UI and/or logs) and the other showing client-side (cli, logs, and/or local UI). The key useful thing is seeing the client do an action and send something to the server, and seeing the server instantly react: that's not possible when you can't see everything.

In some cases, I'm debugging remotely, and a monitor will be partly or completely dedicated to ssh/rdp/vnc to the remote system(s), with the other used for browser or cli app I'm testing. Usually chat, documentation, source, issue tracker is mixed in there too.

There's a couple things I work on that have long running builds (10+ minutes) or integration tests that takes a bit over 30 mins. Both are long enough that I'll do something else while waiting and suddenly 2 hours pass before I remember, so I like keeping the build status page visible somewhere to avoid that.

Even for non-work/coding sometimes I just have YouTube or Netflix open (either 1/4 window or full-screen) while I'm browsing the web on another.

To me, working on my laptop is doable, but feels like having one hand tied behind my back when compared to working with multiple monitors and a proper keyboard and mouse.


I have 4 screens on my desk. From left to right:

Vertical oriented 1080p: chat windows, sometimes replaced by multiple command line windows.

Horizontal 4K: Emacs, Visual Studio Code, Browser with repositories/jira, command line windows.

Horizontal 4K: Browser with documentation, secondary VS Code windows, Outlook, Teams.

Macbook Pro screen: Finder, calculator, more command line windows.

Most of my "main" work takes place in the 2 horizontal 4K screens. The other 2 screens are "secondary" information. Having that much screen real estate allows me to more easily collect and arrange the information I need to do my job.

If I could change anything, it would be getting a 24 inch 4K screen to replace the vertical one, and having a higher refresh rate than 60Hz on all my screens.

I hope that one day 8K screens become affordable, because I'd love even sharper text. Reading on the vertical 1080p screen seems fuzzy compared to the 4K screens. (Which is definitely a "first world problem"!!)


Sometimes I feel like I'm in a different world than everyone else. I'm really happy with my 1080p devices, and upgrading them to a higher resolution would feel like a waste of resources to me (I don't have especially bad eyesight, but on 1080p things are already small enough for my taste)

(Of course I'm talking only about computer monitor, for very big TV screens or for screens that I have 10cm from my nose it's a different story)


WFH with an old Apple 20" screen as my only display: 1680 x 1050. With multiple workspaces and a tiling window manager, I find it completely usable. At the office I had two screens, but I don't really miss them.


Just got (as in, 2-3 hours ago) a new 4k display (32") to add to my 2x22" setup. My eyes are already happier. I'm using windows with scale set to 150% (for the 4k display) and 100% for the other 2.


I use 200% for my 4K screens, but I'm also older (late 40s) with poor eyesight.


For me personally, it is only really noticeable when I shift my eyes from one of the middle 4K screens to the 1080p screen.

I am a little picky about fonts and their display though. The higher resolution the screen, the better. I have my Emacs configuration using a bunch of different fonts and sizes/weights/etc for my Org-mode, Terraform, TypeScript, and other editing. For example, when a todo item is put into "in progress" the heading is slightly larger and bolder, and when that item is complete, marking it "done" changes the text to italic, extra-light, and grey to reduce its visibility.

Time spent messing around with fonts is definitely an expression of ADHD and active procrastination, but I do get a pleasing effect from it! :)


I find that I am more sensitive to low resolution when I don’t have my contacts in since it takes a blurry image and makes it blurrier.


You must feel seriously under-equipped when working anywhere else than your desk ;)


I sometimes have to rough it with only a single external screen! Once I had to work on only the laptop by itself. The Horror!

;)

But, more seriously since this is HN: While the extra screens are great, as is the mouse and mechanical keyboard, it's not much of a hassle at all to work in a different location. I think the extra screens, desk space, peripherals, all go towards making me more comfortable rather than efficient. Which for me, rocking the oh-so-annoying ADHD along with nerve and back pain, means I can focus on work for that much longer in one session.

I should be working from home at least until mid 2021, hopefully much longer or even permanently. A good working environment can really improve your mental health at a time when, in the USA especially, things are incredibly stressful.


Hello from another ADHD sufferer. Interestingly though I fall on the other side of the remote work issue - I cannot wait to return to the office. My brain is quite stubborn about categorizing spaces; my computer room at home is where I do some side project programming and a fair amount of gaming. Having to try to recategorize it as a working space has proven impossible, and I don't really have any other space in this house, with my wife also working remotely.

Besides, more generally the drive to the office helps put me in work mode. The whole office building is a place where work happens. Then the drive home helps put me in leisure mode. Without those neatly coded space/time contexts I have been seriously struggling.

Everybody that keeps talking about how the world is going remote has been giving me a fair amount of anxiety. If I lose my office I'm not sure how I'm going to function. Besides, my coworkers and I miss seeing each other and being able to work together in person. We don't do much pair programming, but when we do, trying to do so over screen share has proven to be significantly less productive.

But then again, I am lucky to have an amazing team of people who I genuinely enjoy being around, and who are all very respectful of quiet time to work when needed. I'm also definitely an introvert, but maybe less so than the average dev... I miss social contact!


Because I can't see what's going on on a virtual desktop out of the corner of my eye? Because I can't easily play a game on one virtual desktop and have a browser open to glance at a guide, or a discord server or ...

I don't know why you wouldn't want two screens, personally. I've worked with three before and found them useful. Virtual desktops never felt natural.

The closest I came to adopting them really was during the "compiz" era, using the desktop cube effect. That metaphor seemed to play into my brain's spatial awareness quite well. But still not quite as well as a separate screen.


> The closest I came to adopting them really was during the "compiz" era, using the desktop cube effect. That metaphor seemed to play into my brain's spatial awareness quite well. But still not quite as well as a separate screen.

This makes me wonder how your brain can handle multiple browser tabs ;)


For me personally (not who you responded to), honestly not very well. I'm always losing track of which tabs are which. When the tabs are simple content it's mostly okay; mentally it's just like having a tabbed notebook. But if the tabs are interaction- or app-centric, my brain begins to slowly jam up.

My ADHD almost certainly takes a lot of the blame for that.


Honestly I have limits with those too. Any more than fit across the screen and I start to lose track. I find it mind boggling when people say they have dozens and dozens of tabs open at a time.


Definitely. I typically have browsers open in two or three different workspaces, for different purposes and with different profiles. But within any browser session, I never have more than half-a-dozen open tabs. I can't really mentally manage more than that.


That's like asking, what do you use your 3-room apartment for? Can you not just do with 1 room and a large cupboard?


Well, I wouldn't own a 3-room apartment if I could change those rooms with a keystroke and effectively own a 9+ room apartment.


But your real estate doesn’t change, you just redecorate your 1-room apartment each time you want to do something different


I think that having multiple screens might use less cognitive resources than having virtual workspaces, since it is very natural to move your head / mouse whereas the workspace manager is a more high level construct.


When I do purchasing for my ecommerce business I keep a spreadsheet on one screen and a browser on the other. Occasionally I do it from my laptop with one screen, and it takes longer and I make more mistakes because the spreadsheet is huge and there is too much cognitive overhead when I have to keep flipping between windows/virtual desktops.

Also when coding. I keep docs and messaging apps on one screen and I can keep the other screen clean with only code. It really is faster and easier with two screens.


I've seen this question a few times and have been looking for hard sells, especially because I spent quite a few workdays during summer in a hammock without a second screen.

I've found about two hard sells so far:

Online presentations and demos to people. For example, showing some slides to some people, with some jumps to code or terminals. In this case, it's extremely valuable to have the call with webcams open on a second monitor so I can keep tabs on the expressions of the audience in order to adjust tempo. Switching the contents of the presentation screen / shared screen without warning is jarring and confusing to the audience. And commonly used communication programs do not support sharing one virtual desktop easily, only screens and/or windows.

And sometimes, it is valuable to be able to display more information at once. During larger outages, it helps to have key metrics of the system on one screen while poking the system on the other. This can be done with virtual desktops, yes, but it has less overall cognitive overhead for me to just have a stable screen with the monitoring data and/or logs available instead of constantly flipping between desktops.

And again, during screen sharing, people hate it and get confused if I flip the main screen around too much between monitoring data and shells. So having a work screen to share with people on a call and another screen for information helps reduce that.

A lot of other use cases are convenient with more screen space, but good virtual screens with hotkeys work almost as well, I've found.


Virtual desktops are useful, but often a physical screen is better. For example, it's much more convenient to glance to and fro from code and logs/stack trace.


But keeping track of where the mouse pointer is is much easier using a virtual desktop.


I think, for most, the advantages of a second physical monitor outweigh any mouse tracking disadvantages!


This is why cursors on text terminals would blink.

But I'm not sure making a mouse pointer blink is the right thing to do.

The whole Windows-Icons-Menus-Pointer environment was developed when 640x400 was a high resolution and 22" monitors were considered huge. So it was easy to pick out the pointer at rest, but I can see how you can get lost on basically a 4K-resolution TV.


You can set up accessibility features to help with this. Make it larger or configure it so shaking it makes it much larger for a short time.


The answer to the second question is obvious: because you can't see both at once.

As for the use cases, here are my most common layouts (3 monitors - old 4:3, 1440p, vertical 1080p):

- preview/debugger | code | docs (programming)

- slides | video call | chat (online presentations)

- gallery | photo | same photo but 1:1 zoom (photo editing)

- effect controls | timeline,preview,more controls | clip bins (video editing)

- chat | game | game wiki


Guess I'll jump in too :). I have two 27" 4k monitors, the one directly in front holds work, the one off to the left has docs, chat, email, etc. I also use virtual desktops and I could arrange all this stuff with those, but having to switch back and forth is a minor cognitive hurdle I choose to avoid.


I tend to run both. The second screen lets me have source/editor on one and reference material on the other. My daily driver is a three-screen setup (laptop display and two external monitors). There's usually some combination of logs, app UI, source, and reference material across all three so I can switch between them without having to think about which virtual display they are on. In support situations, I'll have a screen share running on one, chat on another, and likely reference material on a third. You could do it with a single huge display, but it would need to be huge and have a way to subdivide it into reasonable slices for easy window snapping. Plus, you aren't going to get that in anything that can be used as a laptop ...


The web browser's inspector is my main use at the moment. The M1 MBP display is simply too small to have the inspector and the browser comfortably on screen simultaneously, and it's not possible to flick between virtual desktops (or application windows) without losing the context (some targeting stuff is hover-based, for example). Plus it's annoying tweaking a CSS style, flicking back and forth all the time instead of seeing your change reflected as you make it.


Some of us use multiple screens and virtual workspaces, it's not mutually exclusive.


I grew up on programming on a Mac SE, so layering feels natural to me versus needing to have everything on screen at once. That said, a 26” 4K monitor “above” (with the resolution increased a bit) my 15” MacBook Pro has been a good setup in WFH mode.


Looking at the Matrix. Encoded, of course, because there's way too much information and the image translators only work for the construct program.


It's more screen real estate. Where it's useful depends on what you are doing.

I often have one screen for "actual" work and the other screen for chat/mail/...; then checking for something going on there isn't a full context switch, but a quick look.

Sometimes in development a second screen can be useful to have a screen full of different code files and a second screen for documentation, the app, logs, or whatever.

During video chat I have the faces of the others on one screen and notes and other things on the other screen, or during social videoconfs during covid I have faces on one screen and a (board) game we are playing on the second ...

It always depends on what you are doing. Things one can do are many.


Is this a TeamViewer/AnyDesk level of interactivity or just a Twitch level of interactivity from 2nd screen? As in 2nd screen can be used to control the main source screen or just to watch what happens there?


Works the same as an additional monitor, which is what it is. The dummy plug creates the second monitor through native support in the OS. Deskreen connects your additional device to it via a web browser. I installed it on Ubuntu and had it working in two minutes, using my iPad Pro 12.9" as a second monitor ;-) It looks good, works well, and has decent latency.


Can that second monitor be controlled with the mouse and keyboard from the device where the browser is running?


I tried and I couldn't.


Indeed, I tried it myself just now. But there was something on the wishlist of features about remote control. It should be possible to capture mouse clicks and most keyboard input in the browser and send it back; I don't know how much effort that is though.
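For what it's worth, the browser-side half of that wishlist item could be sketched roughly like this. This is hypothetical code, not from the Deskreen repo; `dataChannel` is assumed to be an already-open WebRTC data channel back to the host:

```javascript
// Hypothetical sketch (not Deskreen's actual code): serialize browser input
// events into JSON messages that a host could replay as mouse/keyboard input.
function encodeInputEvent(ev) {
  if (ev.type === 'click') {
    return JSON.stringify({ kind: 'click', x: ev.clientX, y: ev.clientY });
  }
  if (ev.type === 'keydown') {
    return JSON.stringify({ kind: 'key', key: ev.key });
  }
  return null; // ignore everything else
}

// In the viewer page, handlers like these would forward the events
// over the (assumed) data channel:
//   video.addEventListener('click', ev => dataChannel.send(encodeInputEvent(ev)));
//   window.addEventListener('keydown', ev => dataChannel.send(encodeInputEvent(ev)));
```

The hard part is presumably the host side (injecting the received events into the OS), which is why the feature is still on the wishlist.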


Twitch does the same. Run Twitch in a browser and broadcast your main screen with OBS. Then any device can see your main screen.


The main issue I see here is the interaction on the "customer" side. There are too many buttons to click.


I've tried this the past three times it's made it to the front page. Using both an iPhone and iPad with Safari, I was unable to even connect to the server. It acts like it's loading, then fails.


Hi, that's not normal; other users don't have these kinds of things happening. It may be a firewall on your desktop, your wifi, or even your phone.

Also you need to have an access to private networks if you are on Windows.

Cheers


Thank you for replying! I've actually been using and enjoying your product for the past day or so.

"Also you need to have an access to private networks if you are on Windows."

This was the problem. Being the bonehead that I am, my home network profile was accidentally set to Public network, rather than private.


I really want something that's like spacedesk (another one of these) but without the HDMI dummy plug; spacedesk is glitchy, but it doesn't need a plug and can support many devices.


This demo shows the latency at the end of the video: https://youtu.be/adY2SnGT358 (very small latency)


Hi, I have to mention there will be latency if you use slow wifi or slow/old devices. Cheers


Amazing accomplishment. Does anyone know if it works on Kindle Fires?


Hi, thanks! It can be anything that has LAN or wifi and runs a web browser that supports WebRTC.


I guess I'll have to get a dummy display connector for my current video card. I'm still wondering how Duet Display is able to do so without it.


Yeah, me too. They don't disclose it because their whole business runs on this small ad-hoc hack :)


Sucks that to get a non-mirrored screen you'd currently need a fake dongle. Kinda takes away a lot of the value proposition.


Hi. It's open source, so expectations can be lower than for commercial projects. It's made on pure enthusiasm. And we are figuring out how to hack the OSes to get rid of dummy display plugs (just look at the repo README.md). So stay tuned; we will be posting updates on that in future releases. Cheers


Neat idea. Do you have a video demo somewhere?



Would it be possible to run a 2nd screen as small PIP on the desktop?


Requires a dummy HDMI dongle, but it's far cheaper than Duet.


Not required for mirroring, but if you want to use it as a second, new screen then it is!


Is it possible to output the image buffer to a file or stream it over the internet? I have favorited this submission.


It's based on WebRTC and is way more flexible than a file stream :)
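For the file-output question: a captured screen stream can be saved with the standard MediaRecorder API. A minimal sketch of my own (hypothetical, not Deskreen code; the commented part assumes a browser context):

```javascript
// Hypothetical sketch (not Deskreen's actual code): record a screen-capture
// stream to blobs that can be saved as a file, via the MediaRecorder API.

// Pure helper: pick the first container/codec the browser reports as supported.
function pickMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) return type;
  }
  return null; // nothing supported
}

// In the browser this would look roughly like:
//   const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
//   const mimeType = pickMimeType(
//     ['video/webm;codecs=vp9', 'video/webm'],
//     t => MediaRecorder.isTypeSupported(t));
//   const chunks = [];
//   const rec = new MediaRecorder(stream, { mimeType });
//   rec.ondataavailable = e => chunks.push(e.data);
//   rec.start(1000); // emit a chunk every second
//   // later: new Blob(chunks, { type: mimeType }) can be downloaded as a file
```

Streaming the same blobs over the internet would additionally need a transport (e.g. a WebSocket or a WebRTC data channel), which is outside this sketch.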



