In WSL1, running "wsl git status" on a moderately sized repo on an NTFS (Windows side) drive or SMB file share is nearly instantaneous.
In WSL2, running the same command takes over 30 seconds. WSL2 is a massive hit to the seamless experience between the two operating systems, with filesystem performance from Linux to Windows files orders of magnitude worse. Yes, unzipping tarballs, file manipulation, and stat syscalls are cheaper now on the Linux side. The performance loss, however, is staggering for the files I had in C:\
Don't even get me started on how long an npm install took.
One of the truly wondrous things about WSL1 was the ability to do something like this in a PowerShell window:
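For example (repo path and pattern invented, but this is the shape of it):

PS C:\src\myrepo> wsl grep -rn "TODO" .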
Now performance across the OS boundary is so bad, I wouldn't even think of using "wsl grep" in my C drive. Or "wsl npm install" or "wsl npm run test" or any of that.
It's very depressing, because WSL1 is so, so promising and is so close to feature parity. WSL2 should definitely stick around and has its use cases; for Docker it's unparalleled. But for daily-driver mixed-OS use, WSL2 has made me very unhappy. I think I'll be converting my various WSL distributions back to WSL1, because the performance hit was too much: it was absolutely unbearable to even run simple commands against my Windows side.
My understanding was that there were some hard-to-impossible problems to solve to really accelerate the filesystem access from the Linux side under WSL1.
That meant that people doing disk-intensive workloads on the Linux side noticed a big slowdown compared to a native Linux system - certainly e.g. running a big test suite, or a git checkout felt really incredibly slow.
The switch to a VM flipped this relationship round - so now the formerly native / NTFS side is the second-class citizen, but you get the expected performance when putting your files on the "Linux side". For me (doing fs-intensive Rails development), this was a big win.
WSL2 will also quite happily gobble so much memory that Windows slows to a crawl (especially when filling Linux's disk buffers on file copies). That seemed like an odd default; you just have to pop a .wslconfig in to restrict its usage.
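(For reference, that file lives at %UserProfile%\.wslconfig; the caps below are just example numbers, not a recommendation:)

[wsl2]
memory=8GB
swap=2GB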
I agree with the other posters here that the WSL1 approach seemed far more elegant, and probably the only way to "not see the joins" - with WSL2 we're worrying about filesystem boundaries _and_ memory now, probably forever. So I hope someone is still working on that nice seamless syscall layer for a future WSL3.
Long bet: WSL3 will just be Microsoft dropping the NT kernel altogether and replacing it with the opposite compatibility layer (like Wine) running on top of the Linux kernel.
It probably won't happen anytime soon, but to me it looks pretty inevitable in the long run: because of Azure they already spend tons of engineering time on the Linux kernel nowadays, and maintaining their own proprietary kernel won't make much economic sense for long, exactly like maintaining their own browser engine.
That was not my experience with WSL1; I was regularly running into unimplemented features. Some examples: the Z3 solver used clock_gettime for timeouts, and their specific usage was broken in WSL1 so you'd get random failures depending on how long the solve took. And don't get me started on running Chromium.
It felt like WSL1 would be running into the long tail of compatibility issues for years.
But I'll admit that I don't try to use it the way you do, I run WSL precisely so I'll never have to launch cmd.exe.
Some useful network-related stuff wasn't implemented in WSL1 either (NETLINK_ROUTE\RTM_GETROUTE, the AF_PACKET family). This meant even good ol' nmap was out.
Clearly the best solution is for Microsoft to (a) write a proper ext4 driver for Windows and (b) find some way of embedding SIDs into ext4, then you could just format the drive as ext4, boot off it, and have the improved performance.
(This is mostly a joke, but the performance of NTFS for certain operations has always been abysmal, and having a virus scanner injecting itself into all the operations only makes it worse.)
AFAIK the main problem is that Unix's file permissions do not cover Windows' permission model. That would be tolerable on a data partition, but a system partition is going to use all kinds of very particular permission setups on system binaries etc.
You might be able to model that stuff as xattr, but then it could be problematic to mount that ext4 partition into Linux because applications might be copying files without respecting the xattrs.
>AFAIK the main problem is that Unix's file permissions do not cover Windows' permission model.
Well, since Microsoft has been borrowing more and more ideas from the Linux ecosystem, it would not surprise me that a Windows 10 successor would include some kind of compatibility layers for different file systems.
Why don't they just replace Windows with their own Linux distro? :D WSL2 cannibalizes Windows from the inside out, and all that's left is Sphere. Seems like the most efficient solution.
I wonder if this is like the IBM PC, which was a GOOD THING invented by a sort of an offshoot of IBM culture. Then IBM higher-ups stepped in and tried to control the platform (PS/2, OS/2, microchannel, etc)
WSL is attracting people to windows. But the endgame isn't to lose them to linux. So they have to tie it into windows more. But if they make it too slow and bloated they might lose.
Much of the performance problem comes from layers on top of NTFS itself- it's not just the virus scanner. Ext4 might be faster but I doubt it would be enough to ditch WSL2 for those use cases that need it.
Also, some of the "performance problems" are simply different access models. Windows and NTFS try to provide some database-like ACID characteristics, including transactions at the level of batches of file updates with commit/rollback support. Ext4 and Linux (intentionally) make few such guarantees, so it shouldn't be surprising that they have very different performance profiles, just as you might expect between a NoSQL database that makes no ACID guarantees and an SQL database with multiple types of locks and several types of transaction behaviors.
I feel like they don't advertise this enough. I was under the impression that it's a one-way street from WSL1 to WSL2. Since WSL2 is not strictly better than WSL1, it's nice to be able to convert and pick the trade-offs you want.
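For reference, it's one command in either direction ("Ubuntu" standing in for whatever `wsl -l -v` lists on your machine):

wsl --set-version Ubuntu 1
wsl --set-version Ubuntu 2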
Yeah, the mistake of using numbers instead of names gives the impression that 2 is strictly better than 1 and that migrations are unidirectional upgrades.
Even just letters like WSL-A and WSL-B might have given a better impression.
Unfortunately Git on Windows is also extremely slow, especially using Magit in Emacs, which makes a lot of git calls. It works much, much faster for me when it sshes to a Linux VM for each call than when running natively on Windows.
Git is not slow for me. Don't know about Emacs, but I'm using git from the command line and from IDEA, and it works just fine - instantly for ordinary tasks. Committing 1000+ files takes a few seconds.
Given the fact that Git is used for Windows development with its monster monorepo, I think that something's wrong with your setup rather than with Git on Windows in general.
Git itself is decent, the problem is that Magit calls git a lot of times for a single GUI action. For some things, it can call git 5-10 times for a single key press. If every git invocation is around 1 second, that becomes a noticeable delay...
I use Emacs in WSL, along with a suite of other tools like rust-analyzer, and the experience is _lightyears_ beyond trying to run those tools under regular Windows.
I think the popular Windows development tools will get support for remote development with WSL. JetBrains is working on it for IntelliJ and I can't imagine Visual Studio will be far behind.
WSL2 is really not designed for using Linux tools on your NTFS-based filesystem. Store everything on the WSL filesystem, that works perfectly. If you need GUI tools try VcXsrv.
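The one extra step GUI tools need under WSL2 is pointing DISPLAY at the Windows host, since X now crosses the VM boundary; the usual recipe (assuming VcXsrv is running on the Windows side with access control disabled):

$ export DISPLAY=$(awk '/nameserver/ {print $2}' /etc/resolv.conf):0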
What do you propose if you want to use a windows program to edit those files?
For example I use intellij on Windows but want to compile and test on the Linux machine. If it takes 30 seconds longer than wsl 1, why would I bother changing?
What is the actual point of wsl if not for the cross compatible filesystems?
WSL never really got the cross compatible filesystems working though: I eventually found myself giving up on it and just using cygwin. I honestly don't understand why WSL gets so much attention when cygwin is just so much more compatible with everything and includes essentially every package I have ever wanted?
I've wondered the same. A lot of it is PR from Microsoft, including this post here probably.
There is a Reddit sub for WSL (/r/bashonubuntuonwindows) and it's apparent that MS has PR people on it pimping each new release. Reminding these mostly new developers that Cygwin has been around (with all its problems) for years, brings on a flood of downvotes.
And now it's exactly the same with WSL2; when someone has an issue with some esoteric networking feature that is still not supported on WSL2 beta versions, I'll often remind them that VMWare Player and VirtualBox have been around for a decade and will solve their problem, while also including all sorts of nice features like shared folders, drag & drop, copy and paste integration, etc. But they don't want to hear it. They've been fed so much marketing that WSL and WSL2 are really something incredible...
That's not my experience with VS Code on WSL 2. I have been using it for months using the remote extension, hosting my git repos in the Ubuntu subsystem, it works like a charm and feels very responsive.
Maybe wait for an IDE update that properly handles WSL 2?
Then there is no WSL 2 handling in your editor. VSCode remote extension works in the same way for either WSL 2, or a full-fledged Linux VM, or even a remote Linux server.
As someone who runs a Linux VM side by side at all times, I really don't get WSL 2.
VSCode made some unique design choices which enable them to support connecting to any Linux server, VM or not. In contrast, these design choices may not be possible for other IDEs. So, because WSL 2 is, effectively, a Linux VM, supporting it in editor is harder than supporting WSL 1.
As for "I really don't get" part, I wanted to say that WSL 2 sounds like a regression to me, WSL 1 makes it possible to achieve something (namely, local-ish cross-"os" net/process/file-system integration) that is entirely impossible otherwise, while WSL 2 is a nice packaged-up solution but functionally does not do more than people already get (Hyper-V).
One thing that's very valuable to me in WSL (both 1 and 2) is the automagic network settings that make ports available between systems - so if I start listening on 127.0.0.1:1234 in Linux, I can connect to that on Windows and vice versa.
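A quick way to see it in action, using that same port (curl.exe rather than plain curl on the Windows side, to dodge the PowerShell alias):

$ python3 -m http.server 1234        # inside WSL
PS> curl.exe http://127.0.0.1:1234/  # from a Windows shell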
For wsl2, Vscode has integrations that let you do exactly that. I use python primarily and it lets you use the python interpreter installed on wsl. I assume other IDEs would have something similar or at least let you develop using a remote machine, but in this case you would configure it to point at your VM instead.
As long as your processes and files live in the wsl vm, it is extremely fast. I'd rather use the wsl shell anyway, so all of my files are in the vm.
The problem is that special integration is required.
Personally I don't like VS Code, I too use IntelliJ IDEA, which will probably end up having support, but it didn't last time I tried.
On my Macbook I also use Emacs and GUI versus terminal shouldn't be an issue. I'd want Emacs from inside a WSL bash, I'd want it from the Windows GUI too. So that's going to be a headache.
Since version 1903, the proper way to access Linux files for writing is to invoke explorer.exe from within WSL. A transparent 9P mount is created for the working directory and files are made accessible through a regular Explorer window.
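Concretely, from any directory inside WSL:

$ explorer.exe .

and an Explorer window opens on the current working directory via that 9P mount.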
This has been changed. The WSL 2 VM now runs a 9p file server, and the Windows side mounts it at \\wsl$. Of course, performance is degraded. It would certainly take longer for IntelliJ to index your project.
I have my projects in WSL and IDE (jetbrains) in Windows. Works fine, obviously IDE file system responsiveness is lower than native but the execution / build performance of project in WSL makes up for it.
I tried this on WSL1 and it absolutely didn't work for any project larger than the typical Hello World example. Trying to use a polyglot project with a bunch of Java, Scala, Go, various plugins like DB views, etc. would grind Jetbrains on Windows to a halt as it simply couldn't sync with the project files on Linux due to slow IO.
I've used Linux VMs on Windows before - VMWare Workstation has been around for over a decade and has a lot of bells and whistles that make the experience tolerable, but again, the IO is too slow to share Windows and Linux apps between filesystems, so you're basically forced to develop 100% in the VM, IDE included. If you're locked to a Windows laptop because of your employer's IT rules, it's better than nothing, but not optimal, and I wonder why people are so excited for WSL2 when VMs with more features have been around for over a decade.
I've been trying to find a non-Apple solution for a decade, and it just doesn't exist. And as Apple has been ignoring developers and MacOS itself for the last 5 years, and Linux is still riddled with the same problems it has had for 20 years, the options for developers are becoming less and less.
Deactivating Windows Defender for the subsystem's storage folder helps somewhat, but I agree that the situation makes the WSL close to broken for many potential users.
Yeah I really don't get the desire to do unixey development on Windows. Boot up a VM and SSH into it, if you really must have Windows. It's not like you have to buy a license to use Linux. I keep struggling to understand what WSL brings to Windows. First it was a totally incomplete distro, and now it's just a fucking VM. Seems like a gimmick more than anything.
Virtualbox, HyperV, etc will all allow you to access your Windows files on the guest OS. If that doesn't work, just set up an SMB share and map it. Why all the complication? Does clicking one button to install a distro really serve anybody? Why do you want to use the fucking awful Windows Update mechanism to update your kernel? Updating the kernel in Linux is so fast and easy...
You learn so much more about Linux by running Linux. Why are we trying to abstract that away? I think it's Microsoft's desperation to keep devs from continuing to jump to MacOS and Linux.
I too have found this, with one exception: fork/exec is appallingly slow. This means ad-hoc scripts in Cygwin should be written as much as possible as pipelines and not loops; bash string functions should be used over sed, grep etc. whenever you can help it.
But with that caveat in mind, Cygwin turns Windows into an acceptable Unix for command-line purposes.
I don't think it's quite as good as a development environment when you're targeting Linux though. That's where WSL (in either form) makes a lot more sense.
Apparently[1] virtio-fs wasn't mainlined in Linux until 5.4, which was released last November. I thought it had been mainlined years ago. That helps to explain why support is lacking everywhere.
But what's the point of that? If I wanted my files to be on Linux I'd use Linux. I'm on Windows exactly because of things like this: the ability to use proper UI tools like Explorer to manage my files. How do I do that with WSL2?
It's not a network path, it's a pseudo-device. Windows uses the \\ UNC file paths for a lot more things under the hood than just network access. There's a bunch of rare device file paths that you'll get UNC paths for, and every folder path canonicalizes to at least one UNC path for multiple reasons.
Though that pseudo-device is powered by a 9p-based file server under the hood, it's not a network-accessible path; it's only available to the local system.
The trade-off between WSL1 and WSL2 (and you can have both on the same system and migrate distros both directions between the two) is mostly how often and where do you expect to need to deal with the 9p file server between your operations. In both versions Windows needs to use the 9p server to access Linux files, in WSL2 Linux also needs the 9p server to access Windows files.
At a high level it's much closer to a Win32 Namespace [1] that appears like a network path. UNC stands for Universal Naming Convention; there's no "Network" in the UNC abbreviation, as there are many namespaces other than just network paths. That's why the $ was chosen for the name: it is a valid Namespace character but not a valid system name in Windows, and they wanted to avoid the problem of people with systems named "wsl" suddenly being unreachable over the network, because Namespaces have higher priority than network paths. You could think of it as bypassing the network, but it is maybe more accurate to say that network access is a fallback of UNC paths, tried after all local Namespaces have been checked for whether they support the path.
Also, yes, the current implementation backing that namespace/path is a Plan 9-based network file server, but that's an implementation detail that could change. It seems to be handled under the covers of the Namespace a little more directly than usual network access (including avoiding a localhost "loopback"), and is probably subject to change as WSL's needs change.
I don't blame MS for giving up. Think about how complex it is to maintain a custom version of the Linux kernel which isn't a kernel but a wrapper for your very foreign OS. I'm surprised they even went that route.
My guess is the people who put it together were under the assumption that Linux would be as simple to implement as the old Windows Services for UNIX and its POSIX API.
So if accessing ntfs from wsl is now slow, you can put the files in wsl instead. The problem then, if you're using this as a dev machine, is how do you edit them in Windows? I want to use my JetBrains IDEs to edit wsl files, if that doesn't work I'd just stick with dual boot.
This was the use case I was really hoping for, also. I've been back and forth with various employers over the years, depending on their requirements - Windows only (it works for Java), Cygwin, VMware Workstation with Linux VMs, MacBook, Linux on the hardware, etc.
WSL1 was unusable as there was no way to run IntelliJ or Eclipse on Windows, and have it work on large projects sitting in the WSL filesystem - the file IO was way too slow, and the instantaneous feedback you expect from Jetbrains products just wouldn't work.
VMs on Windows would work, but again only if you were developing 100% in the Linux VM, IDE included, and just used Windows for Office and whatever else was required. But at that point, you still had to deal with all the Linux issues like ugly fonts and broken plugins, with all the problems of slow VM file IO.
Cygwin was also similar to WSL1 - it just wouldn't work for anything that required real Linux underneath.
Linux direct on the laptop works, but with all the same problems that have been around for 20 years and never seem to get fixed - broken multi-monitors, ACPI issues, driver support for Nvidia, video conferencing being too slow or unsupported, no MS Office, etc. I just don't have the time or motivation to spend hours every week babysitting a Linux laptop.
Macbooks are definitely the way to go, I'm just worried that my employer will balk at the cost of the new $3K 16" MBPs next upgrade cycle.
WSL1 was a great invention but Microsoft gave up on it, either because of the filesystem performance problems or because of the debuggers. https://github.com/microsoft/WSL/issues/2028 (lldb, rr, delve all affected). This looks like a dreaded case of the first 90% is easy, it's the second 90% that is hard. Imagine implementing a translator for a vast majority of Linux syscalls just to find certain flavors of ptrace are just not doable. I do not have insider knowledge to ascertain this happened but this would be my educated guess.
WSL2 is a VM like any other VM with an uncertain promise for better networking experience and even less certain promise for cross OS file performance which is much, much worse than WSL1 which was already abominable. https://github.com/microsoft/WSL/issues/4197#issuecomment-60...
It was a very nice dream, pity it didn't work out.
Because I am using an eGPU, Windows 10 needs to stay as the primary OS on the laptop. I bought a little fanless machine from AliExpress (with laptop-like hardware) for <$300 USD; it'll be my home Linux server. What can one do?
I guess https://www.reddit.com/r/VFIO/comments/am10z3/success_thunde... could be a solution if I wanted to go back to Linux primary but I really badly don't want to. Constant hardware headaches were par for the course -- I was solely Linux 2004-2017. I don't want to be again. If there would be a cheap remote sysadmin service... but it doesn't exist. QuadraNet will sysop a server for $39 a month, that'd be awesome for a laptop... but I have never seen anyone doing that.
I haven't used all variants of VMs but my experience with VMs is very different from WSL2. For example:
* Smooth set up. Don't have to install some large commercial 800MB MSI like VMware workstation, download some Linux image, go through partitioning of file system etc.
* Well integrated. I can open up a terminal and it acts as any other window in my system (meaning I don't get the window in a window effect as you get with a new VM).
* My file system is mapped automatically. No need to set up Shared Folders or whatever manually.
* Better startup perf. WSL starts in a second on my computer. Never had the same experience with full VMs. Even if I use something like alpine just starting VMware or VirtualBox takes a lot longer than starting WSL.
Saying it is like any other VM seems just incorrect. Saying it didn't work out seems even more misguided.
> Saying it didn't work out seems even more misguided.
That may be, but the rest of your comment seems to be unrelated to the matter at hand since you merely listed some advantages of WSL2 instead of addressing the disadvantages that are causing people trouble.
I have been using multipass very successfully the past few weeks. It's a full fat VM and has an experience very similar to that of WSL. https://multipass.run/
WSL was very cool from a pure tech standpoint, but I've never been clear what the actual use case for it was. WSL2 seems to be more along the lines of coLinux, which I felt the same way about when it was new.
I use it all the time. I develop software for Windows, but being able to use various Linux utilities and software is super convenient.
Many of them have ports to Windows, but it's just easier when it's all already available. A few days ago I needed to run some penetration-testing software, and while supposedly I should be able to download the code myself and build it on Windows, just installing it using apt is a lot easier.
Well, if you want to use multimedia and do web development on the same machine, what are you going to do? Linux support is somewhere between nonexistent and utterly broken for the first one and the same can be said for the second on Windows. So your choices are, 1. run Linux primary, put Windows in a VM 2. run Windows primary and put Linux in a VM 3. Give up and just run a separate Linux server.
What problems did you experience in web development on Windows? I worked with Linux, Mac and Windows, but I didn't have any problems on any platform with a typical modern webpack/react/angular/typescript/elm etc. stack. Even docker support with Hyper-V is okay imho.
The Linux multimedia story improves significantly if you avoid Nvidia GPU hardware. My work desktop (AMD Radeon) and laptop (Intel HD Graphics) work fine, and perform as the hardware should.
This seems like a comment from the early 2000s. I don't remember the last time I had problems with BT or sound, and I went through a dozen installations in the last 3 years (for me and others). I had to give up on Nvidia drivers, but Intel graphics serves me great.
And those were the times when I needed community help; most of the time I could get it working by pairing again or some such nonsense. It never worked reliably, in general.
Note I switched to Windows as my daily driver in 2018 January.
Canonical broke wlan and OpenGL for many of us when they decided to replace fully working closed source drivers with work-in-progress open source replacements.
So even Ubuntu isn't necessarily a guarantee of stability.
Yes. Yes, I did. At least with Arch I could keep on working, because only one of BT / printer / scanner broke on an update; most of the production bits kept working. When I ran Ubuntu, the system shattered every six months so badly I couldn't work for 2-3 days.
Been running some flavour of Debian for > 10 years. Now on my Dell XPS 13 9350. When I upgraded to Debian 10, a kernel regression broke the brcmfmac driver for my Broadcom wireless. Now I need to disable power saving on the WiFi card. Still, sometimes the WiFi card just dies. Sometimes reloading the kernel module works, sometimes only a hard reset will do.
Broadcom unfortunately has never been well supported on Linux. I've always used ThinkPads for running Linux and have never had these sorts of hardware issues (and I keep laptops for at least 8 years). You need to buy a machine with running Linux in mind (or more specifically, Debian, which has even less out-of-the-box hardware support for laptops).
There are about 5000 of us who did choose a laptop for its keyboard and pointer: it's the ThinkPad 25 Anniversary Edition. I have an SK-8855 already and also ordered the new TEX Shinobi so my unwavering stance on my laptop needing a proper keyboard+pointer might change but for now my choice of weapon is the TP25.
> Web development on Windows has been like surfing for those of us doing Java and .NET web development, since like ever.
This similitude baffles me. Do you mean "had to wear protective equipment, had to drive constantly between locations, we were knocked over every few seconds, and we risked drowning several times"...?
It means riding the top wave while enjoying the Sun, hearing the cool background music coming from the loudspeakers on the beach, and feeling like a champion.
Development. At work we rely heavily on it to get work done because having an almost-linux-like environment is much better than having to run a WAMP stack or similar silly stuff.
It's made my and my coworkers' lives so much easier compared to before, so I'm certain use cases exist. Since WSL2 runs on Hyper-V, for some of my coworkers it's not an option, since they rely on VMware and similar for other essential work.
I use Docker on macOS daily for work. I love it, and am grateful for it, but the experience is NOTHING like using Docker on Linux. Docker for macOS runs in a virtual machine, and requires dedicating RAM to the daemon. You aren’t directly using the host OS’ kernel either, which can lead to wonky behavior, especially in regards to networking (IME).
There are a number of use cases where Docker for Mac and Windows are painful compared to Linux, and to overcome those problems (like fs performance on shared volumes) people spend a lot of time building hacky solutions.
- Bind mount performance is appalling, even with the delegated/cached modes (there's an example of those flags after this list). I understand the reasons for this, but it's still a major issue for some workloads. To use PHP development as an example, where a popular framework boot-up can read hundreds of files per request (as well as writing out to caches), it's not uncommon to see 5 second response times for a page that displays "hello world" and nothing else. Thankfully there are tools like docker-sync and Mutagen which, while they're one extra thing to set up, get you back to nearly native performance.
- com.docker.hyperkit is nuts. I can have a single container idling in the background doing nothing, hear my laptop fans spin up, and know without checking that com.docker.hyperkit is using 200% CPU for no discernible reason. Restarting the daemon brings things back down...for a while.
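(For anyone who hasn't met them, the delegated/cached modes from the first point are just a suffix on the bind mount - image and container path here are arbitrary:)

$ docker run -v "$(pwd):/var/www/html:cached" php:7.4-apache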
At the office we use Linux, but since I've been working from home on my Mac I've gone back to Vagrant for as many projects as I can. Heavier and far less easy to orchestrate yes, but I've found I actually end up with better and more predictable performance (and far less time with my laptop doing double duty as a space heater).
Docker Desktop 2.1.x with WSL1 works amazingly well.
I've been running this set up for over a year and the volume performance is superb.
Flask, Phoenix, Rails and Webpack driven apps are all a fantastic experience. For example Webpack takes 150ms to compile SCSS / ES6 JS diffs for large real world projects. Web server reloads on code change are effectively instant and I get microsecond response times in some Phoenix apps in development.
This is on 6 year old hardware too and the source code isn't even sitting on an SSD (but Docker Desktop is installed on an SSD).
In all cases, everything is running in Docker through Docker Deskop and I use the Docker CLI / Docker Compose in WSL as a client to connect to Docker Desktop.
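The glue for that client setup is a single environment variable in WSL, assuming the "Expose daemon on tcp://localhost:2375 without TLS" box is ticked in Docker Desktop's settings:

$ export DOCKER_HOST=tcp://localhost:2375
$ docker ps   # now talks to Docker Desktop's daemon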
Docker runs in a VM on macOS, with significant penalties to filesystem speed. I haven't used it for a few years, but booting the large Rails app I work on was several times slower under Docker.
One of my main pain points on Windows is that the terminal app options are so crap compared to iTerm2 or even the plain Terminal app. The app itself is definitely a draw too.
None, as I stay away from them as far I can avoid them, Java and .NET are perfectly fine and IDEs have docker support in case I really need to deal with it.
Nah, you just target Windows with toolchains that support Windows natively, like I am expected to target UNIX with toolchains that support UNIX natively.
Your statement goes into both directions.
Guess why "Year of Linux Desktop" has failed to happen, and no, ChromeOS and Android aren't really GNU/Linux, the kernel is irrelevant to userspace languages and public APIs.
well, I definitely wouldn't use MS-DOS, because it doesn't support node or docker, but also because it's been obsolete since before Terminal.app existed.
NT supports process monitoring, inspection, and alteration in a manner that's already basically a superset of ptrace. As I mentioned in another comment, I can imagine some internal resistance to adding the necessary hooks for complete support eventually making WSL1 infeasible to support --- but for social infeasibility, not technical infeasibility.
Big companies that employ lots of smart people frequently seize up and become incapable of innovating because their internal parts mesh against each other and halt the whole machine. I suspect that's what happened here.
I guess it's subjective. I've been using Linux on Windows 10 pretty successfully for a year now to do some things that would otherwise be much more inconvenient.
> He further mentions that unzipping tarbars could see a 20 times performance increase.
Boy, do I love unzipping tarbars! On a more serious note, it’s great that WSL has gotten much faster, although it’s a little disappointing that they threw out the older, more interesting architecture to get it and just used a VM.
I agree wholeheartedly. I know it was probably a fool’s errand especially in a world where Docker for Windows was doing the same thing WSL2 does but worse, but WSL1’s design just satisfied me in a way that Hyper-V will not. Presumably, this also means you have to have Hyper-V enabled, crippling all other VM software. Hope the integration is impressive at least, to hopefully make up for it. I admittedly don’t run Windows these days but WSL is definitely one of the highlights of modern Windows and it’s hard to not follow its progress.
I also wonder how integration changes with sockets/networking, hardware access, Windows Firewall, etc. Last I checked, relations with Windows Firewall were strained by the lack of proper support for picoprocesses.
* And for me the nice one is the integration with VS Code: I just select which distro I want to work in, and VS Code will start it (if it isn't running), connect to it as a remote dev environment, and control it as if the VM were within VS Code. It can be done manually with VirtualBox, but VirtualBox gives me no advantages in return.
* The networking implementation makes 127.0.0.1 work in both directions, which is occasionally useful for me.
That's not a feature. And in practice it's not even relevant, because you can start your VM at Windows startup and use PuTTY to ssh into the booted system, which takes a fraction of a second.
Also, a slim CentOS image boots in, well, not one second, but something like 5 seconds. Fast enough IMO.
Last time I checked (months ago) VirtualBox just couldn't run under Hyper-V. If you Google around you'll see lots of threads, e.g. https://superuser.com/q/1208850
Apparently they claim that as of February 19, 2020, they've "Restored the ability to run VMs through Hyper-V, at the expense of performance". https://www.virtualbox.org/wiki/Changelog-6.1
Restored? That's a weird line. I don't think it ever lost it since the feature was added in 6.0 (released December 18 2018), it was just annoying to activate.
VirtualBox seems a victim of Oracle's usual apathy as a project custodian. Nearly every other VM vendor has added or accepted (Microsoft employees were even directly involved in PRs to qemu and others) performance improvements on Hyper-V.
It's almost like Oracle is trying to upsell VM servers at the expense of the day-to-day operations of the once well-regarded open source project they maintain?
I’m using hyper-v and virtualbox together, seems to be fine performance-wise. Only reason we’re using vbox is that we already have vagrant scripts set up for it and hyper-v didn’t seem to work as well as a target for vagrant.
> WSL 2 will be available on all SKUs where WSL is currently available, including Windows 10 Home.
> The newest version of WSL uses Hyper-V architecture to enable its virtualization. This architecture will be available in the 'Virtual Machine Platform' optional component. This optional component will be available on all SKUs.
I don't really grok this: does this mean Hyper-V will be available on Windows 10 Home? That's the only reason I am considering buying a Pro license.
Stop blaming Hyper-V for "crippling other VM software". You ever tried to run VBox and KVM together? It's a limitation of the processor's virtualization extension.
The complaint is not necessarily with Hyper-V itself, it is with WSL2 requiring Hyper-V, thus stopping you from using WSL2 and VBox (or VMware) on the same machine - though apparently they have very recently fixed this.
Problem is, once Hyper-V is installed and enabled, no other VM can even be started, regardless of whether any Hyper-V VMs are actually running. This limitation does not exist on Linux AFAIK. I have run libvirt-based VMs and VirtualBox VMs at the same time on Linux just fine in the past.
That’s not what’s happening. Hyper-v is a type-1 hypervisor. When it’s running, even your Windows instance is running within hyper-v. Windows 10 has the hypervisor platform that lets other vm developers hook into the hyper-v host architecture, that’s how you can get android emulators and virtualbox running under hyper-v. It all works fine but is a little unintuitive to set up.
Most likely you just weren't using VirtualBox's own virtualization, since it can't work alongside KVM. Current versions of VirtualBox can just use KVM, like libvirt does.
Although, I don’t know if both kinds of VMs can run in parallel. I recall at least doing so with VMWare in the past.
The thing is, when enabling Hyper-V on Windows, you can’t do anything without fully rebooting. On Linux I know for a fact you do not need to reboot to switch between VMs.
Hyper-V is a type 1 hypervisor. Even windows runs within it when enabled. Comparing to KVM would be better rather than VBox or VMWare. Try VBox with KVM to see same effect.
The things I liked the most about WSL were the ability to share the same file system (and RAM, etc.) and not have to care about partitioning any resources. And to search the Linux files on the Windows side and even modify them with tools that can handle them (yes, I know what I'm doing). Now all of that goes out the window(s)... so I'm pretty unenthusiastic about WSL2. And the 9P server has been next to useless for me on WSL1. It's so darn slow for a local filesystem, it's almost like accessing files over SMB.
It needs a VHD, which requires setting aside a partition of your disk, meaning you can't dynamically share space with your Windows files. That's a huge downside for me. (By "partition" here I'm just referring to the English sense, not the MBR/GPT per se.)
You also can't search or modify your Linux files directly from Windows, like I mentioned. The 9P server with WSL1 has been so slow as to be unusable for some directories when I try to use it. Does it feel like native SSD speed on WSL2?
As for RAM, does it really share RAM with Windows? That's great news if it does, but I don't think I've seen any VMs do this, though I've never tried it with Hyper-V... if it does, I'm guessing that's what lets it use memory ballooning?
It's a resizable VHD that handles its business in the background, so I don't understand the complaint. And you can access your Linux files in Windows just fine when using WSL2. Just go to `\\wsl$`.
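e.g. from PowerShell (the distro name segment is whatever yours is registered as):

PS> cd \\wsl$\Ubuntu\home
PS> ls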
WSL2 is the first thing to legit make me reconsider my daily-driver Fedora setup in...well, since I started using it in ~2016. I just do not think about it, it's great.
Just like what you normally use with a Hyper-V VM. A resizable VHD which grows but not shrinks, and with a size limit. Heck they even have a document on what to do when you exceed the default 256GB limit: https://docs.microsoft.com/en-us/windows/wsl/wsl2-ux-changes...
And yes the \\wsl$ is the 9P server I just tried to explain is incredibly slow compared to normal files on WSL1. I haven't tried WSL2 yet but I don't expect going through a VM would be faster.
I've never looked as to whether it shrinks it, but I assume not (that would be hard). OTOH, storage is cheap, so I've never really worried.
I never used WSL1 in anger, but accessing WSL2 over the \\wsl$ is not particularly slow. It's not as fast as native, but I don't notice it. I do almost all of my access of files on the Linux image via a terminal and VSCode-over-WSL-Remote, though.
Well "storage is cheap" for you but why assume my money and everyone else's is cheap? I'm low on space and I need to shell out hundreds of dollars to get a larger version of what I have just to use WSL2. Or I could just keep using WSL1 and save my money, especially in this economy. Why in the world would I get a new SSD just for WSL2?
This isn't about using it "in anger". I'm not pushing it to some kind of corner case, you just need to use it for real instead of trying hello-world examples. You notice this immediately as you're dealing with nontrivial folder contents. To give you an idea, this is the speed of raw grep from inside WSL1 Ubuntu:
$ time sudo grep -ri asfadsfadf /etc
real 0m0.075s
user 0m0.016s
sys 0m0.063s
This is the speed from \\wsl$ (MSYS2):
real 0m9.227s
user 0m0.078s
sys 0m0.561s
And this is the speed on the raw files from Windows (MSYS2):
real 0m0.092s
user 0m0.000s
sys 0m0.046s
\\wsl$ is literally some 60x-70x slower than direct access, and it's not because I'm "using it in anger". If you don't believe me, try it yourself with any program you prefer and see if you get similar speed before you tell me I'm wrong.
This is par for the course on \\wsl$. Explorer lags, too, if you try to browse a folder with a bunch of subfolders that actually have some contents. It's plain as daylight to me. Not noticing to me is like not noticing that your car suddenly goes 1mph instead of 65mph.
time grep -ri asfadsfadfg /home/me/python-venvs
0.79s user  0.14s system  99% cpu  0.928 total
wsl time grep -ri asfadsfadfg /home/me/python-venvs
0.83s user  0.10s system  99% cpu  0.932 total
EDIT: cleaned up and formatted the output for better visibility, reacting to your comment.
First command ran from a zsh Terminal session in WSL
Second one ran from a powershell session using the wsl "bridge" executable.
I can't make sense of your command lines (why are you passing grep to grep??), but you're comparing pure-Windows against pure-WSL? I was comparing the two of those against \\wsl$ which is the slow one...
Normal interactions from within WSL of course feel normal. It's just a VM with a fancy name after all, which removes pretty much all the overhead of the Windows I/O system - Linux has hence been faster there for ages. I'm surprised \\wsl$ would be faster though; that should have more overhead going through a VM, not less. If I ever try out WSL2 I'll have to give it a shot, but somehow my past experiences don't leave me optimistic...
Supposedly it can release memory, as of October. It wasn't working in January when I last tried it, but I guess it's worth putting it back on auto to see what happens.
Part of me clings to the hope that this is strictly because rewriting all those gazillion syscalls was a PITA, and they will eventually converge on something in between WSL1 and WSL2, where the Linux "front end" is used with NT, like the opposite of a rump kernel.
VMWare [0] will also support the Hypervisor Platform API [1], allowing it to run beside WSL2, which uses Hyper-V. VirtualBox is still struggling and runs slowly, if at all, with WSL2 and Hyper-V [2].
This of course has implications for how you set up Docker on a Windows machine, each way having its pros and cons.
This is a worrisome development. We seem to be heading toward a future where hypervisor drivers can only be provided by Microsoft, and you're out of luck if they don't do the job.
Hypervisors that need more capability are going to have to do some crazy stuff to stay compatible. One option is to save and restore the whole hypervisor state whenever it runs. Another option is to be some sort of boot loader, seizing the hypervisor capability before Windows can get to it.
Type 2 hypervisors are going to be in an interesting state. The Windows bootloader already has a flag you can easily toggle (without removing the Hyper-V role or config) but the problem is more features are starting to depend on Hyper-V being there one layer up (it's a type 1 hypervisor). I'm surprised nested virtualization can't be used for the type 2 hypervisor since Hyper-V picked this feature up a few years back and it seems like that would have solved all of the problems.
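That bootloader flag, for reference - run from an elevated prompt, it takes effect on the next reboot, and `auto` turns Hyper-V back on:

bcdedit /set hypervisorlaunchtype off
bcdedit /set hypervisorlaunchtype auto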
Nested virtualization only works when the outer hypervisor supports all the features that the inner hypervisor needs. If the outer hypervisor is Hyper-V, then that limits the inner hypervisor to the features that Hyper-V bothered to implement.
In other words, you can't implement a hypervisor more advanced than Hyper-V.
If you instead want to be on the outside, with Hyper-V on the inside, then you can't just write a driver. You have to implement a boot loader. You also have to implement nested virtualization, even if you otherwise had no need to do so.
Hyper-V is able to nest ESXi and KVM (and Hyper-V of course) and vice versa ESXi and KVM are able to nest Hyper-V. I'm not sure what would be limited.
Yeah that's the difference between a type 1 and type 2 hypervisor. A type 1 runs on the bare metal, a type 2 runs via drivers underneath an existing OS. Since Hyper-V is a type 1 (like ESXi) you can't use a type 2 hypervisor on the root VM to escape being under Hyper-V you either have to do some sort of nesting or disable Hyper-V from loading and reboot.
Heh, it's funny that I'm actually a hypervisor developer and I don't use that terminology. The whole team doesn't. I actually had to look up "type 1" and "type 2" to remind myself which was which. Those are terribly non-descriptive. Our terminology is "bare metal" or "OS" or "boot loader", and "VMX driver". Anyway...
Our hypervisor is far more demanding than ESXi, KVM, and Hyper-V. It needs to interact with low-level Intel processor details in a way that is not supported by any other hypervisor. It won't run correctly if nested inside any other hypervisor. If we supported running under another hypervisor, we would lose important functionality.
If it becomes impossible or impractical to disable Hyper-V, we'll need to do something strange and annoying. Perhaps we could load the driver very early in boot, before Hyper-V loads. Booting as an OS ("type 1", ugh) is an option too, but maintaining that and using it is a real pain. Probably we'd drop Windows host support before we did that.
Ever since Docker became a popular tech, VMware has had little choice: I need docker for my job, so I can't use VMware. That doesn't get them the dollars they want.
I've been in the opposite boat for a while. I need VirtualBox, so I can't use Docker on Windows, but it seems that has changed since February, so I will give that a try.
I switched to WSL2 a month ago and it's been great. With WSL1 I'd regularly run into subtle compatibility problems but haven't seen anything like that with 2. Despite a handful of annoyances, the Win10+WSL2+Visual Studio Code Dev environment has been a lot more pleasant than OSX.
I never thought I’d say it but because of this exact setup with VS Code I’ve actually stopped using my MBP at home in favor of my desktop, which was really only built with gaming in mind. I’ve now gotten used to having all the extra computing power at hand and would struggle to go back to a laptop as my primary development machine.
I'm looking at selling my Mac and just getting an iPad to replace it - running Windows 10 + WSL + VSCode on my desktop pc is more pleasant for me to code at home with than my macbook.
I've stopped coding on my 2012 Macbook Air in favour of my desktop PC for similar reasons. The fact that my ports are shared between Windows and Linux makes the web dev I do a dream.
The new Terminal app is great too. I've got all the same split pane stuff that I rely on in iTerm, including useful keyboard shortcuts for switching between them and resizing them. I'm very impressed.
I'm currently trying to do the same, I'm sick of running out of ram on my MBP. What software/setups do you find indispensable to move to Windows? I'm not a first time Windows-only user, but first time to do full development work. For what it's worth I already use VSCode, I'm doing FE Development, and I use Docker a lot. I'm kind of unsure due to having to lose my Mac-specific tools such as Maccy (Clipboard Manager) and Alfred (Quick Launcher).
But why couldn't you do the same before? Even before WSL there was MSYS, which IMO is great, unless you need to compile a C program that directly uses Linux headers.
It’s possible I could have and I just wasn’t sufficiently in the know about which tools to use. I hadn’t used MSYS. I’ve taken my current workflow so far as to even use nix as my primary package manager. Would this have been easily possible before? I ask in earnest. My setup right now is such that even mildly arcane things like that just seem to work for the most part.
Did the same. Just that I use Emacs with X410. The only gripe I have at the moment is that Docker eats up a lot of memory. But everything else works perfectly fine. Was quite surprised and happy.
I'm agnostic on Win10 vs OSX as a desktop environment. What I like is running a real copy of Debian that starts instantly and has transparent access to local files. I'm sure I could cobble something together with VMware, but this just works so cleanly. Homebrew is a fine option on OSX, but it's nowhere near as nice as a full Linux system.
The update to migrate the filesystem took way longer than the docs said, I ended up just letting it run and came back a few hours later. And it wasn't like I had much in there. Otherwise it was straightforward.
I switched to windows insider to get it, but WSL2 should hit GA in the next month or two.
The most annoying issues mysteriously fixed themselves after 6mo or so: a trackpad that would go haywire every once in a while, and diagonal tearing on the screen. It was a brand new model so maybe there were driver updates.
The fingerprint scanner is way flakier than the mac. This is on a Lenovo X1 Extreme gen2.
Win10 is just a little less... elegant than OSX. But not in a way that really bothers me on a day to day basis.
Question -- from what I understand, WSL2 is closer to Linux running in a VM, whereas WSL1 was like a Windows kernel level version of Cygwin. Is that mostly correct?
Regarding that, remember a number of years ago, prior to VMs, there was a patch set for the Linux kernel porting it to user space -- so you could run Linux as a user process, which ended up functioning similarly to running it in a VM. Would WSL2 be closer to this model, or is it really running Linux under a stripped-down Hyper-V?
> Question -- from what I understand, WSL2 is closer to Linux running in a VM, whereas WSL1 was like a Windows kernel level version of Cygwin. Is that mostly correct?
Yes, that's essentially correct. You could also think of it as WINE in reverse; one big difference from cygwin is that it runs unmodified binaries rather than needing to recompile.
> prior to VMs there was a patch set for the Linux kernel porting it to user space -- so you could run Linux as a user process
> which I think is still a thing, although not super popular.
Yeah, I compiled a version last week. It's occasionally nice for kernel devs to have an env to try out new ideas, so it more or less gets maintained.
Google's gvisor project is sort of a blend of that and WSL. A reimplementation of the linux syscall layer like WSL, but as a linux process like UML instead of as a kernel module as in WSL.
Did WSL1 not use the same binaries as a regular distro? IIRC it will download regular x86-64 .deb packages straight from e.g. Ubuntu’s APT repositories.
Kind of. But from a userland perspective, Cygwin is a (very slow and complicated) layer between the application and the kernel, whereas WSL1 really is (and importantly, feels) like just another API for the exact same kernel. That's kind of its entire advantage over Cygwin, so I wouldn't really compare them like that.
There is a big difference in that WSL1 only needed to implement Linux syscalls, they could use glibc and all other libraries from the Linux distribution that you are running, whereas Wine needs to reimplement all Windows APIs to work with most applications.
Also, Wine is really a program on top of Linux, while personalities are a core feature of Windows. At a low level, the Windows kernel is agnostic wrt. the system ABI. Win32 is just one personality of the Windows kernel, just as NT used to have an OS/2 personality to run 16-bit OS/2 programs.
When you boot into Windows 10 normally, you are interacting with applications run by one of these subsystems.
WSL1 is now another of these subsystems.
Like wine, you are getting an API that looks just like Linux.
But, unlike wine, that Linux is a first-class citizen as far as the OS is concerned. It would be straightforward for the Windows devs to connect the Linux subsystem across to the Windows security module (see diagram). There is no analogy for that with Wine on Linux.
Ah, I completely forgot about flinux. Seems there were a lot of neat innovations back in the earlier days, including a couple of single system images (SSI) implementations (the opposite of VMs, making a network of hosts look like a single multiple processor system).
On a Win10 host, I want to edit code in sublime and have the runtime environment in linux. I want fast code search and fast build times.
I've found no way to accomplish the above.
Code in Windows + Windows Sublime means build in WSL will be slow because cross-os i/o
Code in Linux + Windows Sublime means code search will be slow because cross-os i/o
Code in Linux + Linux Sublime + Windows Xserver means interface is laggy because, well, Xserver
VSCode gets around this by running in a client/server model with client on Windows and Server in Linux... but then I'm stuck with an Electron based editor instead of Sublime.
What Electron-related problems have you had with VSCode? I've been using it as my primary for a few years now and it's been nothing but stellar. And that integration sounds incredibly cool.
When I tried VSCode a year ago, I found it to have an input lag, compared to sublime.
Given your comments, I just installed it right now to try it out again. I like the built in terminal & git integration. Code search performance is excellent (w/ code hosted in WSL) due to this new client/server model... and there was no input lag in the editing window.
The only lag I noticed was in CTRL-P selector, but that's a small compromise given the client/server model solves the real time waster.
I think I'm going to try it out over the next few weeks :)
The electron hate really is getting old. vscode and some other electron apps run far faster than many comparable programs with a more traditional architecture. Yes electron could be considered inelegant, but so can a great many things in computing. If it works, we should use it.
Yep. If anything, "old-school" Visual Studio is the one that feels bloated by comparison, despite being a native app. Software quality cannot be reduced to the choice of language or framework.
VSCode is truly a full-fledged IDE for TypeScript at least, and via plugins it can play the part for several other languages, including Rust. Mostly those depend on the quality of their Language Server implementations.
Not an Electron-related problem, just a general issue: what made me give up on VSCode after trying to make it work again (this March) was the fuzzy search just not being very good in my Rails projects. I can ctrl-shift-r in Sublime and get to the method definition I want in 4-5 characters. It's probably my primary code navigation technique: see a thing, wanna check the def, 1-2 seconds later I'm there.
There's some VSCode extension that claims to mimic this but the fuzzy search seems much worse and it just doesn't quite work. Yes you can use the solargraph extension to sort of go to the def in vscode and it does all sorts of cool things, but I just never got into it.
It seems like a small thing but I just couldn't get over it, because I do it so often.
The main other difference is I much prefer the GitSavvy Sublime Text extension to VSCode's built-in git handling. Particularly how I can see a summary of the commit I am making in GitSavvy, whereas there doesn't seem to be a way to get all the file changes in one screen in VSC (though to be fair I didn't look around much for a way to do this, there probably is an option somewhere).
That being said if fuzzy search was working the same as in Sublime, I'd probably switch to Code.
Yeah, its level of support for different languages definitely varies. For TypeScript it feels like a paid IDE. For Rust and Clojure it works quite admirably, since it's the most prominent free (graphical) option for each. For Python, it's... underwhelming. That's too bad about Rails, but I also can't say I'm super surprised.
People commonly say that, but with WSL 1 it was technically quite correct: it’s a Windows Subsystem, for providing Linux. Linux Subsystem for Windows would arguably be slightly inaccurate. The name just feels so strange because Windows hasn’t had many such Subsystems (win32 has essentially been the only one this century).
Under WSL2, the WSL name is no longer technically accurate at all, but it’s what everyone knows it as, and the difference normally doesn’t matter, so they keep it.
I think of it as "Windows Subsystem for [running] Linux".
The architecture descends from the Windows NT architecture:
The user mode layer of Windows NT is made up of the "Environment subsystems", which run applications written for many different types of operating systems, and the "Integral subsystem", which operates system-specific functions on behalf of environment subsystems.
Though as other users pointed out, in WSL 2, the name is inaccurate.
There was a MS employee on twitter the other day (sorry, I forgot who) saying it was because there were legal issues with naming something with a title that has someone else's trademark as the first word.
That completely makes sense in my mind, just because my favorite Reddit app is called Slide for Reddit. I think Reddit forced everyone to use "X for Reddit" in their name as opposed to "Reddit X".
Reddit did exactly that, like five or six years ago. A bunch of apps had to change their names, which is how you end up with apps like "rif is fun for Reddit," where the first part stands for "Reddit is Fun."
For those of you who will be running Docker Desktop on WSL2 - which lets you "transparently" use the "docker" command in the other Linux distro you run in WSL2, and lets that distro's filesystem share volumes with Docker containers...
Anyway, you need to know that when this stupid Docker and/or WSL2 VM is taking up all your Windows 10 memory, not all is lost.
Using these two commands (run inside the distro), you can force WSL2 to give back the memory it is holding prisoner:
echo 1 | sudo tee /proc/sys/vm/drop_caches      # drop the page cache
echo 1 | sudo tee /proc/sys/vm/compact_memory   # defragment free pages so the VM can hand them back
Especially if the one making the mess is Docker's "VM".
Now that Docker Desktop is using WSL2 instead of its own VM, you can't see the bastard in the Hyper-V console at all... apparently the whole WSL2 VM is not visible in the Hyper-V console, even though it IS a damn VM. But it does appear in Task Manager as "Vmmem", taking up gigabytes of your memory.
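If you'd rather cap the VM than keep draining it by hand, the supported knob is a %UserProfile%\.wslconfig file; a minimal sketch (memory and processors are the documented keys, the numbers here are just examples):

[wsl2]
# cap the VM's RAM so Vmmem can't balloon past this
memory=4GB
# optionally limit vCPUs as well
processors=2

Run wsl --shutdown afterwards so the VM restarts with the new limits.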
They mean MS replacing the Windows kernel with the Linux kernel. Which doesn't sound as crazy as it did just a couple of years ago. Just put all the Windows crap in a VM, similar to WSL2.
I don't understand how that would provide any benefit. First problem: all those lovely Windows drivers for every piece of hardware and laptop ever made would need porting to Linux.
I would really, really love to go back to using Linux natively but sadly the hardware on this laptop is not supported well enough. So I eventually gave up and installed Windows while using WSL2 extensively.
Performance is on par with running Linux natively. I compiled my own kernel with ext4 encryption support, and it works quite well. I use it for 90% of my files. Combined with the new Windows Terminal and VSCode support for WSL, there is little friction left. (Also, to address some comments, you can view Windows files from Linux and Linux files from Windows, with some reduced performance.)
Is there a shell+terminal+font combo that folks really like for WSL? I've tried the new terminal beta with zsh and--for reasons I can't quite articulate right now--it feels "off" and I invariably proceed to boot up virtualbox for my debian, i3, kitty, fira code setup to get work done. Maybe it's the break in filesystems and $HOME?
Edit: learning that WSL2 does away with the syscall-translating tech and mandates Hyper-V, the question of whether to look into this further becomes thornier!
It looks exactly the same as xterm on my native Linux laptop.
I've even gone as far as running i3 through WSL1 / wsltty and it worked very well with 1 monitor. I stopped using it because I have 2 monitors, but if I had 1 monitor it would have been usable for day to day development.
Windows Terminal uses grayscale instead of subpixel antialiasing by default. They recently added the ability to turn on subpixel antialiasing. (I'm not sure if that's what you mean by "off".)
Zsh has a long list of nice features that Bash lacks. Two of my favourites (with a minimal config sketch at the end of this comment):
1. Imagine you are in a directory with file1.ext and file2.ext. You type f[TAB], and it autocompletes to file. So far, Bash and Zsh are the same. If you keep pressing [TAB], Bash prints a list of files so you can finish typing the name yourself, while Zsh starts cycling through the filenames.
2. In Zsh, you can type for example /u/lo/b[TAB] and it automatically expands the path to /usr/local/bin. If multiple paths match the original expression, it shows a list and pressing tab cycles through the options.
Zsh is quite feature-rich, and for the most part a drop-in replacement for Bash. See this slide deck for more Zsh awesomeness:
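If you want those two behaviours without adopting a whole framework, a minimal ~/.zshrc sketch (compinit and the auto_menu option are stock zsh; no plugins assumed):

autoload -Uz compinit && compinit   # turn on zsh's completion system
setopt auto_menu                    # a second [TAB] starts cycling through matches (on by default)
# with compinit loaded, /u/lo/b[TAB] expanding to /usr/local/bin works out of the box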
A great step, but I feel I have to repeat my "ceterum censeo": WSL would really benefit if Microsoft added native Wayland support, so that the Windows desktop would appear as a Wayland server to the Linux VM. This would allow any Wayland-compatible Linux application to run seamlessly on the Windows desktop while fully utilizing the native graphics drivers.
I recently wanted to run some Linux apps on Windows (simple stuff like ls, ssh-keygen, etc.) and found that most instructions actually pointed me at installing an Ubuntu VM rather than using the native subsystem stuff touted a couple of years ago.
I was a little disappointed in this. Running a VM is a lot more hassle than just running the apps natively.
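For reference, the subsystem route is still only an optional feature away; from an elevated prompt, something like:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

then reboot, install Ubuntu (or another distro) from the Microsoft Store, and ls, ssh-keygen and friends are right there with no VM to manage.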
I've been using WSL1 for a long time now and I am very happy with it.
The way I use it is to keep all my files and projects on the Windows FS and code using Windows software (VSCode, Eclipse), but when I build and test, I use WSL1. I didn't feel that the performance was that bad, since I have an SSD and I don't mind waiting a few extra minutes to rebuild a project.
But the most important thing for me was that I didn't have to manage another filesystem. All my files are still on Windows where I've always kept them, file sync and backups work as expected, and I can easily browse and edit these files on the Windows side.
For Docker, I installed Docker on Windows and hooked the Docker CLI in WSL1 up to it with some configuration (sketched below), and I was happy with it.
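(For the curious, the usual recipe is to tick Docker Desktop's "Expose daemon on tcp://localhost:2375 without TLS" setting and then, in the WSL1 shell:

export DOCKER_HOST=tcp://localhost:2375   # point the Linux docker CLI at the Windows-side daemon

Keep in mind that endpoint is unauthenticated, which is why it's only exposed on localhost.)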
The question is: for the way I use WSL1, will WSL2 be an improvement or a drawback? And should someone like me switch to WSL2?
I would love to use Windows WSL full time, along with docker containers, for general web development.
There is one thing, though, that no one talks about: the fonts!!! Font smoothing is awful on Windows... will this ever change?
The new terminal looks alright and the default font is nice, but most other programs I like look awful on Windows: Sublime Text, VSCode, gVim... I just can't find a monospace font that looks "thick" enough for readability and has smooth edges -- especially with dark text on a light background.
> WSL extension allows for the VS Code UI to run on the Windows side with a VS Code Server running within the WSL VM
This is why I finally switched away from developing on Windows: something about the VS Code remote server setup spawns tons of processes and eventually slows WSL to a crawl. Possibly my fault, since I haven't gone through all my extensions and settings carefully, but it's also not an issue when running VS Code on Unix without the server.
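For anyone hitting the same wall: the server lives under ~/.vscode-server inside the distro, so before giving up you can at least inspect and reset it by hand (plain ps/pkill, nothing VS Code-specific):

ps aux | grep -i '[v]scode-server'   # list the remote server's processes (the [v] keeps grep from matching itself)
pkill -f vscode-server               # blunt reset; the next connection spawns a fresh server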
Does anyone know if WSL2 supports characters in filenames that Windows normally forbids, such as ":" and "<>" etc.?
WSL had a limitation where it couldn't display such characters when I accessed a Linux partition through Samba. I know there is a Samba workaround using name mangling, but I prefer to see the actual filenames as created by org-mode.
Edit: Samba was running in an Alpine VM that mounted a ZFS partition. The network folder was mounted through Windows.
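In case it helps, the mangling-free fix on the Samba side is the catia VFS module, which remaps characters Windows forbids onto ones it accepts; a sketch for smb.conf (vfs_catia and the catia:mappings syntax are real, the specific mappings below are just an example covering ':', '<' and '>'):

[share]
    vfs objects = catia
    # remap ':' '<' '>' onto private-use codepoints Windows will accept
    catia:mappings = 0x3a:0xf022,0x3c:0xf023,0x3e:0xf024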
Slightly off topic, but: I've been a Mac user for some 6 years now and was recently forced to switch to either Linux or Windows for a deep learning desktop setup. I'm wondering how complete the Windows experience will feel to a long-time Unix user. Windows seems to have everything these days with WSL: great programming, gaming, and media/rendering environments, and it's super stable. What's bad?
I have a MacBook Pro that I wanted to turn into a Windows laptop.
Unfortunately, if you install Windows from scratch, without going through Boot Camp to install it side by side with macOS, Hyper-V support is disabled.
This means that I cannot use WSL2. Or Windows, really, since without a working Linux environment it's useless to me, and I don't want to invest in WSL v1. Might as well go for Ubuntu 20.04.
I find that most of the Linux-like versions of user-space tools can be installed via Homebrew. What functionality were you hoping for from a Linux VM?
Would this VM be able to include any GPLv3-licensed tools?
Yeah, the drive for a "WSL" really exists because Microsoft/Windows doesn't play nice with the *nix world of tools and commands that developers have fallen in love with.
With OSX, the terminal experience is great; no need for a "WSL" over there!
This is an interesting take considering Linux is Unix-like and macOS is technically a BSD derivative. I don’t think macOS will ever have a Linux syscall translation layer/subsystem (WSL1) nor a lightweight VM with special integration (WSL2) but you can do sort of similar things in third party software, I think. Docker for Mac is using the native VM framework and includes a filesystem integration called osxfs, but despite mentioning the source code in documentation it does not appear to be open source at this time?
Exactly. It really doesn't need a syscall translation layer anyway, given its Unix + BSD and POSIX foundations. Although this open-source project [0] does exist for macOS, it looks like it's inactive.
There are several Windows features that rely on Hyper-V under the hood. Home supports them. It does not give the user the ability to run their own virtual machines, though.
Or to make most open-source non-Windows software work. So many tools run great (or only run) under Linux, and the dev tooling for these projects is also heavily Linux-centric. Often you might be able to get a package working under Windows, but support is thin and it's prone to breakage. It's easier to just run it under Linux in a VM, which is what WSL2 is doing.
Unfortunately, no. This would have been a very useful feature for me, since currently I use a Windows laptop to write code that runs on a Linux GPU server under my desk.
This is great news. It looks like I no longer need a separate Linux VM or a dual-boot Ubuntu desktop install for testing my software, given that it will also run on WSL2 once it reaches general availability.
Another clever move from Microsoft and the Windows team.
WSL2 is a fantastic product. The ability to debug Python code on Ubuntu and alt-tab to Steam is unparalleled.
However, there is a fundamental issue: Microsoft is treating this as a toy project. The number-one problem is that the filesystem inside an Ubuntu shell / WSL2 container sits inside a hidden filesystem.
It is not exposed to the rest of Windows, nor to backup tools like Dropbox, etc.
So you have a huge chance of losing important documents inside a WSL2 container.
Now, you can "Cd /c/Documents" and do your work - but this filesystem is unimaginably slow. Not sure if it is because of file mounts, etc. The performance is incredibly bad.
If there are WSL2 devs here - please make the working container filesystem a first-class directory inside the rest of Windows. I'll live with the performance issues for a while... but this is a blocker.
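(Worth noting: the distro filesystem is hidden rather than completely sealed off. Windows can reach it as a UNC path, \\wsl$\<Distro>\..., and from inside WSL2:

explorer.exe .    # opens the current Linux directory in Explorer over the \\wsl$ share

works today.)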
Yes - however, those paths are not available to any backup system. And if you want to switch from Ubuntu to Fedora, there is no easy way to do it.
If you accidentally uninstall Ubuntu, you will lose that partition (and all your work). This has happened to me once already.
In Linux, you just have a separate /home partition. You can have a dozen operating systems merrily using the same home partition; you can trash your OS, but your home directory doesn't get trashed.
I have a pending request to offer the concept of a "home directory": the filesystem on which I work inside WSL2 should not be opaque to the rest of the operating system.
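In the meantime, the closest thing to a safety net is wsl.exe's built-in tarball export/import, run from PowerShell or cmd (the flags are the documented ones; the names and paths are just examples):

wsl --export Ubuntu C:\backups\ubuntu.tar                          # snapshot the distro's entire filesystem
wsl --import Ubuntu-restored C:\wsl\ubuntu C:\backups\ubuntu.tar   # recreate it later, even on another machine

It's a plain tar of the root filesystem, so at least your home directory survives an accidental uninstall if you snapshot regularly.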
Back in the '80s, wasn't there some issue with Microsoft having to pay AT&T royalties for Xenix, and in order to get out of that agreement they had to agree to stop doing anything related to Unix?
It would be nice if it had GPU support so I could use it for deep learning prototyping. I might use Windows instead of Ubuntu then. I can't use macOS now that they've dropped Nvidia.
I always laugh when I think about this. Microsoft's tools are so bad that they had to ship an entire other OS inside their own OS just so that programmers can be productive.
While similar in concept, Wine is better than WSL1 in a lot of ways due to supporting sound, graphics, and GPU acceleration.
I've always been suspicious of WSL because it follows a narrative that benefits Microsoft - that Linux is primarily a command-line/server environment, and the graphical and audio applications for Linux are not worthwhile. It doesn't have to be like this. WSL1 was based on an Android environment for Windows Phone called Project Astoria, which did support graphics.
The graphical and audio applications for Linux aren't worthwhile, or at least they're not competitive with the top proprietary offerings. It's not really about the theoretical capacity of Linux to support audio and graphics. The low market share of desktop Linux makes it not economically worthwhile for top proprietary software companies to port to Linux. That's just an unfortunate fact. I don't really want to edit graphics with GIMP when Photoshop exists, and I don't really want to produce music in Ardour when Ableton Live exists.
I think the only real way to fix this is to change the economic incentives by increasing desktop Linux market share, which is a long and uphill battle.
Yeah, I was trying to set up Selenium (a testing framework) with WSL2. It's workable, but you need to write extra code to hack around WSL's limitations on graphical apps. Using Selenium is a much better experience on macOS and desktop Linux.
WSL1 can do graphics just fine if you run the X server in your Windows environment (which is the right way to do this in any case). There are implementations - forks of Xming, I assume - that are set up to work like that with minimal hassle, e.g. https://x410.dev/
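The wiring is a one-liner under WSL1, since the distro shares the Windows network stack (works the same with x410, VcXsrv, or Xming; display 0 assumed):

export DISPLAY=:0    # WSL1 shares localhost with Windows, so the X server is just display 0
# under WSL2 the VM has its own address; the usual trick is to aim at the Windows host via resolv.conf:
export DISPLAY=$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0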
Honestly, I use WSL to ssh into servers in a pinch and get things taken care of... my experience actually trying to develop things (Python/Django/Node) has been poor otherwise.
I seem to perhaps be in the minority here, but I got onto the slow ring just for WSL2 and it has been excellent so far. I am primarily a Ruby developer and things just work.
Even if Microsoft were willing to throw away what gives Windows the desktop market share that Linux will never have, and go back to its Xenix roots, Linux would probably get about as much back as it has been getting from all those Android OEM contributions.
So not really a reason to cheer what would be yet another pyrrhic victory in the desktop/mobile space.
I've been a happy user of WSL2 for two months, and I absolutely love it. After switching to it, I completely stopped using my Mac Mini and Linux.
It has made my life so much easier. All the Linux stuff and development work gets done inside Windows Terminal + WSL2 + NeoVim, and I still get to keep Windows for gaming, design, office work, other random programs, etc.
I've been a happy user of running a Linux VM (without X) beside Windows for the past six years (first VirtualBox, then Hyper-V after switching to Win10) :p
Congrats to the WSL2 team; they introduced this pattern to a much wider audience.
Yep, using VirtualBox and it works fine for me. I don't use Hyper-V because it's buggy with my build for some reason (Nvidia driver crashes, BSODs). I don't really get this WSL 2 hype.
I'm in genuine amazement at a lot of the comments here - why not just dual-boot Linux instead of using it in a VM? Productivity on a Linux machine can be so far ahead of Windows, especially if you've got a terminal-based workflow and maybe a tiling WM.
I've been using WSL for the last year. No complaints at all so far. I don't want to dual boot because of the hassle of restarting my computer every time I want to do something different - like switching between a video game (say, Age of Empires) and development. And I don't want the hassle or overhead of a VM, with resources permanently allocated to it whether it needs them or not.
This way, it feels like I'm on Linux the whole time with a Windows themed desktop, great support for games and no hardware issues. This isn't about ideological purity for me. If everything works, I'm happy.
My workflow is mostly terminal based, and I used to use tiling WMs, yet I am now back to Windows + WSL.
Why? Because GPU driver compatibility is still so bad on Linux that I can't get any of the popular distros to work properly on my setup (a laptop with a hybrid Intel/Nvidia GPU, a Thunderbolt dock, and an external display with different scaling than the laptop display).
And yes, if I spent a weekend fiddling with Nouveau drivers I might get it to run somewhat decently, but I really don't want to spend that time on my work setup. Windows + WSL works out of the box, so I'll use that.
I have been on Manjaro with XFCE for the past couple of years and it's a dream. Haven't had one driver problem. Zoom, Slack and Beyond Compare all work without issue as does VS Code. Installing and updating all software via the package manager UI is so much better than Windows that I'll never go back. Furthermore, I think that XFCE provides a better Windows experience for developers than Windows does - at least I don't have to find, download and install 7+ taskbar tweaker or hack the system configuration registry to get the features I want out of it.
Mixing business and gaming on the same machine caused problems for me even when I was fully on Windows, so I've always kept separate machines for that.
I used to use Linux as my main OS for ~5 years, and I dual booted for maybe 1-2 years of that time.
In the end, I just found that I am more productive on Windows. On Linux, I really hated the driver compatibility issues (they happen quite often on laptops), core programs were not so stable (the KDE window manager), and Skype sucked on Linux.
Also, I could not game on Linux, so that sucked even more. On Windows I have Steam, with zero worries about registry and DLL hacking through Wine.