Once upon a time these were usually posts on one's own blog (self-hosted or not), started with a HOWTO keyword, and helped people in just the same way. Nowadays everything a developer makes must be on GitHub or it won't ever be noticed.
I host my blog using GitHub Pages and Hugo. It's all still Markdown, but it also allows people to go to my vanity URL and see things formatted exactly as I want them to be.
I personally like these detailed step-by-step READMEs. It's helpful for someone clueless dropping in to get started quickly and develop a working prototype. Too many times you're left to your own devices when setting up dependencies. I totally understand if you're expected to do some homework. On the other hand, it certainly helps to gently guide the reader versus giving them a high-level request to install this or that.
The problem with each of these tools (Lima etc.) is that it’s still fundamentally a virtual machine under the covers.
The great thing for me about WSL (and why IMHO it was worth all the effort they put into it), is that because the kernel process tree is running as a native Windows process tree, I don’t have to pre-allocate memory ahead of time.
This is murder on Apple’s RAM-restricted laptops, and kind of rubbish on machines with massive RAM too. Most of the time, most of your memory goes unused. The base model Airs/Pros for example need to have 25-50% of your RAM locked away just to run a Linux container at all - when you already have a fairly anaemic 8GB to begin with that’s pretty dire.
It's a shame they essentially deprecated WSL1 and are doing everything in WSL2 now, where exactly this kind of memory allocation, and the performance issues it entails, is a problem.
Not that I think it's a bad decision, but I wish they somehow managed to break Conway's Law and enhance Windows itself by fixing the file system speed issues as well as somehow making NT/Windows processes fork instantaneously just like on Linux :)
We work very closely across the file system, kernel, and virtualization teams, so I don’t think we can blame Conway’s Law for WSL2. And actually our WSL1 fork performance wasn’t too bad IIRC—we have real fork at the Windows kernel level, it’s just not something that can realistically work with the Win32 programming model. I also think we will eventually resolve the file system performance issues.
No, the most important reason to choose a VM instead of a reimplementation of the Linux ABI is long tail compatibility. You can’t realistically replicate and then keep up with every corner of the Linux kernel’s interface. And so with WSL1, software will randomly not work, or it will randomly break after an apt upgrade, and users will get frustrated and switch to a VM anyway. Might as well get perfect compatibility and still have nice integration with Windows via the WSL2 approach.
I think most people understand why the WSL2 approach eventually won out, and it's saved my bacon more times than I can count: when I had to demo something and it broke on my Mac, I could quickly switch to "real" Linux on my personal Windows machine running WSL2.
However, with WSL1, implementing the Linux ABI was such a wicked flex.
Seamless mounting, launching code editors in Windows from Linux, issuing docker commands in the Linux OS while the docker engine and its mounts are hosted on Windows.
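For anyone who hasn't seen the interop in action, it looks roughly like this from a shell inside WSL. These are standard WSL features; the Docker part assumes Docker Desktop's WSL integration, and exact behavior depends on your setup:

```shell
# Illustrative WSL interop commands, defined as a function rather than run,
# since they only make sense inside an actual WSL session.
wsl_interop_demo() {
  explorer.exe .   # open the current Linux directory in Windows Explorer
  code .           # launch VS Code on the Windows side against this folder
  docker version   # Linux docker CLI talking to an engine hosted on Windows
}
```

The striking part is that `explorer.exe` and `code` are Windows binaries being invoked transparently from a Linux shell.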
Doing things with something like Vagrant on Mac sometimes cuts it, sometimes doesn't. Either way, if I'm not shelling out money I could be stuck using a subpar solution like VirtualBox.
Seconding that. WSL1 is an impressive achievement. I use WSL every day and switched to WSL2 a long while back but I still kinda wish WSL1 were the way forward.
Thanks for the detailed explanation! You folks have actually tried to implement an ABI compatible Linux kernel, so I'll definitely take your word for it :)
Yes. But user space can take dependencies on features we didn’t implement yet or we implemented incorrectly or that have bugs in combinations with other features. E.g., more software is starting to depend on namespace features that we implemented incompletely in earlier versions of WSL1. So Docker worked for a while until they started using a different subset of kernel features, for example.
At another point, glibc started depending on more precise behavior of CLONE_VFORK, which we originally didn’t implement fully. So essentially all of user space was broken. We fixed it as soon as we could, but I think glibc may have added a workaround, too. I feel bad that the community had to work around our bugs.
New syscalls and flags get added. The Linux ABI is huge though, even if you snapshot at some “old” 4.x version, say. WINE has similar problems with coverage, and Win32 is famously “stable” too.
> No, the most important reason to choose a VM instead of a reimplementation of the Linux ABI is long tail compatibility. You can’t realistically replicate and then keep up with every corner of the Linux kernel’s interface.
As far as we the public know, Google Cloud Run is based on gVisor, which emulates Linux system calls with userspace code. Seems to work great for the usual container workloads.
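A quick way to convince yourself you're actually on gVisor's userspace kernel rather than the host's is sketched below. This assumes `runsc` has been registered as a Docker runtime per gVisor's install docs; treat it as illustrative:

```shell
# Sketch: run a throwaway container under gVisor's runsc runtime.
# Under runsc, `dmesg` prints gVisor's own boot messages rather than the
# host kernel's log, because syscalls are served by gVisor in userspace.
gvisor_check() {
  docker run --rm --runtime=runsc alpine dmesg | head -n 3
}
```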
Doesn't ReFS do exactly that? I have been surprised they don't try to push it more - the fact that it cannot (or maybe it can now, but in the past it could not) be used for a boot drive seemed strange.
Their explanation was that Windows could not be enhanced but rather should be slimmed down - the APIs that the NT kernel provides can't be efficiently worked around without breaking all kinds of already existing Windows software.
The other commenter alluded to this but didn’t say it outright: WSL2 is essentially just a special VM with limited, API-based hardware access. It’s neat, though: the file systems are connected over 9p.
It is - but its speciality is exactly what matters most - it doesn't preallocate RAM and in fact can dynamically release it back to the host without any user intervention. It's super convenient.
To this day I wish they'd release decent (i.e. well documented) APIs for creating VMs like this yourself with Hyper-V. I sometimes have to spin up a bunch of Windows VMs for testing, and if they could dynamically release RAM and CPU when they aren't using them, it would be amazing. They have something like it with Windows Sandbox, but you can only spin up one at a time.
My comment was really kind of inaccurate, the difference isn't the memory ballooning, it's the special way WSL2 is able to interact with the host. It frees memory much better than normal Hyper-V VMs, and boots way faster than a regular Ubuntu VM.
Microsoft implemented a Firecracker-style microVM (they call it "Krypton") for Hyper-V, which is used for WSL2, Edge Application Guard (run Edge in a VM), and Windows Sandbox. There aren't any really good docs on how to create one yourself using the Host Compute Service.
To add to this a little bit.
There is deeper host-side integration for these applications, and the VMs are virtual-address backed (the guest VM’s memory is represented as a process on the host), which adds more opportunities to free up physical memory.
The Windows Sandbox architecture page covers some of this.
Also WSL2, if I recall correctly, has a relatively minimal base OS with a custom init binary.
I would say if you’re looking to use Host Compute Service, your best option would be to look at the way https://github.com/microsoft/hcsshim initializes containers.
That's not what OP meant. OP meant interfacing with Hyper-V from inside the VM.
Things like dynamic memory allocation and release aren't some kind of black magic, but are something that is actually directly communicated by the guest VM.
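You can watch the guest side of this from inside a WSL2 (or any balloon-capable) guest. A rough sketch, with the caveat that the exact driver name and whether it's modular depend on the kernel build:

```shell
# Peek at the guest side of dynamic memory (illustrative).
balloon_peek() {
  grep MemTotal /proc/meminfo            # what the guest currently "sees"
  grep -i balloon /proc/modules || true  # balloon driver, if built as a module
}
```

Run it before and after a large build inside the guest and `MemTotal`/available memory shift as the host reclaims or grants pages.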
> The great thing for me about WSL (and why IMHO it was worth all the effort they put into it), is that because the kernel process tree is running as a native Windows process tree, I don’t have to pre-allocate memory ahead of time.
You are right, and it can definitely be improved. However, at all my jobs I’ve had 32-64GB MacBooks; is that not true for most developers? When I didn’t, I was able to buy an actual Linux laptop.
> A: It's a proof of concept name only. If I were to distribute this as a finished piece of software, which I probably won't (see why below) I would choose a different name.
> Q: Isn't this literally just a QEMU virtual machine? How simple is that!
> A: Yes, it is. And yes, it's simple. Why did I choose to put this on my Twitter, you ask? Because I couldn't find something exactly like what I did, and because before I posted it, I had 50 Twitter followers, total. I did not expect to get the over 700 likes that it did when I posted it; as a matter of fact, I would have been perfectly satisfied if it had gotten 7 likes.
> Q: What, you won't be distributing this as a finished piece of software!? Why?
> A: From what I can tell (and admittedly, I haven't tested this piece of software), it already exists. Lima seems to have a lot more features (such as file sharing, something that I have not implemented) and seems to be geared towards a different application (containers).
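For the curious, a "just a QEMU virtual machine" launcher really is about this small. Everything here is a placeholder sketch (image name, sizes, forwarded port); `accel=hvf` is macOS's Hypervisor.framework, and you'd use `accel=kvm` on Linux:

```shell
# Roughly the shape of a bare QEMU launcher like the proof of concept above.
launch_vm() {
  qemu-system-x86_64 \
    -machine q35,accel=hvf \
    -cpu host -smp 4 -m 4G \
    -drive file=debian.qcow2,if=virtio \
    -nic user,hostfwd=tcp::2222-:22 \
    -nographic
}
```

After boot, `ssh -p 2222 user@localhost` gets you in. Note that `-m 4G` is exactly the upfront memory preallocation complained about upthread.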
For at least the last year I’ve run my entire dev environment out of a Linux VM I SSH into (locally).
The primary factors are:
1. I, overall, like macOS and the Apple ecosystem. I heavily use Notes, my calendar, several apps I like to sync from my iPad and phone, etc. I'm pretty bought-in, and macOS helps me manage the non-developer stuff a lot easier, in my opinion.
2. Enough things have broken, over time, from macOS updates that my developer experience is subpar compared to a Linux OS.
3. A Linux VM gives me 100% reproducibility, so little delay that I'd argue there is none, and I can reliably use the rest of my MacBook for all the other apps I care about.
It’s worked great, and I’ve never felt like it’s complicated.
Edit: Some people might mention Docker. I've run into just-enough networking issues with Docker that the full VM was worth running.
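The "SSH into a local VM" part of this workflow is mostly just an entry in `~/.ssh/config`; something like the fragment below, where the host alias, port, and user are placeholders for whatever your VM exposes:

```
Host devvm
    HostName 127.0.0.1
    Port 2222
    User dev
    ForwardAgent yes
```

Then `ssh devvm` (or an editor's remote-SSH mode pointed at `devvm`) is the whole integration.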
I sometimes code in assembly for fun (my professional work is mobile app development, though). One of my favorite assemblers is FASM: https://flatassembler.net/
It's still written in 32-bit assembly, which means it won't run on any macOS since Catalina. On the other hand, Linux still provides a 32-bit compatibility mode.
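For what it's worth, getting fasm's 32-bit Linux build running on a 64-bit Debian/Ubuntu box is usually just a matter of enabling the i386 architecture. Package names here are the usual Debian ones; adjust per distro, and note fasm itself is reportedly a static ELF, so the 32-bit libc is mostly for other 32-bit tools:

```shell
# Sketch: enable 32-bit binaries on 64-bit Debian/Ubuntu, then assemble.
enable_ia32() {
  sudo dpkg --add-architecture i386
  sudo apt-get update
  sudo apt-get install -y libc6:i386   # 32-bit loader + libc
  ./fasm source.asm                    # run the 32-bit fasm binary
}
```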
In an ideal world, we wouldn't need to be bothered with Linux-only software. It's incredibly frustrating not only on Mac, but e.g. on OpenBSD as well.
Docker (and Kubernetes) are a prime example, but it's increasingly bothersome to work around dependencies on systemd, logind, etc. Linux software is starting to feel like Windows software: it's this special other thing that requires extra steps and care to run on your machine, and the user experience is far from smooth, if things work at all.
I know about Darling [1] but don't know how practical it can be, because I'm assuming all the proprietary frameworks, like the core web framework, aren't implemented there.
Multipass is somewhat limited, but what it does it does really well.
It is supported on Linux, Windows, and macOS. This means you can have an identical development environment across all your devices using a shared VM script.
I’ve been using Multipass for a couple years now as my “WSL for Mac”. It’s great to be able to spin up a Linux guest on demand. I also use it for Docker shenanigans so I don’t need Docker Desktop or deal with its licensing issues.
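The day-to-day of that workflow is only a few commands. Instance name and sizes below are arbitrary, and `--memory` was spelled `--mem` in older Multipass releases:

```shell
# "WSL for Mac"-style flow with Multipass (illustrative).
multipass_dev() {
  multipass launch --name dev --cpus 4 --memory 4G --disk 20G
  multipass shell dev              # interactive shell in the guest
  multipass exec dev -- uname -a   # one-off command in the guest
}
```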
Yes. It's using the same naming as WSL which always confused me:
> Q: Why the same naming scheme as WSL?
> A: It's a proof of concept name only. If I were to distribute this as a finished piece of software, which I probably won't (see why below), I would choose a different name.
It's not confusing once you know where it came from. The NT kernel has a concept called “environment subsystems”. Win32 was, until WSL1, the only one used (a long time ago there was SFU, the grandfather of WSL). So WSL is a Windows subsystem made for Linux. Hence the name.
Note that WSL2 isn't an NT subsystem, they're just using an actual kernel now. The name stuck though.
The original NT subsystems were for OS/2, POSIX, and Win32 and were called simply the "OS/2 Subsystem", "POSIX Subsystem", and "Win32 Subsystem". If they had stuck with the sensible NT naming convention, WSL would have simply been the Linux Subsystem.
Yeah well, I'm not familiar with their naming scheme; I've managed to mostly ignore Windows in the past 10+ years. Except that legacy VB.NET project... sigh, the things I'll do for money.
I think the concept of a "subsystem" is quite integral to the NT kernel. Might have been easier if they had stolen the name from User-Mode Linux - which might be more suitable for WSL2 in some ways... But that was/is quite a different type of project.
I've wanted to do it. Mostly to test out stuff that my colleagues would need to run but haven't been able because I run Linux. So I usually have to ask colleagues to double check my work.
I am aware that running macOS on non-Apple hardware would be against their ToS.
You can just use Vagrant for this kind of setup. The benefit is that you can provision it so it's able to run your application, commit it to git, and share it with others. Pretty sure it has a QEMU adapter too if you so choose.
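The Vagrant flow being suggested, for the record. The box name is just an example, and QEMU support comes from a third-party provider plugin (e.g. vagrant-qemu) rather than Vagrant itself:

```shell
# Shareable, provisioned dev VM with Vagrant (illustrative).
vagrant_flow() {
  vagrant init ubuntu/jammy64   # writes a Vagrantfile you commit to git
  vagrant up                    # create + provision the VM
  vagrant ssh                   # shell into it
}
```

The Vagrantfile is the part you share; provisioning scripts referenced from it make the environment reproducible for colleagues.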
Podman Desktop is Apache 2.0 open source; supports Win, Mac, Lin; supports Docker Desktop plugins; and has plugins for Podman, Docker, Lima, and CRC/OpenShift Local (k8s) https://github.com/containers/podman-desktop :
Perhaps not that OT, but FWIW I just explained exactly this in a tweet:
> Mambaforge-pypy3 for Linux, OSX, Windows installs from conda-forge by default. (@condaforge builds packages with CI for you without having to install local xcode IIRC)
> Q: What, you won't be distributing this as a finished piece of software!? Why?
> A: From what I can tell (and admittedly, I haven't tested this piece of software), it already exists. Lima seems to have a lot more features (such as file sharing, something that I have not implemented) and seems to be geared towards a different application (containers).
Here's a relevant comment from a Microsoft product manager[1]:
> Because we cannot name something leading with a trademark owned by someone else.
> It's pretty much the same reason why you can't launch a commercial product called "Apple <product-name>" unless you're Apple. Or "Adobe <product-name>" unless you're Adobe. Or ... well ... you get the idea ;)
Good news, the page literally addresses that. Just because it made HN doesn't mean it's a real thing: this is barely a proof of concept, it doesn't need a good name =)
Might be even easier to provide (hopefully trustworthy and safe) `msl.qcow2` Debian images with the installation and `GRUB` modifications already done?
The WSL naming is terrible and stems from a time when Microsoft was an evil monopoly empire and used all kinds of doublespeak regarding Linux, the GPL, etc. (see the Halloween documents). The name stems from its predecessor, Windows Services for UNIX (SFU) [1]. It's bad enough Microsoft kept using the doublespeak naming scheme; please don't copy it.
What this is, is a Linux compatibility layer (running on macOS). BSDs have (or used to have) such a layer as well; at least FreeBSD did.
That README shows a great deal of appreciation for other people’s time, which should be commended.