I think you missed my point, so let me clarify: if your job is to develop software, then your computer is your production environment. It's where you run your production workload - your development. This is hopefully separate from where your customers run your software.
Since Windows 10 version 1903, the proper way to access Linux files for writing has been to invoke explorer.exe from within WSL. A transparent 9P mount is created for the working directory, and the files are accessible through a regular Explorer window.
That has changed: the WSL 2 VM now runs a 9P file server, and on the Windows side it's mounted at \\wsl$. Of course, performance is degraded; it would certainly take longer for IntelliJ to index your project.
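For concreteness, a minimal Windows-side sketch of what that \\wsl$ share gives you: the Linux filesystem shows up as a UNC path, so ordinary file APIs can read it. "Ubuntu" is just an assumed distro name here; check yours with `wsl -l`.

    # Windows-side Python: the WSL 2 distro's filesystem is served over 9P
    # and exposed at the \\wsl$ UNC path, so normal file APIs work on it.
    from pathlib import Path

    linux_home = Path(r"\\wsl$\Ubuntu\home")  # "Ubuntu" = your distro name
    for entry in linux_home.iterdir():
        print(entry.name)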
I haven't used all variants of VMs but my experience with VMs is very different from WSL2. For example:
* Smooth setup. No need to install some large 800 MB commercial MSI like VMware Workstation, download a Linux image, go through filesystem partitioning, etc.
* Well integrated. I can open a terminal and it acts like any other window on my system (meaning I don't get the window-in-a-window effect you get with a normal VM).
* My file system is mapped automatically; no need to set up Shared Folders or the like manually (see the sketch after this list).
* Better startup performance. WSL starts in a second on my computer; I've never had the same experience with full VMs. Even with something lightweight like Alpine, just starting VMware or VirtualBox takes a lot longer than starting WSL.
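To illustrate the automatic mapping from that list, here's a minimal sketch run from inside WSL, assuming the default automount of the C: drive under /mnt/c:

    # Inside WSL: Windows drives are auto-mounted under /mnt (drvfs), so
    # plain file APIs reach Windows files with no shared-folder setup at all.
    from pathlib import Path

    for entry in Path("/mnt/c/Users").iterdir():
        print(entry.name)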
Saying it is like any other VM seems just incorrect. Saying it didn't work out seems even more misguided.
> Saying it didn't work out seems even more misguided.
That may be, but the rest of your comment seems to be unrelated to the matter at hand since you merely listed some advantages of WSL2 instead of addressing the disadvantages that are causing people trouble.
I have been using multipass very successfully the past few weeks. It's a full fat VM and has an experience very similar to that of WSL. https://multipass.run/
I use it all the time. I develop software for Windows, but being able to use various Linux utilities and software is super convenient.
Many of them have Windows ports, but it's just easier when everything is already available. A few days ago I needed to run some penetration-testing software, and while I could supposedly have downloaded the code and built it on Windows myself, just installing it with apt was a lot easier.
Wouldn't it be enough to enter an invalid audience when configuring the IdP? If the audience is ignored, the sign-in flow still lets you log on, and you know the SP is broken.
Sure, but there are two reasons why that's not quite optimal and you might not want to do it in practice:
1. This tells you the SP is broken; using individual keys means it no longer matters whether the SP is broken or not. Individual keys are in your control; fixing the SP is much less likely to be. And you can just make a practice of doing it for everything, and now it's one less thing to test for.
2. That still requires a bit of testing that's somewhat annoying to set up, which most vendorsec practices don't have time for. It's also only one of dozens of things you need to test for. Ignoring audiences is super common (see the sketch at the end of this comment), but a more subtle problem is that you can sign a valid SAML assertion _for the wrong domain_, and now you can sign in as a competitor's staff.
As you hint at, having an SP that'll just self-service accept any random metadata.xml at least gives you a fighting chance :)
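To be concrete about what "ignoring audiences" means, here's a rough sketch of the SP-side check that the broken SPs skip. It runs after signature verification, and the entity ID below is a made-up example:

    # SP-side audience check, run *after* signature verification.
    # An SP that "ignores audiences" simply never does this comparison.
    import xml.etree.ElementTree as ET

    SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
    MY_ENTITY_ID = "https://sp.example.com/metadata"  # this SP's own entity ID (example value)

    def audience_ok(assertion_xml: str) -> bool:
        root = ET.fromstring(assertion_xml)
        audiences = [
            a.text.strip()
            for a in root.findall(
                ".//saml:Conditions/saml:AudienceRestriction/saml:Audience", SAML_NS
            )
            if a.text
        ]
        # Reject assertions with no audience restriction at all, and ones
        # minted for some other SP.
        return MY_ENTITY_ID in audiences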
My question was more related to it being tedious, and now you say it requires a bit of testing that's annoying to set up. Isn't testing this just a matter of changing the audience field to something incorrect and trying to sign on? That should take like 2 minutes?
If you just change the audience field, the signature will be invalid, so it might tell you the SP won't accept a bad signature, but it doesn't tell you whether the SP would accept a correct-signature-for-wrong-audience assertion. And now we've explored two states in that very big tedium space I mentioned; it still doesn't tell you anything about e.g. canonicalization bugs or cross-domain bugs. Those are much harder to test, because they require your IdP to sign specifically crafted malicious assertions, so you can't test them with your standard Okta install or whatever.
So, sure: you can test this one specific bug by replaying an assertion for a different SP. Or you can make your IdP use a new key pair for every SP, and then you're definitionally immune to the entire bug class forever, with every SP. Even if replaying the assertion takes 2 minutes, getting the tester to a place where they can exploit it takes way longer at most companies, so it's much more effective to just eliminate entire classes of bugs via policy.
TL;DR: you're right (modulo the amount of time) for this particular bug, but why bother? And if you're going to bother testing, why test for this one specific bug that's cheaper to avoid a different way? (I can think of a reason to test; but then the tedium comes in :))
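And to make the per-SP key pair point concrete, a rough sketch of why it kills replayed assertions. This is the fingerprint comparison only; the actual XML signature still has to verify against the pinned cert with whatever SAML library you already use:

    # With one IdP key pair per SP integration, the SP pins exactly one signing
    # cert. An assertion minted for a different SP is signed by a different key,
    # so it's rejected before any audience logic even runs.
    import base64
    import hashlib

    def fingerprint(x509_cert_b64: str) -> str:
        """SHA-256 fingerprint of a base64 DER cert, as carried in <ds:X509Certificate>."""
        return hashlib.sha256(base64.b64decode(x509_cert_b64)).hexdigest()

    def signer_is_pinned(assertion_cert_b64: str, pinned_cert_b64: str) -> bool:
        return fingerprint(assertion_cert_b64) == fingerprint(pinned_cert_b64)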