In WebGL you can write vertex shaders that calculate the movement of every particle in parallel on the GPU. That isn't possible with canvas.
And that just optimizes the physics. There's even more parallelization to be had in the actual drawing: a bunch of points like this can be rendered in a single draw call that reads one big buffer of positions, so drawing new positions involves just copying the array of positions from the CPU to the GPU.
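A minimal sketch of that upload-and-draw loop, assuming a WebGL context gl and an already-linked program with an a_position attribute (stepPhysicsOnCpu is a stand-in for whatever updates the particles):

    // positions: Float32Array of [x0, y0, x1, y1, ...]
    const buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    const loc = gl.getAttribLocation(program, 'a_position');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

    function frame() {
      stepPhysicsOnCpu(positions); // hypothetical CPU-side update
      gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW); // the one CPU->GPU copy
      gl.clear(gl.COLOR_BUFFER_BIT);
      gl.drawArrays(gl.POINTS, 0, positions.length / 2); // GPU rasterizes every point in parallel
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);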
But if you use both the vertex and fragment stages, the GPU can handle updating and drawing the points over time entirely on its own (parallelized across hundreds of cores), with the CPU only needed to set up the initial point positions and get things started.
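One common way to do that in WebGL2 is transform feedback: the same draw call that rasterizes the points also captures each point's updated position into a second buffer, and the two buffers swap roles every frame. A rough sketch (buffer/VAO setup omitted, names are illustrative):

    #version 300 es
    in vec2 a_position;
    in vec2 a_velocity;
    out vec2 v_position;                            // captured by transform feedback
    void main() {
      v_position = a_position + a_velocity / 60.0;  // one integration step per frame
      gl_Position = vec4(v_position, 0.0, 1.0);
      gl_PointSize = 2.0;
    }

and on the JS side:

    // before linking: tell GL which shader output to capture
    gl.transformFeedbackVaryings(program, ['v_position'], gl.SEPARATE_ATTRIBS);
    gl.linkProgram(program);

    function frame() {
      gl.bindVertexArray(readVao);                  // attributes come from the "read" buffer
      gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, writeBuf);
      gl.beginTransformFeedback(gl.POINTS);
      gl.drawArrays(gl.POINTS, 0, particleCount);   // draws and updates in one call
      gl.endTransformFeedback();
      [readVao, writeVao] = [writeVao, readVao];    // ping-pong: positions never touch the CPU
      [readBuf, writeBuf] = [writeBuf, readBuf];
      requestAnimationFrame(frame);
    }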
With canvas, all of the work, including rendering each frame, happens on the JS main thread, running on the CPU.
Here's a cool demo of how much better WebGL is at rendering a ton of 2D sprites! Be careful turning up the number of sprites with canvas—it might lock up your main thread and be difficult to undo.
They aren't doing the particle sim in WebGL. That's the WASM part.
As an aside: there are new standards in the works for getting even the CanvasRenderingContext2D rendering off of the main thread. On an HTMLCanvasElement, you can call transferControlToOffscreen(), which returns an OffscreenCanvas object. That object can then be transferred to a Worker. From there, you can create the 2D rendering context and draw to the canvas in the worker (this works for WebGL contexts as well); just make sure you don't also create a context on that canvas in the main thread.
All the drawing ops performed on the OffscreenCanvas show up in the HTMLCanvasElement automatically, without having to post a message from the Worker back to the main thread.
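In code, that looks something like this (the worker filename and message shape are just for illustration; requestAnimationFrame is available in workers wherever OffscreenCanvas is):

    // main thread
    const canvas = document.querySelector('canvas');
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('draw-worker.js');
    worker.postMessage({ canvas: offscreen }, [offscreen]); // transferred, not copied

    // draw-worker.js
    let ctx;
    onmessage = (e) => {
      ctx = e.data.canvas.getContext('2d'); // or 'webgl' / 'webgl2'
      requestAnimationFrame(draw);
    };
    function draw(t) {
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      ctx.fillRect(50 + 50 * Math.sin(t / 500), 20, 10, 10);
      requestAnimationFrame(draw); // each frame shows up on the element automatically
    }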
Unfortunately, OffscreenCanvas is still in development in Firefox, only accessible behind a flag. In my testing with the flag enabled, it also seems that Mozilla hasn't implemented 2D contexts in Workers yet; only the WebGL context is available, which is quite annoying.
I don't know the status in Safari, but Apple has historically been very slow about implementing new features.
I use it for text rendering, myself. Getting text rendering working in shader code is difficult and gives fairly low-quality results. You have to create a glyph map for all the characters you plan to render (which also means you have to know ahead of time what text you're going to render), and you're stuck with a single color for all of the text (there are some multicolor SDF-based systems, but they aren't very common).
So instead, in a Worker, I render the text in a 2D rendering context, with all the fancy bells and whistles: changing fonts on the fly, colors, transformations, and a sane API. Then I save off an ImageBitmap object from it, transfer that ImageBitmap back to the main thread, and use it as a texture on 3D geometry.
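Roughly the shape of it (sizes, font, and message format here are just for illustration):

    // in the worker: use the full 2D text API, then hand back a bitmap
    const canvas = new OffscreenCanvas(512, 128);
    const ctx = canvas.getContext('2d');
    ctx.font = 'bold 48px Georgia';
    ctx.fillStyle = '#ffcc00';
    ctx.fillText('Hello, WebGL', 10, 64);
    const bitmap = canvas.transferToImageBitmap();
    postMessage({ bitmap }, [bitmap]); // transferred, not copied

    // main thread: upload the bitmap straight into a texture
    worker.onmessage = ({ data }) => {
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, data.bitmap);
      data.bitmap.close(); // release the pixels once they're on the GPU
    };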
I also use 3D contexts in Workers to do some pre-processing on textures, like converting equirectangular skymaps into rectilinear cubemaps. The worker is set up to preload these images while the user is doing other things, then transfer the results when they're needed during a scene transition, so the transition is nearly instantaneous.
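The preload-then-transfer pattern itself is simple (convertToCubemapFaces is a placeholder for the actual WebGL conversion pass, and the message shape is made up):

    // worker: convert ahead of time, hold the results until asked for
    const ready = new Map();
    onmessage = async ({ data }) => {
      if (data.type === 'preload') {
        const blob = await (await fetch(data.url)).blob();
        const faces = convertToCubemapFaces(await createImageBitmap(blob));
        ready.set(data.url, faces); // array of ImageBitmaps, one per cube face
      } else if (data.type === 'need') {
        const faces = ready.get(data.url);
        postMessage({ url: data.url, faces }, faces); // transfer during the scene change
      }
    };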
What's the benefit of using WebGL compared to Canvas?
Does the WASM bring a performance improvement in practice, relative to the equivalent JS library?
Keep it up!