I checked the source briefly, and the problem is that it draws every single rectangle manually on every single tick. That's very inefficient. Just draw a rectangle to a Texture once, and create a bunch of sprites using that Texture. Pixi should be able to handle at least 10x the number of rectangles if you do this.
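A minimal sketch of that approach (assuming a PixiJS v5-style API; sizes and counts here are just illustrative):
const app = new PIXI.Application({ width: 800, height: 600 });
document.body.appendChild(app.view);
// Draw the rectangle once...
const g = new PIXI.Graphics();
g.beginFill(0xff0000);
g.drawRect(0, 0, 10, 10);
g.endFill();
// ...bake it into a texture, and reuse that texture for every sprite.
const boxTexture = app.renderer.generateTexture(g);
const sprites = [];
for (let i = 0; i < 10000; i++) {
  const sprite = new PIXI.Sprite(boxTexture);
  sprite.x = Math.random() * 800;
  sprite.y = Math.random() * 600;
  sprite.width = 5 + Math.random() * 20;   // random sizes via scaling, not redrawing
  sprite.height = 5 + Math.random() * 20;
  sprites.push(sprite);
  app.stage.addChild(sprite);
}
// Each tick only moves sprites; nothing is retessellated.
app.ticker.add(() => {
  for (const s of sprites) {
    s.x = (s.x + 1) % 800;
  }
});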
but is definitely slower than Two.js at drawing randomly sized rectangles.
I pushed a fix, the way it was drawing randomly sized rectangles was an unfair comparison. It is much faster than Two.js now (tried it at 10000 rectangles too) and initialises much faster also.
The comparison was unfair: it was only moving the positions of the rectangles each frame for paper.js and two.js, but for pixi.js it was redrawing them from scratch, which means it had to retessellate everything every frame.
I'm not familiar with PixiJS in any way other than knowing that it's a game engine. What is the difference between doing a single PIXI.Graphics for the whole canvas vs a PIXI.Graphics for every rectangle? Pros / cons?
I see you saw the response on the issue, for anyone else this was the response:
The golden rule is: the less you have to clear() and redraw the insides of the graphics, the faster it's going to be, as pixi.js has to convert your draw commands to triangles to pass to WebGL. If you're just moving the position of something, it can use the already calculated triangles and just add an offset before drawing, whereas previously you were generating new triangles and hard-baking the position into the triangles each frame.
Because you move each rectangle at a different rate, I used a separate graphics instance for each so that we could move them without having to redraw them.
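To illustrate the quoted point with a rough sketch (not code from the issue; it just assumes one PIXI.Graphics per rectangle):
// Slow: clear() and redraw rebuilds the triangles with the position baked in.
function slowTick(g, x, y) {
  g.clear();
  g.beginFill(0x00ff00);
  g.drawRect(x, y, 10, 10);
  g.endFill();
}
// Fast: the triangles are built once; only a transform offset changes per frame.
function fastTick(g, x, y) {
  g.x = x;
  g.y = y;
}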
At your link, on my phone, performance dips when new bunnies appear, then gets back up in a few seconds. Which is weird, since afaiu objects should be handled in exactly the same way throughout their lifetime.
I see the same on the desktop. It's probably some procedural code that is updating a data structure as bunnies are added. Just speculation - I haven't looked at the code.
I've been using this exact test for years now to judge a phone/tablet before buying/recommending - devices with the exact same SoC can differ wildly in performance.
Nowadays any phone that can't display 20k bunnies at 20FPS likely has:
1. An underpowered GPU.
2. Badly designed cooling.
3. A screen with a too high resolution.
Of course there are many more thorough and appropriate benchmarks, but this one doesn't require you to install anything and will give you an answer before you're approached by the store staff inquiring what it is that you're trying to do with the merchandise.
It’s WebGL underneath, which explains the ability to render really fast. You’re not drawing individual shapes so much as shoving data structures into the graphics card.
(That’s not a statement on how impressive this is, just a “oh that’s how it’s done.”)
One thing that I didn't realize when I first saw it was that the demo is able to render so many sprites because it's doing the simplest possible thing - rendering just a few textures. If you render a bunch of different sprites with different textures, you'll get a big FPS hit.
I see people doing this crap all the time in the middle of tight loops that do a lot of work in C#, JS, TypeScript, and justifying it on the grounds of "productivity" and "readability". It seriously gets on my nerves. Do not do this.
Functional code might look nice but often creates excess work for the GC and kills performance. We had a situation within the last week where a piece of code was blowing through 350MB of memory unnecessarily, and massively slowing down a heavy set of calculations, because of exactly this kind of issue.
> justifying it on the grounds of "productivity" and "readability". It seriously gets on my nerves. Do not do this.
On the contrary, please do this. "productivity" and "readability" are important aspects to consider when writing code, especially if someone else is going to be reading it.
When you've identified a bottleneck, feel free to write the code in the bottleneck more performantly, if necessary. But please do not sacrifice readability across the entire codebase for a couple of hot loops.
I can agree with your point in general, but [...Array(N).keys()].forEach() is not the most readable way to write "do this N times".
It creates an array of length N, but for obscure-to-most-people reasons, Array(N).forEach() doesn't work, so they Rube Goldberged their way to an array that they could call forEach() on. Their solution was to use Array#keys to get an iterable from the array. But an iterable doesn't have a .forEach() method, so they iterate the iterable into another array just to iterate it again with Array#forEach. Frankly the only thing this seems optimized for is to solve the problem without the for-loop for some reason.
The for-loop, on the other hand, is an instantly obvious solution. It's how programmers have been expressing "do this N times" for decades across languages.
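For anyone wondering what those "obscure-to-most-people reasons" are, a quick sketch:
// Array(3) is a length-3 array of holes, and forEach skips holes,
// so this callback never runs at all:
Array(3).forEach(i => console.log(i));
// Hence the workaround: spread the keys iterator into a real array first.
[...Array(3).keys()].forEach(i => console.log(i)); // 0, 1, 2
// The plain for loop expresses the same thing with no extra arrays:
for (let i = 0; i < 3; i++) {
  console.log(i);
}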
Especially when Array(N).fill(0).forEach((_, key) => ...) gives you the same exact functionality without the second eager array if you're seriously trying to avoid for loops.
Unless there's a reason to use Array.forEach such as automatic parallelization or some other SIMD-like optimization that can be done there but cannot be done in a for loop.
A lot of cargo-cult programming seems to take functional programming to mean programming with the map/reduce/forEach functions, and you end up with shitty code like [...Array(N).keys()].forEach(), just so you can somehow claim that you're doing functional programming.
I sometimes wish there was a way to demand the same level of proof for claims of readability and productivity that we demand for claims about performance.
Measuring before optimizing is of course a good idea, but it's also time consuming. There's a lot of latitude to make different reasonable choices about how you write code in the first place before you get to the measuring stage.
Should we criticize someone who reaches for a for loop because they know it doesn't allocate without proving that that matters more than someone who reaches for a map because they think it's more readable and productive without any great way of proving that that's true?
>map because they think it's more readable and productive without any great way of proving that that's true?
I mean, doing
sendUserIds(users.map(u => u.id))
is very obviously more readable than what you're forced to do in languages like Go
ids := make([]int, len(users))
for i, user := range users {
    ids[i] = user.Id
}
sendUserIds(ids)
The difference only grows once you start needing to do more operations, grouping, putting stuff into map by a certain key, etc. Many languages also have lazy sequences for such operations (e.g. https://kotlinlang.org/docs/reference/sequences.html, though they aren't always faster, it depends)
The code in the comment above, however, is not an example of this, and is actually less readable in my opinion. It seems more like the author wishing they were using a different language instead of accepting what they have in front of them.
You really don't need to measure to know that a tight loop inside a render function called at 60 fps is performance critical. The post does call this out explicitly as "in the middle of tight loops", which is a very, very different situation than code that pretty much only runs once.
This is pretty far from premature optimization even if you never measure the effect, it is entirely predictable that code like this would have unnecessarily bad performance. If you write performance sensitive code you don't allocate stuff inside a tight loop unless you can't avoid it. The consequence of this is typically that the code is more straightforward imperative with fewer abstractions, but that is not inherently less readable.
Don't do stuff like pushing rectangles to an array to have their position wrapped, instead of simply doing it right there, unless you want to test the GC.
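A sketch of what that looks like (hypothetical rects array, purely to show the allocation pattern):
// Allocates a fresh array and fresh objects every frame -- GC pressure.
function wrapPositionsSlow(rects, width) {
  return rects.map(r => ({ ...r, x: (r.x + r.vx) % width }));
}
// Mutates in place -- no per-frame allocations.
function wrapPositionsFast(rects, width) {
  for (let i = 0; i < rects.length; i++) {
    const r = rects[i];
    r.x = (r.x + r.vx) % width;
  }
}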
Yeah, by default it's accelerated anyway, at least depending on the browser and hardware. So, at least for something as basic as drawing a lot of things with the same color, engines and additional code can only make it slower.
Generally speaking, avoid GC, and avoid setting state that's already set... if you do these two things and use engines that do these two things, it's usually going to be more than fast enough :)
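The second point, as a tiny sketch (a hypothetical cache, not any particular engine's API):
// Skip redundant WebGL state changes by remembering the last bound texture.
let lastTexture = null;
function bindTextureCached(gl, texture) {
  if (texture !== lastTexture) {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    lastTexture = texture;
  }
}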
It's nice if you can avoid creating new objects, but in a recent optimization I've seen no difference between reusing objects vs immutable objects. Removing unnecessary code is always nice though.
I might be the only one but for such a simple website I found the menu confusing. When changing renderers, the order appears to change every time so I forgot which one I was previously looking at. Also, the Count is always reset back to 1000 so it's annoying to compare renderers at the highest count.
I use Paper.js in my main project, and if you want to make a 2D game or something it would be a terrible choice. It isn't fast. However, it gives you a ton of extremely useful tools (like calculating intersections between arbitrary paths) with a very well thought out API. For interactive diagram generation it's great. The performance is still a solid 60fps if you limit what's getting updated to fewer than 20 or so things at a time.
All of these are quite low-level engines. Nothing wrong with that, but there are wrappers around them, like Phaser, which uses Pixi as a backend and gives some quite useful abstractions on top of the rendering.
Yeah, I have basic culling implemented. Objects that are larger than, say, 3x their normal scale are not rendered. Likewise with those 1/100th their normal scale.
The boolean functionality in Paper.js is quite outstanding. I am not aware of any other Javascript library that features such a robust implementation of path unite/intersect/subtract/exclude operations.
Shameless plug here, but I made a quick-and-dirty perf test that does something similar in our HTML5 game engine Construct 3, and it appears to run way faster even with 20000 boxes: https://www.scirra.com/labs/boxperf/index.html
This kind of test is easy work for a well-batched renderer. You can accumulate everything into a single big typed array, copy it to a single vertex buffer, and do one call to drawElements() in WebGL, and bingo, tens of thousands of sprites drawn in one go.
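A rough sketch of that batching idea (not Construct's actual code; it assumes a WebGL context gl, a compiled program whose vertex shader reads an "a_position" attribute in pixels and converts it to clip space, and a boxes array of { x, y, w, h } objects):
const QUADS = 16000; // Uint16 indices cap a single batch near 16k quads (65535 vertices)
const positions = new Float32Array(QUADS * 8); // 4 vertices * (x, y) per quad
const indices = new Uint16Array(QUADS * 6);    // 2 triangles per quad
for (let i = 0; i < QUADS; i++) {
  const v = i * 4;
  indices.set([v, v + 1, v + 2, v + 2, v + 1, v + 3], i * 6);
}
gl.useProgram(program);
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positions.byteLength, gl.DYNAMIC_DRAW);
const indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
const aPosition = gl.getAttribLocation(program, "a_position");
gl.enableVertexAttribArray(aPosition);
gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);
function drawFrame(boxes) {
  // Accumulate every box's four corners into the one big typed array...
  for (let i = 0; i < QUADS; i++) {
    const b = boxes[i], o = i * 8;
    positions[o] = b.x;           positions[o + 1] = b.y;
    positions[o + 2] = b.x + b.w; positions[o + 3] = b.y;
    positions[o + 4] = b.x;       positions[o + 5] = b.y + b.h;
    positions[o + 6] = b.x + b.w; positions[o + 7] = b.y + b.h;
  }
  // ...copy it into the single vertex buffer and draw everything in one call.
  gl.bufferSubData(gl.ARRAY_BUFFER, 0, positions);
  gl.drawElements(gl.TRIANGLES, QUADS * 6, gl.UNSIGNED_SHORT, 0);
}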
I've got other similar performance tests that can do hundreds of thousands of sprites @ 30 FPS (on a high-end machine), which I believe are bottlenecked mainly on memory bandwidth, because I only managed to make it go faster by reducing the size of the JS objects involved.
Modern JS is ultra fast - if you have the right performant coding style.
FWIW the code in the post doesn't use sprites. I think it is done intentionally because not all of the engines support sprites. As someone mentioned in this thread if PixiJS would use sprites, it could be 10x faster.
Yes, I agree. I also found out that Two.js redraws 5k elements even a little bit faster but it takes a couple of seconds to render them for the first time.
Actually I made a switch at first, as I started with Two.js, which has three renderer types (svg, canvas and webgl). But then there was Paper.js, which only has a simple canvas renderer, so I removed the switch altogether and used the fastest renderer for every engine (webgl or canvas).
From this I conclude I should try using Two.js for my game.
I have been limited to drawing 4k neutrons for performance but Two.js might be a better solution.
I'm a total newb with GPU stuff on the web, but I am curious why these GPU frameworks aren't used to render most websites. Things such as Facebook, for instance, I imagine would be a lot snappier. Am I missing something?
Having to download an extra meg or 10 of code does not make your website responsive to start with, and it's worse if you're updating it constantly so your users have to re-download that code every few days or hours.
Support for assistive technologies disappears. A page of HTML is relatively easy to scan for text to read aloud, turn into braille, or translate to another language. A screen (not a page) of pixels is not.
Similarly, extensions all break. Extensions work because there is a known structure to the page (HTML).
UI consistency disappears. Of course pages already have this issue, but it will be much, much worse if every site rolls its own pixel-rendering GUI, because none of the standard keys will work. Ctrl/Cmd-Z for undo, Ctrl/Cmd-A for select all? Similarly, maybe the user has changed those keys or is using some other assistive device, which all works because things are standardized.
Letting the browser handle what's best for the device probably disappears. ClearType for fonts? Rendering text or SVG at the user's resolution (yes, that can be handled by the page, but will it? It's up to the site).
Password managers break including the browser's built in one. There's no text field to find so no way to know if this is the place to fill them in.
Spell checking breaks. Same as above.
Basically your site will suck for users if you do this. Some will say frameworks will come up that try to solve all of these issues, but that will just mean every page is on a different version of the framework with different bugs not yet resolved. Sounds like hell.
But don't you think there's a whole lot of apps that could benefit from being on a canvas rather than being slowed down by browser-stuff? Editors come to mind: CodeSandbox and VScode
Editors in particular are worse for some people if they can't render Unicode text properly, have poorer rendering of fonts, can't be analysed by assistive tools such as screen readers and braille displays, and don't interact with the OS and tools outside the browser the same way as HTML and native elements.
Sometimes a good compromise is to use canvas for rendering some things on a page (the way Google Sheets does), but create HTML elements on top as needed for particular behaviour.
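A minimal sketch of that overlay pattern (coordinates and element choice made up for illustration):
// The canvas does the drawing; a real <input> is dropped on top only while editing.
const input = document.createElement("input");
input.style.position = "absolute";
input.style.left = "120px"; // wherever the edited cell was drawn on the canvas
input.style.top = "40px";
document.body.appendChild(input);
input.focus();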
Outside of grid-layout, there is nothing salvageable from the CSS layout engine that cannot be done better inside a programmable view such as a canvas.
Give it 5 years or so for WebGL/WebGPU to standardize, and new UI libraries with a better layout engine and UI constructs far better than what can be built on top of HTML/CSS will show up.
Nobody thought we would take HTML/CSS as far as we have, but it has already served its purpose.
I think what we'll see in 5 years will be browsers using even more hardware acceleration/GPU acceleration for its rendering -- so it'll be entirely abstracted away from us. Obviously, direct canvas rendering will still be more performant.
I don't think anything will stop browser vendors from advancing the HTML/CSS standards, but I also believe that we will see the emergence of libraries that have a different opinion around layout declarations and core components. These libraries will benefit from browser vendors opening up the GPU with each new iteration.
Well, the browser already uses the GPU to render. The difference is how you manage state. A lot of web applications do that in a slow way by storing data in the DOM for example, or do things that require the browser to re-render the whole scene.
You could port FB to Canvas and it'd probably be faster until you add a ton of abstraction to give you what HTML does.
The performance on my phone makes me sad. PixiJS can barely hit 30 fps and the others do worse than that. FWIW, I have a Pixel 2 phone and I'm using Firefox.
Don't worry, the benchmark is not optimized and only testing a very specific thing, you can still do awesome complex stuff running at 60fps with any of these libraries :)
Paper.js looks the best on my Mac on a 4k monitor at 200% scaling. The rectangles in the others look like they are upscaled from a lower resolution. Canvas vs WebGL?
My understanding is that canvas also applies AA; the rectangle lines still look anti-aliased, just at a higher DPI.
It looks to me like the rectangles done in WebGL are drawn at the scaled resolution (1920 x 1080) vs the unscaled resolution (3840 x 2160), whereas the canvas is DPI aware and drawing at the full 4k resolution.
I would need to dig further, but basically in WebGL the rectangles are drawn to a texture at the lower resolution and then upscaled, just as a prerendered image of a rectangle at the lower DPI would be.
Edit:
Looks like paper.js is specifically HiDpi aware and it can be turned off for better performance which would be more fair when compared to the other implementations:
hidpi="off": By default, Paper.js renders into a hi-res Canvas on Hi-DPI (Retina) screens to match their native resolution, and handles all the additional transformations for you transparently. If this behavior is not desired, e.g. for lower memory footprint, or higher rendering performance, you can turn it off, by setting hidpi="off" in your canvas tag. For proper validation, data-paper-hidpi="off" works just as well.
Also PixiJS seems to support HiDpi if resolution is set properly:
// Use the native window resolution as the default resolution
// will support high-density displays when rendering
PIXI.settings.RESOLUTION = window.devicePixelRatio;
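In newer Pixi versions the equivalent can also be passed when creating the application (a sketch assuming Pixi v5 or later):
const app = new PIXI.Application({
  width: 800,
  height: 600,
  resolution: window.devicePixelRatio || 1,
  autoDensity: true // backing store scales with the DPI, CSS size stays 800x600
});
document.body.appendChild(app.view);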
https://www.goodboydigital.com/pixijs/bunnymark/