Show HN: Forma – An efficient vector-graphics renderer (github.com/google)
291 points by dragostis on Dec 16, 2022 | 58 comments



I'm very happy to see this work! The era of rendering vector graphics in GPU compute shaders is upon us, and I have no doubt we'll start seeing these in production soon, as there's just such a performance advantage over CPU rendering, and I believe trying to run vector 2D graphics through the GPU rasterization pipeline doesn't quite work.

This code is simpler than Vello (the new name for piet-gpu), focused on vector path rendering. It's also a strong demo of the power of WebGPU, while also having a performant software-only pipeline. I definitely encourage people to take a closer look.


> I believe trying to run vector 2D graphics through the GPU rasterization pipeline doesn't quite work.

I mean, it does work, it's just slower than using compute in most cases. People have even emulated Linux in the GPU rasterization pipeline [1], so the raster pipeline is clearly Turing complete :)

[1]: https://blog.pimaker.at/texts/rvc1/


Thank you for the endorsement. Looking forward to collaborating more with you on 2D graphics.


How does this compare to https://github.com/RazrFalcon/resvg ?


That's a different kind of thing, I think. It renders using tiny-skia, so probably comparing to tiny-skia would be more appropriate? It also sounds like resvg's renderer can be swapped out, so it may be possible to have it render with Forma.


I just tried it in my own renderer (which uses tiny-skia) to compare performance, but unfortunately forma is still lacking some features needed for a proper comparison: strokes, the ability to scale shapes up instead of only down (?!), and rendering without clearing.


That sounds neat.


I saw google, I saw Rust, and I was betting it was you. I lost my bet but still happy to see it!


Haven't Skia et al been GPU accelerated for ages?


Yes, Skia uses the traditional method of tessellating path geometry on the CPU and then having the GPU handle transforming and shading it as if it were 3D geometry. Compare that with what we see here, which uses compute shaders for the whole process.
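To make the contrast concrete, here is a toy sketch of the CPU-side tessellation step described above. It is a convex-only fan triangulation; real tessellators (Skia's, libtess2) also handle concave paths and curves, and the function names here are illustrative, not any library's API.

```rust
// Fan-triangulate a convex polygon into triangles that a GPU could then
// rasterize like ordinary 3D geometry. Pivot on vertex 0 and emit one
// triangle per remaining edge. Convex input only; concave paths need a
// real tessellator.
fn fan_triangulate(poly: &[(f32, f32)]) -> Vec<[(f32, f32); 3]> {
    (1..poly.len().saturating_sub(1))
        .map(|i| [poly[0], poly[i], poly[i + 1]])
        .collect()
}
```

The compute-shader approach discussed in this thread skips this step entirely: coverage is computed per pixel on the GPU instead of turning paths into triangles first.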


Mozilla peeps, is it conceivable that this could eventually get into Firefox, or is it not the right fit in some way?

There was a recent discussion about map rendering where I commented "just convert to SVG and let the browser render it", to which someone said that's too slow, to which I replied "so improve the browser's vector rendering" rather than write a high-performance renderer just for map software.


You might be interested in https://github.com/servo/pathfinder


In GIS, vector data is generally better, but the community has traditionally used rasters.


I don't understand why this isn't mentioned in the README. I wonder how they compare!


Was this conversation about OsmAnd+ by chance? Because that could use something like this.

Edit: glossed over the "browser" part. My bad.


Just five days ago, OsmAnd 4.3 for Android was released with a new, faster rendering engine that uses OpenGL.

https://osmand.net/blog/osmand-android-4-3-released/#new-fas...


No way! It is waaaay faster now. It was unbelievably unperformant before, but I just dealt with it because it was otherwise a pretty good map app, especially using OSM and no Google spyware.


I wonder why their renderer is so slow. Organic Maps is super fast and smooth on my phone, though.


Is it rendering or the database access?

I see no slowness in OSM+ rendering on Android once the data for an area is loaded, as you pan, zoom, etc. But the pause between opening a new location and having it rendered is quite noticeable.


It was the rendering. I had read up on the related issue tickets, and the answer was that the vector rendering was both not optimized, and not cached in any way.


A while ago I wanted to make some vector-based toy games for fun. I surveyed the available options and basically couldn't find any. If an engine supports vector graphics, it wants to rasterize them ASAP. If I asked online, the general answer was, "GPUs are for rasters, it gets rasterized one way or another. So just do that."

I dunno. I'm not that bright. I don't fully understand it all. But I feel like it should be easy to say, "here's some SVG files for my toy 2D game. I expect they'll scale infinitely well when I zoom the viewport" but alas it was such a migraine with PhaserJS/PixiJs, Godot, and Unity3D.

I'm excited about any sort of project to get you vector graphics as close to the metal as possible before being rasterized (ideally upon draw to the graphics buffer)


I work on a project that builds an SVG compliant vector scene graph and allows for detailed interaction via its API. Writing client code that can plug into the rendering pipeline or just altering the existing code is also fairly easy - comparatively at least.

It's at https://github.com/parasol-framework/parasol but fair warning though, it is in alpha right now and going through an overhaul.


> I'm excited about any sort of project to get you vector graphics as close to the metal as possible before being rasterized

Get a NeXT workstation and use Display Postscript?


Don’t suggest such brilliantly horrible ideas during the beginning of my three week holiday… must resist urge to wander eBay.


I hope this makes its way into web browser rendering. I've been working on a "game engine" using SVG in the browser. When I started, everything I read said don't - because performance. However, manipulating an SVG using a reactive framework (I use Vue) makes for very readable code. Our game client is meant to be a "reference" client so readability is more important than performance, which just needs to be "good enough." However, a significant boost in SVG rendering performance would be amazing.

edit: although, as I look more deeply, I may be misunderstanding and this may not be something that would benefit existing SVG browser rendering in any way...


Ahh, I also went down the SVG-for-browser-games road. The promise of vectors (resizing without loss) seemed good, right?

I ran into a bunch of issues, especially with text: when I started scaling objects with text in them, I had to add a bunch of weird CSS everywhere, and it's not very cross-browser.

I'm now back on the ThreeJS train again :/


Yeah, I doubt that it would. My impression is that it would more likely be used for Fuchsia and/or Flutter.


Please, write more about the implementation! There is so little information on this topic in general.

Like, how does sorting pixel segments result in coverage masks? Is there an accumulator somewhere that isn't mentioned?


I’ve just filed an issue (https://github.com/google/forma/issues/11) describing how, on an old GPU (AMD 7990) but using the CPU device, it fails to start.

This looks like an issue with wgpu-rs, but am I wrong in assuming that the CPU device would work regardless of the graphic card?


Nice to hear someone else still rocking the 7990


I'm curious how this relates to Skia (the library that handles cross-platform rendering for Chrome and Flutter). It seems like this could be an alternative backend, but could it also serve as a faster replacement?


forma is geared towards vector-graphics heavy use cases. A very good example would be Lottie or Rive animations or games that make use of vector graphics. Another possible use case would be places where you need a very small software renderer, like an OS's early boot.


About a decade ago I wanted to start prototyping gameplay with vector graphics, and I was dismayed at the state of the libraries available to do this efficiently. I'm glad things seem to be getting much better in this space!


How about 3D modeling programs, like FreeCAD? That's currently single threaded and doesn't use hardware acceleration so it could use the boost for sure.


Flutter is moving to using Impeller, which is the faster replacement you're talking about: https://github.com/flutter/engine/tree/main/impeller


> Line segments are transformed into pixel segments by intersecting them with the pixel grid. We developed a simple method that performs this computation in O(1) and which is run in parallel.

What's a pixel segment exactly? Just a list of pixel coordinates that intersect?


A pixel segment is a line segment that fits inside a single pixel's 1×1 box. It can start and end anywhere inside that box, and it has a compact 64-bit representation.
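Neither the exact bit layout nor the counting math is spelled out in this thread, so here is a speculative Rust sketch of both ideas: counting how many pixel segments one line segment produces (an O(1) computation, matching the README's claim), and a hypothetical u64 packing whose natural integer order groups segments by pixel and then by layer when sorted.

```rust
// A line segment produces one pixel segment per pixel-grid crossing, plus
// one. The count is O(1), which lets each pixel segment's output slot be
// derived independently (e.g. via a prefix sum) and computed in parallel.
fn pixel_segment_count(x0: f64, y0: f64, x1: f64, y1: f64) -> usize {
    let x_crossings = (x1.floor() - x0.floor()).abs() as usize;
    let y_crossings = (y1.floor() - y0.floor()).abs() as usize;
    x_crossings + y_crossings + 1
}

// Hypothetical 64-bit packing (NOT forma's actual layout): pixel y and x,
// then a layer id, then sub-pixel endpoint data in the low bits. Sorting
// the raw u64s then groups segments by pixel first and by layer within a
// pixel, which is the ordering a coverage pass would want.
fn pack(px_y: u16, px_x: u16, layer: u16, local: u16) -> u64 {
    ((px_y as u64) << 48) | ((px_x as u64) << 32) | ((layer as u64) << 16) | local as u64
}
```

Under this (assumed) key layout, a plain radix sort of the u64s does all the spatial bucketing at once.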


Ok, and when you sort these pixel-contained segments, what's the ordering?


I believe this refers to the z-ordering of pixel segments from different paths in the same pixel.


I don't know for sure, but I suspect a list of consecutive horizontal pixels (no diagonal).


This is cool. Being able to render SVG animations natively, without having to embed a browser, is awesome.

The map of Paris rendered in 40ms on my M1 Air which is crazy considering it's a 14MB file with tons of overlapping shapes.

I'm guessing supporting stroke is difficult because you would need to ultimately convert them to shapes which can travel through the pipeline.


Converting strokes to fills isn't that hard conceptually and it's what Pathfinder did. Most of the work is just supporting all the stroke features that SVG supports (joins, caps, dashes, etc.)
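As a hedged illustration of the easy core of stroke-to-fill (straight segments, butt caps, no joins or dashes — i.e. everything a real implementation like Pathfinder's must handle is deliberately omitted):

```rust
// Toy stroke-to-fill for an open polyline: offset each segment by half the
// stroke width along its unit normal on both sides, then join the two
// offset chains into one closed outline that a fill renderer can consume.
fn stroke_to_fill(points: &[(f64, f64)], width: f64) -> Vec<(f64, f64)> {
    let h = width / 2.0;
    let mut left = Vec::new();
    let mut right = Vec::new();
    for w in points.windows(2) {
        let (x0, y0) = w[0];
        let (x1, y1) = w[1];
        let (dx, dy) = (x1 - x0, y1 - y0);
        let len = (dx * dx + dy * dy).sqrt();
        let (nx, ny) = (-dy / len, dx / len); // unit normal of this segment
        left.push((x0 + nx * h, y0 + ny * h));
        left.push((x1 + nx * h, y1 + ny * h));
        right.push((x0 - nx * h, y0 - ny * h));
        right.push((x1 - nx * h, y1 - ny * h));
    }
    right.reverse();
    left.extend(right);
    left // closed outline: left side outward, right side back
}
```

A single horizontal stroke turns into a rectangle this way; the hard part the parent mentions is handling the gaps and overlaps this naive per-segment offsetting leaves at joins.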


The README mentions WebGPU - do you have a URL to a demo of this running in the browser using WebAssembly?


It's quite easy to get the CPU back-end running with WASM. GPU might work as well with Chrome Canary, but I haven't tested it yet. Using WebGPU should make this just as easy.


You don't need Canary, there's an origin trial for WebGPU. You can ship to users on Chrome Stable today.

BTW WebGPU 1.0 now has a target date of May 4 2023 to be enabled by default in Chrome Stable. But with the origin trial you don't have to wait.


Feature request: a demo URL that does that!

I'm interested enough to click on a demo, but I'm not _quite_ interested enough to figure out what hoops I would need to jump through to get that demo running myself.


Slightly curious why it's on the google github account, if it's not an official google project, or even endorsed by google? Surely this should live over on github.com/dragostis/forma then?


If you're a Google employee, there are a few different paths to 'legally' open-sourcing stuff without violating your employment contract. The simplest and fastest one is just to assign copyright to Google, use an approved license (basically anything non-copyleft), and stick it under the google/ GitHub org. It being there doesn't mean Google sponsored or authored it, just that a Googler works or worked on it. And the "not an official Google product" text is mandatory in the README.

There are other processes that allow you to retain copyright for yourself, but they require an approval process.

(I used to work at Google, and have followed this process before.)


Putting "not official Google products" under github.com/google is the dumbest policy. I wonder why they don't realize/care about this.


Yeah, I suppose they could create a separate org, "Google Open Source" and put it there, but probably they'd like to reserve the right to own and then change the "not official" to "official" whenever they please.

There are many lawyers and such at Google who think about these things more intensely than you and I do. I'm sure they have looked at all angles.

EDIT: I do find it amusing to watch HN and other forums whenever something new and interesting is dumped under the /google org and people need to be convinced it's not some radical new direction Google is taking. But I've also been wrong before. I dismissed Fuchsia as a personal hobby project that grew out of control, until it grew out of control enough that it rm -r'd everything I'd been working on for 2+ years.


I'm sure the lawyers looked at all the legal angles but I don't know if they thought about the perception. But on a scale from 1 to "what were they thinking" this is pretty minor.


It’s very similar for Khronos, the github org of which contains projects that aren’t official Khronos products, but just something related and made by one of Khronos members.


When you leave employment at Google do you typically retain your github account and contribution rights to the repo?


Most people I think just use their personal github but join the Google org. And then you get booted from it when you quit.

Commit rights, I'm actually not clear on, but by default I think you'd lose them. I believe there's likely a process for keeping them, but I don't recall what it is. The one repo I contributed to that followed this process was essentially abandoned long before I left Google.


Thanks, that's interesting. Is any production Google code written this way, or is it only Google-adjacent open source?


There are definitely things under the google github org that are used in production, though they are typically mirrored from Google3 in some way. But not sure what, if any, has its origins in a personal open source projects.


I work on https://github.com/google/pytype which is largely developed internally and then pushed to GitHub every few days. The GitHub commits are associated with the team's personal GitHub accounts. pytype is not an "official google product" insofar as the open source version is presented as-is without official Google support, but it is "production code" in the sense that it is very much used extensively within Google.


Neat. Just in time for a project I'm working on. Thanks for sharing!



