
The author would make an even stronger case if they showed the results after compression. Here are the few examples from the home page, compressed with gzip -9 (zstd performs similarly):

    tiger.svg.gz      29834
    tiger.tvg.gz      20533
    app-icon.svg.gz     613
    app-icon.tvg.gz     665
    comic.svg.gz      25063
    comic.tvg.gz      13262
    chart.svg.gz       8311
    chart.tvg.gz       5369
In most cases, even after gzip compression TVG still has a substantial lead over SVG.

This is evidence that the size improvement does not come entirely from the binary format (it would be possible to devise a binary format for SVG without changing the language and semantics), but also from the simplified graphic primitives. If it were just XML overhead, compression should mitigate most of it.
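
For anyone who wants to reproduce the measurement, here's a minimal Python sketch; the filenames are assumed to be the example files downloaded from the home page, so adjust paths as needed:

    import gzip

    # Assumed filenames: the example files downloaded from the TinyVG home page.
    files = ["tiger.svg", "tiger.tvg", "app-icon.svg", "app-icon.tvg",
             "comic.svg", "comic.tvg", "chart.svg", "chart.tvg"]

    for name in files:
        with open(name, "rb") as f:
            raw = f.read()
        # gzip -9 equivalent; zstd gives similar numbers for these files.
        print(f"{name}.gz  {len(gzip.compress(raw, compresslevel=9))}  (raw {len(raw)})")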




If you run tiger.svg through `svgcleaner`, you get a 57614-byte file (with no visual difference[1]) that compresses to 20924 with gzip or 19529 with zstd.

`svgo` gives 61698 / 21642 gz / 20228 zstd, again no visual difference.

Not really a need for TVG if you can clean your SVG as part of your deployment pipeline.

(You can go even further and trim the coordinates to 2 places of precision, which ends up with 52763 / 18299 / 17114 zstd at the expense of still largely invisible differences, but I've had SVGs where this level of cleaning did materially affect the output.)

[1] https://imgur.com/a/2b5CsPQ
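
For anyone curious what the precision trimming amounts to, here's a rough Python sketch of the idea. This is not how svgcleaner or svgo actually implement it, and it's lossy enough that you should diff the rendered output before shipping:

    import re
    import sys

    def round2(match: re.Match) -> str:
        # Round a decimal literal to 2 places and drop trailing zeros.
        return f"{float(match.group(0)):.2f}".rstrip("0").rstrip(".")

    svg = open(sys.argv[1], encoding="utf-8").read()
    # Only touches plain decimals; integers and exponent notation are left alone.
    trimmed = re.sub(r"-?\d+\.\d+", round2, svg)
    open(sys.argv[1] + ".prec2.svg", "w", encoding="utf-8").write(trimmed)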


I came here to say the same thing.

Somebody recently pointed me at a nice online GUI for svgo, so you can try it for yourself without installing anything: https://jakearchibald.github.io/svgomg/


I'm not sure, but it seems svgcleaner can remove unused and invisible graphical elements[1]. I don't know if TinyVG preserves them, but if it does, it's not a fair comparison.

Did you try converting svgcleaner processed SVG to a TVG?

[1] https://github.com/RazrFalcon/svgcleaner


> Did you try converting svgcleaner processed SVG to a TVG?

I would, but I can't build the SDK (it gives some Zig error about failing to add a package), and the darwin-arm downloads don't include `svg2tvg`.

If I use the darwin-x86 download, `svg2tvgt` can't convert the file - `UnitRangeException: NaN is out of range when encoded with scale 1`.

I guess it doesn't really parse all SVGs.


If the goal is smaller files, then it's a fair comparison, and SVG with SVGCleaner wins.

Maybe we need a TVGCleaner too.


While you're not wrong, I'm gonna put my graphic designer hat back on for the first time since high school and point out that sometimes you _do_ want those invisible elements still there, especially if you're gonna want to do further editing on the file later on.


> especially if you're gonna want to do further editing on the file later on.

I think you'd generally only use the cleaning / optimising step when deploying / packaging the asset - you'd leave the original as, well, the original for further editing (and to take advantage of better optimisations if they come about.)


Very true, and I'd expect graphic designers and most devs to know that.

I've worked with enough people who only had the optimized assets because "Well optimized is better, right?" [0] that I thought it was worth pointing out.

[0] I was working on some web stuff for them (a small local company) and they were curious if I could also do some graphics work.


You're right. The page mentions comparisons with optimized SVGs, but I didn't realize that the downloadable examples were not in fact optimized.


We are starting to miss the point of TinyVG with this discussion. The point is a simplified standard, so we don't end up with feature-incomplete implementations. I mean, just look at all the stuff Adobe Illustrator can do but browsers can't. Final size is a nice-to-have that comes with a minimalistic approach to the standard.

All of this of course at the risk of https://imgs.xkcd.com/comics/standards.png


What's the point if it's an output format and the authoring uses SVG? Then the lossy bit won't be the rendering but the SVG-to-TVG conversion.


Or better yet, have both brotli and gzip in your pipeline.


Just tested this and brotli gives an extra 1-1.5k saving over the zstd versions (18329 vs 19529 for tiger-clean.svg, 15749 vs 17114 for tiger-prec2.svg)


> If it was just XML overhead, compression should mitigate most of it.

Strong enough compression should mitigate most of it, but DEFLATE (and consequently zip and gzip) is not a strong enough algorithm.

For example, let's imagine that a particular format is available both in JSON and in a binary format and is entirely composed of objects, arrays and ASCII strings, so the binary version doesn't benefit much from a compact encoding. Now consider a JSON `[{"field1":...,"field2":...,...},...]` with lots of `fieldN` strings duplicated. DEFLATE will be able to figure out that `","fieldN":"` fragments occur frequently and can be shortened into backreferences, but each backreference still takes at least two bits, and normally a lot more (because you have to distinguish the different `","fieldN":"` fragments), so they translate to pure overhead compared to the compressed binary.
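
A toy experiment (Python, with made-up data) that isolates this: the same records as JSON with repeated keys vs. a key-less, length-prefixed binary layout, both run through DEFLATE. The exact numbers depend on the data, but the repeated keys never compress down to nothing:

    import json, random, string, zlib

    random.seed(0)

    def word():
        return "".join(random.choices(string.ascii_lowercase, k=8))

    # 10k objects whose values are plain ASCII strings, so the binary encoding
    # gains nothing from compact value representation -- only from dropping keys.
    records = [{"field1": word(), "field2": word(), "field3": word()}
               for _ in range(10_000)]

    as_json = json.dumps(records, separators=(",", ":")).encode()
    # Binary stand-in: fixed field order, 1-byte length prefix per value, no keys.
    as_bin = b"".join(bytes([len(v)]) + v.encode()
                      for r in records
                      for v in (r["field1"], r["field2"], r["field3"]))

    print("json  ", len(as_json), "->", len(zlib.compress(as_json, 9)))
    print("binary", len(as_bin), "->", len(zlib.compress(as_bin, 9)))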

Modern compression algorithms mainly deal with this pattern in two ways, possibly both at once. The backreference can be encoded in a fractional number of bits, implying the use of arithmetic/range/ANS coding (cf. Zstandard). Or you can recognize different token distributions for different contexts (cf. Brotli). These approaches do come with non-negligible computational overhead, though, and only became practical recently with newer techniques.


I did mention that Zstd numbers are very close to gzip numbers for this data.


I see much more gain in TinyVG in the CPU usage needed to decode and render an image. XML is definitely not the most efficient way to encode data that is not meant for human consumption.


That would be what I’d care about the most. Smaller file size, but not an order of magnitude difference? Meh.

Easier for the browser to process? Well that’s going to have a tonne of useful ramifications.

Honestly that’s what annoys me about web services in general. (Rant mode enabled). The human readability aspect is moot because conversion is cheap, yet everything these days is built on XML, JSON and YAML.

The increasing use of middleman services whose entire job is to parse these formats into native types, then process the data, then serialise back into the same inefficient format, makes the issue a whole lot worse.

I mean, sure, this stuff is used so heavily that some amazing work has gone into parsing with SIMD at ridiculously high rates, but that's still orders of magnitude more time and effort than a CPU would need to do the same thing with a native format. Even for things like strings, an actually-sensible representation like [length][body] would save all kinds of hassle by avoiding processing delimiters, searching for quotes, etc., and would make loading a value as simple as allocating the ALREADY KNOWN size and reading it.
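
A minimal sketch of the [length][body] idea (Python, purely illustrative): the reader slices exactly the bytes it was told about instead of scanning for quotes and escapes.

    import struct

    def write_str(s: str) -> bytes:
        body = s.encode("utf-8")
        return struct.pack("<I", len(body)) + body   # 4-byte length prefix, then raw bytes

    def read_str(buf: bytes, offset: int = 0) -> tuple[str, int]:
        (n,) = struct.unpack_from("<I", buf, offset)  # size known before touching the body
        start = offset + 4
        return buf[start:start + n].decode("utf-8"), start + n

    payload = write_str('no escaping needed, even with "quotes" inside')
    value, _ = read_str(payload)
    print(value)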

Anyway, that’s my rant. The more parse-friendly formats out there the better.


XML, of course, is the opposite: it is a rather good way for humans to create data meant for computer consumption. Once the data are laid out, they can be transformed into a more efficient machine form in the same way a program is compiled, but for the web this is rarely done for any format, including pure machine-to-machine interaction. E.g. JSON is mostly used for machine-to-machine exchange and it is far from being efficient for this.



