
I lost all respect for Proton. They've been running a ragebait ad campaign on Facebook (maybe also on other media, I don't know), with the rage targeted especially at Google, spreading misinformation and hate.

Could you share more details or links about that?

I think this was the main reason (from the linked article) LOL:

"Brotli is a compression algorithm developed by Google."

They have no idea about Zstandard or ANS/FSE, comparing it with LZ77.

Sheer incompetence.


I can’t imagine the people actually doing the technical work don’t know about Zstandard.

EDIT: Something weird is going on here. When compressing zstd in parallel it produces the garbage results seen here, but when compressing on a single core, it produces result competitive with Brotli (37M). See: https://news.ycombinator.com/item?id=46723158

I just took all PDFs I had in my downloads folder (55, totaling 47M). These are invoices, data sheets, employment contracts, schematics, research reports, a bunch of random stuff really.

I compressed them all with 'zstd --ultra -22', 'brotli -9', 'xz -9' and 'gzip -9'. Here are the results:

    +------+------+-----+------+--------+
    | none | zstd | xz  | gzip | brotli |
    +------+------+-----+------+--------+
    | 47M  | 45M  | 39M | 38M  | 37M    |
    +------+------+-----+------+--------+
Here's a table with all the files:

    +------+------+------+------+--------+
    | raw  | zstd | xz   | gzip | brotli |
    +------+------+------+------+--------+
    | 12K  | 12K  | 12K  | 12K  | 12K    |
    | 20K  | 20K  | 20K  | 20K  | 20K    | x5
    | 24K  | 20K  | 20K  | 20K  | 20K    | x5
    | 28K  | 24K  | 24K  | 24K  | 24K    |
    | 28K  | 24K  | 24K  | 24K  | 24K    |
    | 32K  | 20K  | 20K  | 20K  | 20K    | x3
    | 32K  | 24K  | 24K  | 24K  | 24K    |
    | 40K  | 32K  | 32K  | 32K  | 32K    |
    | 44K  | 40K  | 40K  | 40K  | 40K    |
    | 44K  | 40K  | 40K  | 40K  | 40K    |
    | 48K  | 36K  | 36K  | 36K  | 36K    |
    | 48K  | 48K  | 48K  | 48K  | 48K    |
    | 76K  | 128K | 72K  | 72K  | 72K    |
    | 84K  | 140K | 84K  | 80K  | 80K    | x7
    | 88K  | 136K | 76K  | 76K  | 76K    |
    | 124K | 152K | 88K  | 92K  | 92K    |
    | 124K | 152K | 92K  | 96K  | 92K    |
    | 140K | 160K | 100K | 100K | 100K   |
    | 152K | 188K | 128K | 128K | 132K   |
    | 188K | 192K | 184K | 184K | 184K   |
    | 264K | 256K | 240K | 244K | 240K   |
    | 320K | 256K | 228K | 232K | 228K   |
    | 440K | 448K | 408K | 408K | 408K   |
    | 448K | 448K | 432K | 432K | 432K   |
    | 516K | 384K | 376K | 384K | 376K   |
    | 992K | 320K | 260K | 296K | 280K   |
    | 1.0M | 2.0M | 1.0M | 1.0M | 1.0M   |
    | 1.1M | 192K | 192K | 228K | 200K   |
    | 1.1M | 2.0M | 1.1M | 1.1M | 1.1M   |
    | 1.2M | 1.1M | 1.0M | 1.0M | 1.0M   |
    | 1.3M | 2.0M | 1.1M | 1.1M | 1.1M   |
    | 1.7M | 2.0M | 1.7M | 1.7M | 1.7M   |
    | 1.9M | 960K | 896K | 952K | 916K   |
    | 2.9M | 2.0M | 1.3M | 1.4M | 1.4M   |
    | 3.2M | 4.0M | 3.1M | 3.1M | 3.0M   |
    | 3.7M | 4.0M | 3.5M | 3.5M | 3.5M   |
    | 6.4M | 4.0M | 4.1M | 3.7M | 3.5M   |
    | 6.4M | 6.0M | 6.1M | 5.8M | 5.7M   |
    | 9.7M | 10M  | 10M  | 9.5M | 9.4M   |
    +------+------+------+------+--------+
Zstd is surprisingly bad on this data set. I'm guessing it struggles with the already-compressed image data in some of these PDFs.

Going by only compression ratio, brotli is clearly better than the rest here and zstd is the worst. You'd have to find some other reason (maybe decompression speed, maybe spec complexity, or maybe you just trust Facebook more than Google) to choose zstd over brotli, going by my results.

I wish I could share the data set for reproducibility, but I obviously can't just share every PDF I happened to have laying around in my downloads folder :p
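
For anyone who wants to try the same comparison on their own PDFs, it boils down to a loop like this (a rough sketch of what I did; the output directory names are arbitrary, and the totals here use 'wc -c', i.e. bytes):

    $ mkdir -p zst br xz gz
    $ for x in *.pdf; do zstd --ultra -22 <"$x" >"zst/$x.zst"; brotli -9 <"$x" >"br/$x.br"; xz -9 <"$x" >"xz/$x.xz"; gzip -9 <"$x" >"gz/$x.gz"; done
    $ cat *.pdf | wc -c    # raw total, in bytes
    $ for d in zst br xz gz; do printf '%s\t' "$d"; cat "$d"/* | wc -c; done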


Turns out that these numbers are caused by APFS weirdness. I used 'du' to get them, which reports the size on disk, and that is weirdly bloated for some reason when compressing in parallel. I should've used 'du -A', which reports the apparent size.

Here's a table with the correct sizes, reported by 'du -A':

    +---------+---------+--------+--------+--------+
    |  none   |  zstd   |   xz   |  gzip  | brotli |
    +---------+---------+--------+--------+--------+
    | 47.81M  | 37.92M  | 37.96M | 38.80M | 37.06M |
    +---------+---------+--------+--------+--------+
These numbers are much more impressive. Still, Brotli has a slight edge.

> | 1.1M | 2.0M | 1.1M | 1.1M | 1.1M |

Something is going terribly wrong with `zstd` here, where it is reported to compress a file of 1.1MB to 2MB. Zstd should never grow the file size by more than a very small percent, like any compressor. Am I interpreting it correctly that you're doing something like `zstd -22 --ultra $FILE && wc -c $FILE.zst`?

If you can reproduce this behavior, can you please file an issue with the zstd version you are using, the commands used, and if possible the file producing this result.


Okay now this is weird.

I can reproduce it just fine ... but only when compressing all PDFs simultaneously.

To utilize all cores, I ran:

    $ for x in *.pdf; do zstd <"$x" >"$x.zst" --ultra -22 & done; wait
(and similar for the other formats).

I ran this again and it produced the same 2M file from the source 1.1M file. However, when I run without parallelization:

    $ for x in *.pdf; do zstd <"$x" >"$x.zst" --ultra -22; done
That one file becomes 1.1M, and the total size of *.zst is 37M (competitive with Brotli, which is impressive given how much faster it is to decompress).

What's going on here? Surely '-22' disables any adaptive compression stuff based on system resource availability and just uses compression level 22?


Yeah, `--adaptive` will enable adaptive compression, but it isn't enabled by default, so shouldn't apply here. But even with `--adaptive`, after compressing each block of 128KB of data, zstd checks that the output size is < 128KB. If it isn't, it emits an uncompressed block that is 128KB + 3B.

So it is very central to zstd that it will never emit a block that is larger than 128KB+3B.
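
A quick way to sanity check that bound yourself (just a sketch; /dev/urandom stands in for incompressible data):

    $ head -c 1048576 /dev/urandom > rand.bin   # 1 MiB of incompressible data
    $ zstd -19 rand.bin -o rand.bin.zst
    $ wc -c rand.bin rand.bin.zst
    # rand.bin.zst should come out only tens of bytes larger than rand.bin
    # (frame header plus ~3 bytes per 128KB raw block), never anywhere near 2x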

I will try to reproduce, but I suspect that there is something unrelated to zstd going on.

What version of zstd are you using?


'zstd --version' reports: "** Zstandard CLI (64-bit) v1.5.7, by Yann Collet **". This is zstd installed through Homebrew on macOS 26 on an M1 Pro laptop. Also of interest, I was able to reproduce this with a random binary I had in /bin: https://floss.social/@mort/115940378643840495

I was completely unable to reproduce it on my Linux desktop though: https://floss.social/@mort/115940627269799738


I've figured out the issue. Use `wc -c` instead of `du`.

I can repro on my Mac with these steps with either `zstd` or `gzip`:

    $ rm -f ksh.zst
    $ zstd < /bin/ksh > ksh.zst
    $ du -h ksh.zst
    1.2M ksh.zst
    $ wc -c ksh.zst
     1240701 ksh.zst
    $ zstd < /bin/ksh > ksh.zst
    $ du -h ksh.zst
    2.0M ksh.zst
    $ wc -c ksh.zst
     1240701 ksh.zst
    
    $ rm -f ksh.gz
    $ gzip < /bin/ksh > ksh.gz
    $ du -h ksh.gz
    1.2M ksh.gz
    $ wc -c ksh.gz
     1246815 ksh.gz
    $ gzip < /bin/ksh > ksh.gz
    $ du -h ksh.gz
    2.1M ksh.gz
    $ wc -c ksh.gz
     1246815 ksh.gz
When a file is overwritten, the on-disk size is bigger. I don't know why. But you must have run zstd's benchmark twice, and every other compressor's benchmark once.

I'm a zstd developer, so I have a vested interest in accurate benchmarks, and finding & fixing issues :)


Interesting!

It doesn't seem to be only about overwriting; I can be in a directory without any .zst files and run the command to compress 55 files in parallel, and it's still 45M according to 'du -h'. But you're right, 'wc -c' shows 38809999 bytes regardless of whether 'du -h' shows 45M after a parallel compression or 38M after a sequential compression.

My mental model of 'du' was basically that it gives a size accurate to the nearest 4k block, which is usually accurate enough. Seems I have to reconsider. Too bad there's no standard alternative which has the interface of 'du' but with byte-accurate file sizes...
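
For what it's worth, 'stat' plus a bit of awk gets byte-accurate totals, though it's not as convenient as 'du' (a small sketch; the first line is BSD/macOS stat, the second is the GNU equivalent):

    $ stat -f %z *.zst | awk '{s+=$1} END {printf "%d bytes (%.1f MiB)\n", s, s/1048576}'
    $ stat -c %s *.zst | awk '{s+=$1} END {printf "%d bytes (%.1f MiB)\n", s, s/1048576}'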


Yeah, it isn't quite that simple. E.g. `/bin/ksh` reports 1.4MB, but it is actually 2.4MB. Initially, I thought it was because the file was sparse, but there are only 493KB of zeros. So something else is going on. Perhaps some filesystem-level blocks are deduped from other files? Or APFS has transparent compression? I'm not sure.

It does still seem odd that APFS is reporting a significantly larger disk-size for these files. I'm not sure why that would ever be the case, unless there is something like deferred cleanup work.


Ross Burton on Mastodon suggests that it might be deduplication; when writing sequentially, later files can re-use blocks from earlier files, while that isn't the case as much when writing in parallel. That seems plausible enough to me.

I've concluded that this can't be the reason. It'd only result in an error where the size reported by 'du' is smaller than the apparent size (aka the number of bytes reported by 'wc -c'). What we see here is that the size reported by 'du' is almost twice as large as the number of bytes. That can't be the result of deduplication.

I'll chalk it up to "some APFS weirdness".


doesn't zstd cap out at compression level 19?

From the man page:

    --ultra: unlocks high compression levels 20+ (maximum 22), using a lot more memory.
Regardless, this reproduces with other random files and with '-9' as the compression level. I made a Mastodon post about it here: https://floss.social/@mort/115940378643840495

If you're worried about double-compression of image data, you can uncompress all images by using qpdf:

    qpdf --stream-data=uncompress in.pdf out.pdf
The resulting file should compress better with zstd.
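
If you want to check whether that actually helps on a given file, a quick before/after comparison might look like this (a sketch; the file names are placeholders):

    $ zstd -19 in.pdf -o in.pdf.zst
    $ qpdf --stream-data=uncompress in.pdf out.pdf
    $ zstd -19 out.pdf -o out.pdf.zst
    $ wc -c in.pdf.zst out.pdf.zst   # compressed size with and without qpdf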

Why not use a more widespread compression algorithm (e.g. gzip), considering that Brotli barely performs better at all? Sounds like a pain for portability.

I'm not sold on the idea of adding compression to PDF at all; I'm not convinced that the space savings are worth breaking compatibility with older readers. Especially when you consider that you can just compress it in transit with e.g. HTTP's 'Content-Encoding' without any special PDF reader support. (You can even use 'Content-Encoding: br' for brotli!)
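
For example, you can check whether a server already does this for a given PDF with plain curl (a sketch; the URL is a placeholder):

    $ curl -s -D - -o /dev/null -H 'Accept-Encoding: br, gzip, zstd' https://example.com/doc.pdf | grep -i '^content-encoding'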

If you do wanna change PDF backwards-incompatibly, I don't think there's a significant advantage to choosing gzip to be honest, both brotli and zstd are pretty widely available these days and should be fairly easy to vendor. But yeah, it's a slight advantage I guess. Though I would expect that there are other PDF data sets where brotli has a larger advantage compared to gzip.

But what I really don't get is all the calls to use zstd instead of brotli and treating the choice to use brotli instead of zstd as some form of Google conspiracy. (Is Facebook really better?)


> But what I really don't get is all the calls to use zstd instead of brotli and treating the choice to use brotli instead of zstd as some form of Google conspiracy. (Is Facebook really better?)

I may dislike Google, but my support of JPEG XL and Zstd has nothing to do with the competing tech being Google's at all. I simply think JPEG XL and Zstd are better technology.


Could you add compression and decompression speeds to your table?

I just did some interactive shell loops and globs to compress everything and output CSV which I processed into an ASCII table, so I don't exactly have a pipeline I can modify and re-run the tests with compression speeds added ... but I can run some more interactive shell-glob-and-loop-based analysis to give you decompression speeds:

    ~/tmp/pdfbench $ hyperfine --warmup 2 \
    'for x in zst/*; do zstd -d >/dev/null <"$x"; done' \
    'for x in gz/*; do gzip -d >/dev/null <"$x"; done' \
    'for x in xz/*; do xz -d >/dev/null <"$x"; done' \
    'for x in br/*; do brotli -d >/dev/null <"$x"; done'
    Benchmark 1: for x in zst/*; do zstd -d >/dev/null <"$x"; done
      Time (mean ± σ):     164.6 ms ±   1.3 ms    [User: 83.6 ms, System: 72.4 ms]
      Range (min … max):   162.0 ms … 166.9 ms    17 runs
    
    Benchmark 2: for x in gz/*; do gzip -d >/dev/null <"$x"; done
      Time (mean ± σ):     143.0 ms ±   1.0 ms    [User: 87.6 ms, System: 43.6 ms]
      Range (min … max):   141.4 ms … 145.6 ms    20 runs
    
    Benchmark 3: for x in xz/*; do xz -d >/dev/null <"$x"; done
      Time (mean ± σ):     981.7 ms ±   1.6 ms    [User: 891.5 ms, System: 93.0 ms]
      Range (min … max):   978.7 ms … 984.3 ms    10 runs
    
    Benchmark 4: for x in br/*; do brotli -d >/dev/null <"$x"; done
      Time (mean ± σ):     254.5 ms ±   2.5 ms    [User: 172.9 ms, System: 67.4 ms]
      Range (min … max):   252.3 ms … 260.5 ms    11 runs
    
    Summary
      for x in gz/*; do gzip -d >/dev/null <"$x"; done ran
        1.15 ± 0.01 times faster than for x in zst/*; do zstd -d >/dev/null <"$x"; done
        1.78 ± 0.02 times faster than for x in br/*; do brotli -d >/dev/null <"$x"; done
        6.87 ± 0.05 times faster than for x in xz/*; do xz -d >/dev/null <"$x"; done
As expected, xz is super slow. Gzip is fastest, zstd being somewhat slower, brotli slower again but still much faster than xz.

    +-------+-------+--------+-------+
    | gzip  | zstd  | brotli | xz    |
    +-------+-------+--------+-------+
    | 143ms | 165ms | 255ms  | 982ms |
    +-------+-------+--------+-------+
I honestly expected zstd to win here.

Thanks a lot. Interestingly Brotli’s author mentioned here that zstd is 2× faster at decompressing, which roughly matches your numbers:

https://news.ycombinator.com/item?id=46035817

I’m also really surprised that gzip performs better here. Is there some kind of hardware acceleration or the like?


Zstd should not be slower than gzip to decompress here. Given that it has inflated the files to be bigger than the uncompressed data, it has to do more work to decompress. This seems like a bug, or somehow measuring the wrong thing, and not the expected behavior.

It seems like zstd is somehow compressing really badly when many zstd processes are run in parallel, but works as expected when run sequentially: https://news.ycombinator.com/item?id=46723158

Regardless, this does not make a significant difference. I ran hyperfine again against a 37M folder of .pdf.zst files, and the results are virtually identical for zstd and gzip:

    +-------+-------+--------+-------+
    | gzip  | zstd  | brotli | xz    |
    +-------+-------+--------+-------+
    | 142ms | 165ms | 269ms  | 994ms |
    +-------+-------+--------+-------+
Raw hyperfine output:

    ~/tmp/pdfbench $ du -h zst2 gz xz br
     37M    zst2
     38M    gz
     38M    xz
     37M    br
    
    ~/tmp/pdfbench $ hyperfine ...
    Benchmark 1: for x in zst2/*; do zstd -d >/dev/null <"$x"; done
      Time (mean ± σ):     164.5 ms ±   2.3 ms    [User: 83.5 ms, System: 72.3 ms]
      Range (min … max):   162.3 ms … 172.3 ms    17 runs
    
    Benchmark 2: for x in gz/*; do gzip -d >/dev/null <"$x"; done
      Time (mean ± σ):     142.2 ms ±   0.9 ms    [User: 87.4 ms, System: 43.1 ms]
      Range (min … max):   140.8 ms … 143.9 ms    20 runs
    
    Benchmark 3: for x in xz/*; do xz -d >/dev/null <"$x"; done
      Time (mean ± σ):     993.9 ms ±   9.2 ms    [User: 896.7 ms, System: 99.1 ms]
      Range (min … max):   981.4 ms … 1007.2 ms    10 runs
    
    Benchmark 4: for x in br/*; do brotli -d >/dev/null <"$x"; done
      Time (mean ± σ):     269.1 ms ±   8.8 ms    [User: 176.6 ms, System: 75.8 ms]
      Range (min … max):   261.8 ms … 287.6 ms    10 runs

Ah, I understand. In this benchmark, Zstd's decompression speed is 284 MB/s, and Gzip's is 330 MB/s. This benchmark is likely dominated by file IO for the faster decompressors.

On the incompressible files, I'd expect decompression of any algorithm to approach the speed of `memcpy()`, and I would generally expect zstd's decompression speed to be faster. For example, on an x86 core running at 2GHz, Zstd decompresses a file at 660 MB/s, and on my M1 at 1276 MB/s.

You could measure locally either using a specialized tool like lzbench [0], or for zstd by just running `zstd -b22 --ultra /path/to/file`, which will print the compression ratio, compression speed, and decompression speed.

[0] https://github.com/inikep/lzbench


Copilot is the stupidest of them all. And I suspect the UK police used either a free or a cheap version, which is even more stupid.

The best decision for Google happened about 10 years ago, when they started manufacturing their own silicon for crunching neural nets. Whether they had a really good crystal ball back then, smart people, a time machine, or just luck, it's paying off for them now. They don't need to participate in the Ponzi scheme that OpenAI, Nvidia and Microsoft created, and they don't need to wait in line to buy Nvidia cards.


It had to have been launched longer ago than that, because their first public-facing, TPU-using generative product was Inbox Smart Reply, which launched more than 10 years ago. Add to that however much time had to pass before they had the hardware in production. I think the genesis of the project must have been 12-15 years ago.


The Acquired podcast recently did a nice episode on the history of AI at Google, going back all the way to when they were trying to do "I'm Feeling Lucky", early versions of Translate, etc. All of that laid the groundwork for adding AI features to Google and running them at Google scale. That started early in Google's history, when they still did everything on CPUs.

The transition to using GPU accelerated algorithms at scale started happening pretty early in Google around 2009/2010 when they started doing stuff with voice and images.

This started with Google just buying a few big GPUs for their R&D and then suddenly appearing as a big customer for Nvidia, who up to then had no clue that they were going to be an AI company. The internal work on TPUs started around 2013. They deployed the first versions around 2015 and have been iterating on those since then. Interestingly, OpenAI was founded around the same time.

OpenAI has a moat as well, in terms of brand recognition, diversified hardware supplier deals, and funding. Nvidia is no longer the only game in town; Intel and AMD are in scope as well. Google's TPUs give them a short-term advantage, but hardware capabilities are becoming a commodity long term. OpenAI and Google need to demonstrate value to end users, not cost optimizations; that is where the many billions in AI subscription spending are going to go. Google might be catching up, but OpenAI is the clear leader in terms of paid subscriptions.

Google has been chasing different products for the last fifteen years, always trying to catch up with the latest and greatest in messaging, social networking, and now AI features. They are doing a lot of copycat products, not a lot of original ones. It's not a safe bet that this will go differently for them this time.


But cost is critical. It's been shown that customers are willing to pay around $20/month, no matter how much the underlying cost is to the provider.

Google can serve GenAI almost an order of magnitude more cheaply than ChatGPT. Long term, this will be a big competitive advantage for them. Look at their very generous free tier compared to others. And the products are not subpar; they do compete on quality. OpenAI had the early mover advantage, but it's clear the crowd that is willing to pay for these services is not very sticky, and churn is really high when a new model is released. It's one of the more competitive markets.


I don't even know if it amounts to $20. If you already pay for Google One the marginal cost isn't that much. And if you are all in on Google stuff like Fi, or Pixel phones, YouTube Premium, you get a big discount on the recurring costs.


About 12. First deployed mid 2015.


Nobody wants to use Copilot voluntarily, so it's going to be shoved down Microsoft customers' throats.


Losing them in the process.

The Office 365 app suddenly is not called that anymore. It's called Copilot. When you open it, it just shows a chat. No files, no Word, no documents. You have to try hard to find your files again.

So suddenly an app that you used to edit Word documents and print PDFs with is completely gone, with no warning. Word doesn't exist. Even Office doesn't exist :D How are clients supposed to navigate that shitshow?


That's nuts.

I wonder if all this shoving of AI down people's throats could trigger a bit of a backlash against vendor software updates / proprietary software in general. There's this huge infrastructure of Windows Update, Chrome auto-updates, app stores and SaaS that predated and enabled all this... and people accepted it when they were getting bugfixes and security updates out of it, but now it's being used to take away the features they wanted and replace them with worse and worse versions of crapware.

All of a sudden... the free software world of updating when _you_ want the new version, and being able to fork the old version if you want, starts to look pretty great.


I'm trying to understand why they think 30 years of brand building should be discarded.


It's over 40 years now.

I just found that it's called "myopia" in English, while trying to find how short-sightedness is spelled.

Multiple generations knew what Word and Office were. That beats even the Twitter rename fiasco.


Also Excel is now called Incel (per Reddit, ymmv).


Did you forget to include the punchline "often interprets something else as dates"?


They're supposed to chat, not navigate, no?


I was at Microsoft in 2007. We were told to say "Bing it" but everyone was using Google to look stuff up for work.


Our company at the time forced us to use Bing as the default search engine, so people started searching "google" on Bing just to get back to Google. We were told "they're both search engines, just use this one", just like a person who bought an iPhone from a dollar store.


Good luck. It cannot even preserve unsaved files across restarts. https://github.com/zed-industries/zed/issues/15098


I refuse to enter the website until it implements https. A free Let's Encrypt certificate will do. Otherwise I don't even know if I'm reading what the author published on the site, or what a man-in-the-middle provided me.


It'd end when we implement a next-generation IP addressing scheme. I'm not a very big fan of IPv6 though; I'd prefer a 64-bit address format. IPv6 would only promote incautious distribution, which would again result in address space exhaustion, more abuse, and increased cybercrime.


Interesting. What about ipv6 don't you like, and why would a 64-bit scheme remedy it?

>IPv6 would only promote incautious distribution which would again result in address space exhaustion

There are 2^128 IPv6 addresses (roughly 3.4 × 10^38), vastly more than there are grains of sand on Earth. Exhaustion won't be a concern for generations.
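
For scale, a quick check with bc comparing a 64-bit space to the IPv6 space:

    $ echo '2^64' | bc
    18446744073709551616
    $ echo '2^128' | bc
    340282366920938463463374607431768211456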

>more abuse and increased cybercrime.

IP address-based mitigations are already not effective with v4, can you talk about why v6 makes this worse?


It's going to take those conservative netadmins another 10 to 20 years to learn that HTTP/3 or QUIC works over UDP and that it needs to be enabled. So... happy buffering and watching spinners until then.


Security. I know it's boring for most, but important for those who need to handle cybersecurity issues.

