
What a shit take. JXL did get plenty of favorable responses on HN before Google removed it, for reasons they never applied to their own formats. And FF did get plenty of complaints for not supporting JXL, but those were often shut down with the opposite variant of your take.



As I work with codecs, I've been following the situation quite closely, and the attention to JXL was pretty much zero until Google decided not to support it.

Moreover, this whole topic is about a comparison over a SINGLE IMAGE. Anyone who has ever worked with codecs would immediately dismiss this as ridiculous. Yet here we are.


I will respond to you since you have posted about this so-called "SINGLE IMAGE" three times in this thread already.

Ackchually, the blog post contains a comparison over TWO IMAGES. But since you work with codecs, surely you understand that it is complaining about how WebP handles gradients in general, not just about the specific images shown.

JXL was getting plenty of attention before the Chrome debacle. Of course it was less than WebP and AVIF, but JXL wasn't being pushed or championed by anyone (other than Cloudinary, I think), so it didn't have the marketing power the others had.


To draw conclusions about how a codec handles image features, you need to do a quantitative comparison across a big enough data set to say anything about generalized quality.
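
To make that concrete, here is a minimal sketch of the kind of batch measurement meant above. It assumes Pillow (built with WebP support) and NumPy; the dataset path, the quality setting, and the choice of PSNR as the metric are all just illustrative placeholders, not a recommendation.

    import glob
    import io

    import numpy as np
    from PIL import Image

    def psnr(a, b):
        # mean squared error over all pixels, converted to dB for 8-bit data
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    scores = []
    for path in glob.glob("dataset/*.png"):
        original = Image.open(path).convert("RGB")
        buf = io.BytesIO()
        original.save(buf, format="WEBP", quality=80)  # lossy WebP round trip
        buf.seek(0)
        decoded = Image.open(buf).convert("RGB")
        scores.append(psnr(np.array(original), np.array(decoded)))

    print(f"mean PSNR over {len(scores)} images: {np.mean(scores):.2f} dB")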

This goes triple for modern codecs like JPEG XL, VP8/9, AV1/AVIF, etc. because they deliberately make tradeoffs when compressing based on how the image will SEEM to people, not how pixel correct it is. Note just how many people say they barely notice a problem - this is where WebP made the tradeoff. JPEG did it elsewhere (e.g. text).

Cherry-picking a single image is useful only for fanboy screeching.


The author explains why thinking in terms of averages "across a big enough data set" isn't enough.

> Call me crazy, but I don’t give a shit about averages. For a gaussian "normal" process, probabilities say half of your sample will be above and half will be below the average (which is also the median in a gaussian distribution). If we designed cars for the average load they would have to sustain, it means we would kill about half of the customers. Instead, we design cars for the worst foreseeable scenario, add a safety factor on top, and they still kill a fair amount of them, but a lot fewer than in the past. [...]

> As a photographer, I care about robustness of the visual output. Which means, as a designer, designing for the worst possible image and taking numerical metrics with a grain of salt. And that whole WebP hype is unjustified, in this regard. It surely performs well in well chosen examples, no doubt. The question is : what happens when it doesn’t ? I can’t fine-tune the WebP quality for each individual image on my website, that’s time consuming and WordPress doesn’t even allow that. I can’t have a portfolio of pictures with even 25 % posterized backgrounds either, the whole point of a portfolio is to showcase your skills and results, not to take a wild guess on the compression performance of your image backend. Average won’t do, it’s simply not good enough.


> To draw conclusions about how a codec handles image features, you need to do a quantitative comparison across a big enough data set to say anything about generalized quality.
>
> Cherry-picking a single image is useful only for fanboy screeching.

Do you really expect a photographer to prepare a quantitative codec comparison benchmark? All they have is anecdotal evidence, and I think it is fair for them to criticize and make decisions based on their own anecdotal evidence.

> This goes triple for modern codecs like JPEG XL, VP8/9, AV1/AVIF, etc. because they deliberately make tradeoffs when compressing based on how the image will SEEM to people, not how pixel correct it is. Note just how many people say they barely notice a problem - this is where WebP made the tradeoff. JPEG did it elsewhere (e.g. text).

No one is going to sit here and claim that WebP performs better on all images or JPEG performs better on all images. Obviously there is going to be some kind of tradeoff.

TBH, my gripe with WebP is not that it's worse than JPEG. IMO it is in fact better than JPEG in most cases.

My problem is that it is only an incremental improvement over JPEG. We are breaking compatibility with the universally supported image formats, and we get the following benefits:

- 15-25% better compression

- animation

- transparency

- lossless compression

On the other hand, we could break compatibility, adopt JXL and get the following benefits:

- lossy compression on par with WebP

- animation

- transparency

- lossless compression that is marginally better than WebP's

- actually kinda not breaking backwards compatibility, because you can convert JPEG -> JXL losslessly and back (see the sketch after this list)

- enhanced colorspace support

- progressive decoding

- very fast decode speed

- support for ultra-large images
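
On the lossless JPEG-to-JXL point, here is a minimal sketch of the round trip, assuming the libjxl reference tools (cjxl/djxl) are on the PATH and that their defaults keep the JPEG reconstruction data; the filenames are placeholders.

    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # JPEG -> JXL: cjxl can transcode JPEG input losslessly, reusing the
    # existing DCT coefficients instead of re-encoding the pixels
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

    # JXL -> JPEG: djxl can reconstruct the original JPEG from that data
    subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)

    # if the reconstruction data was kept, this should print True
    print(sha256("photo.jpg") == sha256("restored.jpg"))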

Adopting WebP would be great. But why adopt WebP when instead you can adopt JXL, which is superior in terms of features and on par in terms of compression?



