Dithering was never a compression technique; it's a filtering technique for reducing banding on devices/displays/images that have a small color palette.
In fact, even in the '80s, dithered images were often larger than their un-dithered counterparts, sometimes by a lot. But it was worth the trade-off when the alternative was an image with so much banding that it could be confused for a European flag.
Unless you're trying to display your image on a retro console (or have aesthetic reasons for wanting to achieve that effect), you should not use dithering. Essentially all modern devices have a sufficiently enormous color palette, and modern compression algorithms use other techniques to achieve their efficiency.
In fact, modern compression will do a much better job giving you a smaller file size if you don't use dithering.
Edit:
Don't get me wrong, dithering is a super interesting topic, and designing a good dither can be surprisingly hard. It's just not going to help you if your goal is to shrink the images on your website the way the article claims.
If you haven't seen the trailer for "Return of the Obra Dinn" you owe it to yourself to take a look: https://youtu.be/ILolesm8kFY
Super cool aesthetic, and writing that shader must have been all sorts of difficult/fun. But you don't do this sort of thing for compression efficiency.
Directly related to dithering and compression, Return of the Obra Dinn is, to a certain extent, an "unstreamable game".
When this streamer on Twitch tried to play it [1], the quality of his composited webcam would immediately drop as the compression algorithm spent its bits on the high-frequency dithered sections of the screen. As soon as he aimed the camera away from the fancy rendering (at the sky, for instance, or the menu), the video quality would immediately improve. Really a fascinating clip.
The devlog [1] for that game is incredibly interesting; I'd strongly recommend it to anyone interested in game development. If I recall correctly, this compression problem almost made him reconsider the whole game - how can your indie game get any momentum if it's unwatchable on YouTube/Twitch? Luckily for us he persisted. Obra Dinn is one of the most interesting games I've ever played.
I hadn't thought of that, but this is hilarious, and illustrates my point perfectly.
If your compression algorithm isn't aware of the exact dither you're using, the decompressor can't reproduce the dither on the other end using only rules and image data. The compressor needs to encode every single dither pixel as an expensive "Hey decompressor, you're never going to be able to guess this pixel value, so here's the whole thing" residual value.
This is also why old image compression algorithms that were aware of simple dithers (e.g. a handful of fixed grid patterns) could produce small-ish images that looked slightly better than un-dithered, but still kind of bad. But as soon as you customized the dither to use a more random-looking pixel arrangement that looked significantly better, the filesize would explode -- because the compressor was blissfully unaware of the more complicated dither and had no choice but to encode all of the seemingly random pixels directly.
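You can see the effect even in PNG's lossless pipeline. Here's a hedged little experiment (Python with Pillow; exact byte counts depend on the encoder, but the direction should hold): a smooth gradient compresses to almost nothing, while its 1-bit Floyd-Steinberg version typically costs more bytes despite carrying an eighth of the raw bits per pixel.

```python
# A rough demonstration, assuming Pillow is installed.
import os
from PIL import Image

# 256x256 horizontal grayscale gradient: highly predictable rows
grad = Image.new("L", (256, 256))
grad.putdata([x for _ in range(256) for x in range(256)])
grad.save("gradient.png")

# convert("1") applies Floyd-Steinberg dithering by default; the
# resulting noise defeats PNG's row filters and DEFLATE matching
grad.convert("1").save("gradient_dithered.png")

print(os.path.getsize("gradient.png"), os.path.getsize("gradient_dithered.png"))
```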
Do you know if they tried the other rendering modes (shaders?) included in the game? From memory there were at least five and some of them looked more suitable for livestreaming.
> In fact, modern compression will do a much better job giving you a smaller file size if you don't use dithering.
Not necessarily. The idea of dithering is to use a representation with a smaller color space, meaning fewer bits per pixel, possibly palettized.
The idea is to control where the lossiness "damage" happens. You deliberately discard information in the area of color depth, rather than whatever the modern compression might choose to discard. It's possible you could get results that to an observer appear subjectively better per file size.
Imagine a photo of masonry brickwork. What's important is the edges between the brick and mortar, while you don't really care about the grain within a brick. General-purpose image compression tends to smear sharp edges like that. It's possible you could do subjectively better by reducing the color depth: you intentionally discard more of the data you don't need (using dithering to keep a little of it) while keeping more of the information you do want in the sharp edges.
I'm not claiming any of this would pan out for real-world use, but there are certainly hypothetically feasible cases for dithering.
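If you're curious, the "choose where the damage happens" knob is easy to play with in Python using Pillow. A minimal sketch, assuming a recent Pillow (for the `Image.Dither` enum); `brickwork.jpg` is a hypothetical input:

```python
from PIL import Image

img = Image.open("brickwork.jpg").convert("RGB")  # hypothetical photo

# Same 32-color palette in both cases; only the error handling differs.
banded = img.quantize(colors=32, dither=Image.Dither.NONE)
dithered = img.quantize(colors=32, dither=Image.Dither.FLOYDSTEINBERG)

banded.save("brickwork_banded.png")      # flat patches, visible banding
dithered.save("brickwork_dithered.png")  # noisier, but edges stay put
```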
In practice, video-inspired image compression techniques (e.g. HEVC Main Still Picture) will do a significantly better job on that masonry brickwork image.
The algorithm is already looking for patches of image that have moved by possibly hundreds of pixels (to sub pixel accuracy) both within a single frame, and across multiple frames.
Basically, it'll find a patch of image that, when shifted horizontally and vertically by a certain amount, looks pretty close to the patch of image that it's trying to encode. The compressor will then say "just copy that region that you decompressed half a frame ago to here first, and now the residual values I give you are differences between the first patch and the second."
Even if the brick / mortar phase is slightly off (e.g. from perspective or lens effects), this will give you about an order of magnitude more compression efficiency (and perceptual quality) than anything that tries to use color depth to preserve edges.
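A toy version of that block-matching step, in Python/numpy (integer-pel only; real codecs add sub-pixel interpolation, rate-distortion search, and entropy coding on top):

```python
import numpy as np

def predict_block(decoded: np.ndarray, target: np.ndarray,
                  ty: int, tx: int, search: int = 16):
    """Find the offset whose already-decoded patch best predicts the
    target block at (ty, tx); the encoder then transmits only the
    offset plus a (hopefully small) residual."""
    bh, bw = target.shape
    best_off, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ty + dy, tx + dx
            if 0 <= y <= decoded.shape[0] - bh and 0 <= x <= decoded.shape[1] - bw:
                patch = decoded[y:y + bh, x:x + bw].astype(int)
                cost = np.abs(patch - target.astype(int)).sum()  # SAD cost
                if cost < best_cost:
                    best_off, best_cost = (dy, dx), cost
    dy, dx = best_off
    patch = decoded[ty + dy:ty + dy + bh, tx + dx:tx + dx + bw].astype(int)
    return best_off, target.astype(int) - patch  # offset + residual
```

On smooth or repetitive content (like bricks) the residual is near zero and cheap to encode; on an unpredictable dither pattern, no offset helps, and the residual carries nearly the full cost of every pixel.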
Your example is about saliency and perception. Modeling these to guide lossy compression is an important feature of high-end encoders, but that is largely independent of the compression techniques used.
It's possible to do optimal-ish highly compressible dither (it's been done for LZW), but the results are still pretty disappointing compared to even old JPEG.
Specifically, modern formats use gradients where possible. If something transitions smoothly from one color to another, they can represent that as what it is - to oversimplify, one pixel is the first color, a different pixel some distance away is the second color, and the decompression will generate the intermediate colors for all the pixels between those two. By manually dithering, it has to encode each pixel individually, because the transitions are gone.
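To make that oversimplification concrete, here's a numpy sketch of the idea (not any codec's actual syntax): two stored endpoint colors are enough for the decoder to synthesize every pixel in between.

```python
import numpy as np

# Two endpoint colors are the only "data"; everything else is generated.
c0 = np.array([30.0, 60.0, 120.0])    # first pixel's color
c1 = np.array([200.0, 220.0, 255.0])  # color some distance away
width = 256

t = np.linspace(0.0, 1.0, width)[:, None]   # per-pixel blend weights
row = (1.0 - t) * c0 + t * c1               # (width, 3) interpolated row
gradient = np.tile(row[None], (64, 1, 1)).astype(np.uint8)  # 64x256x3
```

Dither the result and this two-number description disappears; every pixel has to be carried individually.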
Really? IME images compressed real hard with modern techniques look pretty terrible. Dithering is much less efficient if you're looking for low loss, but if you're trying to get something quite lossy, it looks better to my eyes.
I helped with something like this recently, and it makes a great case study. We launched a new website for our game studio and went all-out on supporting modern compressed images: AVIF and WebP with PNG fallback.
Originally, the banner image came from art where the glow around the planet was dithered. The resulting PNG was over 2 MB, resisted crushing, and didn't downscale well. Trying to use AVIF and WebP with aggressive compression made the image look awful.
We asked if they could remove the dithering, and suddenly, with some tweaking, we got super great compression: 50 kB as AVIF, 68 kB as WebP, 797 kB as PNG (oof!)
This is a large banner image. Smaller images can get _much_ smaller with AVIF and WebP with no sacrifice of quality. It takes some tweaking, and the tools were pretty bad in my experience. We wrote a couple of utilities to do this, fiddled with knobs for a while, and it turned out great.
EDIT: Looking at this page again closely, I can see interesting artifacts because of AVIF. Look at the robo-dog's left ear! You could probably use slightly higher settings than we did.
Not sure how you created those AVIFs. The reference AVIF encoder [0] wants to use 4:4:4 chroma, but it looks like that hero image is 4:2:0. There is a small size hit for 4:4:4, but edges around saturated colors are much better.
Sometimes it is helpful to first reduce the number of colors (preferably to 256, if that doesn't cause too much banding; it depends on the number of color shades used). Then the PNG usually compresses a lot better. PNG compresses badly when the image contains too many different colors.
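A quick way to test that on your own images (Python with Pillow; `drawing.png` is a placeholder name): compare the truecolor PNG against a 256-color palettized one.

```python
import os
from PIL import Image

img = Image.open("drawing.png").convert("RGB")  # hypothetical artwork

img.save("truecolor.png")                     # 24-bit
img.quantize(colors=256).save("palette.png")  # 8-bit palettized

# For flat-color drawings the palettized file is usually much smaller;
# for photos it can go the other way, as noted above.
print(os.path.getsize("truecolor.png"), os.path.getsize("palette.png"))
```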
> PNG compresses badly when the image contains too many different colors.
Old trick to squeeze a few kB out of a PNG: use the Posterize filter from Photoshop with very light settings. Basically it will just flatten the number of colors.
Nah, just use Photoshop's "Export -> Save for Web (Legacy)", then set the file to PNG-8.
Now you can mess around with the number of colors in the color table, customise the color palette selection algorithm, dithering algorithm and dither amount.
The loop filters in modern image/video compression systems don't know anything about dithering. If you want a pixel-perfect dither like what you remember from the '80s, you're going to need way more bits to encode the image, because you need to encode those pixels as expensive residuals that can't be ignored, rather than as pixels that are obvious (to the decompressor) because they can be inferred from the decompressed pixels in neighboring regions.
The best counterexample for the claim that dithering is a good idea is the post itself. It shows a high quality (albeit downscaled) picture of the dog that is only twice as big as the horrible-looking dithered versions. And at 30 KB vs. 14 KB, HTTP header sizes already start to make the marginal savings questionable.
https://imgur.com/a/eBxFlL5 has 4 images next to each other - the original scaled-down image, the 14 KB dithered image, a 14 KB JPEG, and an 8 KB WebP (both the JPEG and WebP were at the full 500x500 resolution and downscaled afterwards, since in my experience that often yields better results).
You should still use dithering, even with modern palettes. You can absolutely see banding on undithered 24 bit images. 256 color levels per channel is barely adequate, even when optimally allocating those levels perceptually (gamma correction is a poor man's approximation of this).
Yes, dithering will increase the file size of a losslessly compressed image. That's because it contains more information. If you're sufficiently bothered by file size to degrade color accuracy, why are you using lossless compression to begin with?
Dithering is an essential component of any digital signal processing pipeline, not some weird retro artifact.
This is generally some pretty terrible advice. Really, don't follow it. At least not without testing its impact.
Dithered 8-bit/256-color images will look 'better' than non-dithered 8-bit/256-color images, but they will almost always look worse than a 24-bit JPEG (no alpha) or 32-bit WebP (includes alpha) and have a much larger file size.
I did some quick tests with https://squoosh.app. The 8-bit dithered PNG is >4x the size of the JPEG. It also shows some terrible banding on any kind of gradient in the image. The PNG is 5x larger than a better looking webP version of the same image.
I tested a lot of images (photos, drawings, digital artwork, etc) and some of the images were 10x larger as dithered PNGs vs webP/JPEG. Only one was smaller as a dithered PNG.
It completely depends on what kind of dithering you do — ordered dithering with a small color palette will give you a much smaller file size than a full color jpeg.
WebP and AVIF also support lossless compression and can be used for even smaller file sizes using dithering.
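For reference, ordered dithering is just a tiled threshold matrix, which is exactly why it compresses so much better than error diffusion: the pattern repeats every few pixels instead of looking random. A minimal 1-bit version in Python/numpy, using the classic 4x4 Bayer matrix:

```python
import numpy as np

# Standard 4x4 Bayer threshold matrix, normalized to [0, 1)
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def ordered_dither_1bit(gray: np.ndarray) -> np.ndarray:
    """Threshold an 8-bit grayscale image against the tiled matrix."""
    h, w = gray.shape
    tiles = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray.astype(float) / 255.0 > tiles).astype(np.uint8)  # 0 or 1
```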
Depends on the quality level. In my test, recompressing the original as JPEG quality 12 (with ImageMagick) gives approximately the same size as your 12 color 4x1 dithering, but at better quality (though it's miscolored in places). Webp quality 11 also gives approximately the same size, but looks way better.
I suppose you can argue whether the JPEG artifacts look better or worse than the dithering, but WebP definitely looks better.
Same here, tested dozens of images from our website (user avatars, product images and logos), and the vast majority ended up being larger and with much lower quality. We serve WebPs generated using the sharp library with quality=80.
A. It looks like Safari didn't support WebP at the time? https://caniuse.com/webp Either way, they explicitly mention it anyway.
B. If the comparisons keep the pixel size constant, they're not relying on the cool thing about dithering, which is that you can dither down to a small color count and quite small pixel size then display larger with image-rendering: crisp-edges; and it'll still look #aesthetic. From my experiments with the tool you linked, the scaled-up equivalently-sized webPs look potato. This is most relevant for big hero images.
How about dithering as a compositor-level rendering technique for low-bpp displays, rather than as a compression-at-rest effect?
I'm thinking specifically of non-HDR displays displaying HDR content (where HDR10 includes 10-bit-per-channel color). Presumably the compositor, if it knew you were watching an HDR video on a non-HDR display, would do better by dithering the HDR content down to 24-bit non-HDR content for display, rather than allowing it to be naively rendered with 24-bit banding?
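A minimal sketch of what such a compositor pass could look like (Python/numpy; this is the standard TPDF-dither-then-round recipe, not any real compositor's code, and it ignores tone mapping and the PQ transfer function a real pipeline would also handle):

```python
import numpy as np

def hdr10_to_8bit(frame10: np.ndarray, rng=None) -> np.ndarray:
    """Quantize 10-bit channel values (0..1023) down to 8 bits,
    adding +/-1 LSB triangular (TPDF) noise before rounding so the
    quantization error shows up as fine noise instead of banding."""
    rng = rng or np.random.default_rng()
    scaled = frame10.astype(float) * (255.0 / 1023.0)
    # Sum of two uniforms = triangular distribution over [-1, 1) LSB
    noise = (rng.uniform(-0.5, 0.5, frame10.shape)
             + rng.uniform(-0.5, 0.5, frame10.shape))
    return np.clip(np.round(scaled + noise), 0, 255).astype(np.uint8)
```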
It can even be tested with the author's own tool [1] and the sample image he provides. I'm surprised the author didn't experiment themselves? Or maybe they did, and it's the further-reduced settings they decided should be used...
Tell GIMP not to include the EXIF and XMP data (which doesn't get any smaller when you decrease the quality). Then tell it to use 4:2:0 chroma subsampling, and set the quality to 24 or so.
It just looks a bit blocky/fuzzy when I try this. It doesn't look great, but it retains much more detail than the dithered version. It looks a bit better at the same size as a HEIC image.
I like the dither aesthetic and underlying message, but it's possible to compress the first image of the dog (123 kB) to 64 kB with MozJPEG and still maintain the same quality.
Modern compression algorithms with native lazy loading will probably offer the best of both worlds.
Images serve to make a website beautiful. Dithered images are not - in my mind - beautiful. There are other ways to optimize a website speed and bandwidth. Let's serve devices the image resolution that they can handle and not an XL image just because we can. Let's minimize JS cruft, minimize excessive renders. Sacrificing image quality would be near the bottom of my list.
Anybody who used a cheap PC in the 90s knows how horrible everything looked. It was a low-res dithered hell. I don't understand why anyone thought that gradients on a 256 color screen ever looked good, and wonder why we didn't have the modern, flat, solid color designs that are fashionable today back then.
> wonder why we didn't have the modern, flat, solid color designs that are fashionable today back then.
I really, really dislike the modern flat colour designs of today. They make it much harder for me to separate out what I care about, like context and information, because my eyesight isn't perfect.
Gradients made it so much easier for me to see the difference between a tab and the bar it's sitting in. A little strip of colour or some shadowing, to denote the active tab just disappears for me.
I'd rather have flat colors for background elements and bevels for "active" UI elements like buttons and checkboxes. NEXTSTEP-derived UIs (like Windows 9x) were razor sharp and easy to read... and it's funny how futuristic cyberpunk UIs, like the non-VR interfaces featured in Johnny Mnemonic, were envisaged as being more of the same -- maybe a drop shadow or marble texture on the beveled button -- instead of the indistinct flat-shaded hellscape of today.
Those little strips of color don't need to be shaded, though. You could do what Windows XP did with tabs—highlighting them by running a line of bright shining highlight color along the top (e.g. https://www.techrepublic.com/a/hub/i/2008/11/19/b8ad3817-c3b...).
(I'm sure there are even better examples of this "no gradients, but strong color contrast" effect among the various Linux Desktop Environment themes, but I'm not too familiar with them. Paging anyone from /r/unixporn.)
Presuming the active tab is flat white either way, what's the difference in legibility between the inactive tabs being gradiented white-to-grey (the Windows XP controls style) vs. inactive tabs just being flat grey (the "modern UI", e.g. Google Chrome, style)?
I would note that Windows XP also had a high-contrast theme; and that, when enabled, inactive tabs actually lose the distinction in background color from active tabs. But they keep the highlight stripe.
To try and put it another way, imagine, for a moment, that you are incapable of perceiving edges. Every single one of your flat tabs now looks like the same part of the same blob.
Now, realise that for a pretty high number of low-vision people, that is reality. The slight blur we experience makes most edges disappear.
(WinXP's high contrast overcame this by tripling the size of all edges).
Wouldn't the inactive flat tabs all still end up looking like one continuous blob either way? The gradient on the inactive tabs in the image I linked is a top-to-bottom gradient; it does nothing to enhance the visibility of the edges between contiguous inactive tabs.
>and wonder why we didn't have the modern, flat, solid color designs that are fashionable today back then.
A lot of UI design on computers of the era was flat colors, maybe with a mild gradient. Go look up screenshots from Windows 3.1. Everything is flat grey or white. Windows 9x wasn't really much better, except it had that ugly solid teal background.
Also, dithering on a CRT produced a very different effect from dithering on a high-res LCD.
IIRC the color gradients only started to appear once PCs which could display "True-Color" (or at least "High-Color") modes were widespread enough. The original Windows 95 definitely didn't use them yet.
And the "modern flat solid color designs" still use stuff like subtle shadows and transparencies that would have been either impossible or wouldn't have looked nice in the nineties...
Websites exist to convey information; images are used to help convey information that is not easily explained otherwise. Using images to make pages beautiful is, to me, part of the problem with modern websites.
I happen to like the dithered look, but arguably it might not be the best way to save on bandwidth.
> Let's serve devices the image resolution that they can handle and not an XL image just because we can.
I would be all for this, but I frequently find myself pinch-zooming into images on my phone to more closely examine the details in them. If the site assumed that I would only see what's visible in the image at the initially-rendered aspect ratio, then zooming would reveal nothing (e.g. text that's small and blurry, would just become large and blurry, rather than crisp.)
Ideally, a browser would be able to notice when you've zoomed into an image element past its Nyquist frequency, and async-load a higher-quality version of the image, swapping it out once the loading's complete. Do any browsers do this yet?
It doesn't just look bad when it's still; it's actively distracting while scrolling because it causes flickering. The only time dithering should be employed is to get rid of nasty banding. Otherwise just serve lower-resolution images or use tricks that aren't nearly as noticeable, like compressing the chroma more than the luma.
I think there’s an interesting idea in here - not necessarily that ‘everyone should use dithering’, but that if an image is being used in a way that would not be harmed by stylizing it, then one of the things you can consider when choosing a style is the compressibility of the stylized image.
If you are coming up with a look for a site, dithering all the hero images, or running them through a cell-animation filter, or mosaicing them, or halftoning them, all are stylistic choices that might help your design stand out, and help reduce file size.
But… no, you probably shouldn’t just diffusion dither everything down to black and white.
By all means use dithered images on your website for stylistic reasons, but do not use them simply because dithering produces a slightly smaller PNG file.
First of all, no discussion of minimizing PNG sizes is complete without `pngcrush`, which applies all kinds of optimizations to the PNG file (losslessly). In fact, pngcrush reduces the author's 48 kB dithered file down to 28 kB.
And secondly, modern compression formats like webp and avif will blow PNG out of the water when compressing any kind of photographic image. Heck, just turn down your JPG quality, and your image will be much smaller, and still perfectly recognizable.
Know when to use the right tool for the job:
PNG -> diagrams, charts, anything with perfectly uniform areas of color and sharp transitions.
JPG/AVIF -> photos, anything with smooth variations in color.
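If you want to sanity-check that rule of thumb on your own assets, a few lines of Python with Pillow will do it (assuming your Pillow build includes WebP support; `photo.jpg` is a placeholder name):

```python
import os
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")  # hypothetical photo

img.save("out.png")               # lossless; large for photographic content
img.save("out.jpg", quality=80)   # DCT-based lossy
img.save("out.webp", quality=80)  # usually smaller still at similar quality

for name in ("out.png", "out.jpg", "out.webp"):
    print(name, os.path.getsize(name), "bytes")
```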
You can tell this article was written by someone young who thinks they have discovered sex, drugs, and rock and roll - dithering was a sub-optimal solution to simulate more colours on old computers with limited colour palettes.
Once computers got to the point where you could use any colour for any pixel it rightly died off, and its only use today is for things like e-ink displays or monochrome printing.
As others have pointed out lossy compression will give a way better result. I feel like the author simply misses the aesthetic of the 90s internet and is trying to find any reason for it to make a return.
False dichotomy. Dithering is hardly the only or even best way to reduce image sizes. Just opening his 200x200 original in my favorite editor and resaving as JPEG with my usual parameter choices reduced it from 30K to 11K with no noticeable reduction in quality. I could have tuned the parameters for even more savings.
Dithering isn't usually worth the reduction in quality. And ironically it can make things worse if you're not careful - dithering the image and saving it as JPEG actually INCREASED the size to 39K!
If you try to use lossy compression on a dithered image it will increase the file size. Using dithering and saving in a compressed lossless format will have drastically different results.
Exactly, that was my point. That difference wasn't mentioned in the post, and you have to have some knowledge of how image formats work to realize why it's so.
> The internet is responsible for 3.7% of global carbon emissions
Given all the trade, commerce, learning, community, education, entertainment, and countless other benefits and multipliers the internet brings, I can't help but feel it's a fantastic return on its energy investment.
Apparently the author's point is that image compression is useful, but dithering and then using lossless compression instead of a lossy format sounds like a rather poor reinvention of lossy compression.
Related—I wrote a novelty dithering Mac app and released it this week. It dithers photos and converts them to MacPaint format for display on old black and white Macs. As another commenter pointed out, dithering was less about compression and more about just making things look okay on the limited displays of the time. The original Macintosh had only black and white pixels (no grayscale).
> Dithering is a retro way of reducing the colors in an image for use on old hardware or in print. Why dither in 2020?
Dithering is still commonplace, but mostly invisible, for high-end color. Photoshop quietly dithers by default when converting from 16 bits per channel to 8 bits, for example. This is important when sending 8 bits/channel images out for large-format printing because the invisible differences on screen can become clearly visible on paper - the monitor’s gamut and the printer’s gamut are surprisingly dissimilar. Dither is needed to keep smooth gradients from banding badly, and it sucks when you’re spending $100 per print or more to have nasty bands or even compression artifacts appear.
Isn't this the technique used in black-and-white newspapers? I used to help compose newspaper layouts in the early '90s, and dithering of images was done for similar reasons -- printing flat colors looks bad, so dither them.
This reads more as an argument against dithering. Sure, it saves bandwidth, but all the images in the post look awful. If I saw pictures like these on a professional website, I’d assume the people behind it were incompetent.
If you really want to save bandwidth, you’d be better off looking at better formats like AVIF and WebP.
Those will shave off a lot of extra bits, especially AVIF, although that is not universally supported yet.
Combine that with making the images a little smaller and you would save a lot of bandwidth, without making your pictures look like they are 20 years old.
Whenever I look into the modern formats excited to save 50% on bandwidth, I realize I'd have to spend 50% more on storage since I still need my fallback to JPEG. WebP is probably the best supported but isn't available on slightly older Macs or iOS devices. There are tons of iOS 12 iPads and pre-Big Sur macs in service.
Given the cost difference between storage and bandwidth, that's probably a worthwhile tradeoff, especially given the common use case of store once, send many times.
Not as drastic as the dithering in the linked post, but if you work with png files, pngquant is worth a look. It's lossy, but the image quality is still quite good.
The tool loses a lot of points because it doesn't appear to do the necessary gamma correction before dithering. The dithered images (at least the color ones, but probably all of them) are lighter than the source images.
The reason so many blogs have "stock images of people in suits doing business" is because when the article is submitted to a social media app, the app scrapes the blog looking for an image to use as a thumbnail for the link. Pages with no image, get a default thumbnail and users don't click it. The entire reason so many pages on the web have some generic image from unsplash is because of these thumbnails.
This means that the main meaningful way users will see the image is resized as a thumbnail, so before you start dithering, you should really test how the resized dithered image looks. Odds are it won't look good.
For those familiar with Sigma-Delta modulation, that's essentially what Floyd-Steinberg dithering is, but in two dimensions: it preserves and diffuses sampling error. (Sigma-Delta just does a better job than F-S of quantifying where the noise moves so you can filter it in the analog domain.)
1-bit DACs in CD players are the same idea: Trading higher sampling frequency for lower sample resolution to convey the same information.
Bresenham's algorithm is yet another expression of the same idea but there the samples in question represent the slopes of straight lines represented by pixels.
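To make the error-diffusion version of that idea concrete, here's a straightforward (unoptimized) 1-bit Floyd-Steinberg in Python/numpy. Note how every quantization error is pushed onto pixels that haven't been visited yet, which is the 2-D analogue of the Sigma-Delta feedback loop:

```python
import numpy as np

def floyd_steinberg_1bit(gray: np.ndarray) -> np.ndarray:
    """Dither 8-bit grayscale to black/white, diffusing each pixel's
    quantization error to its right and lower neighbors (7/16, 3/16,
    5/16, 1/16 -- the classic Floyd-Steinberg weights)."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 255.0 if img[y, x] >= 128.0 else 0.0
            out[y, x] = int(new)
            err = img[y, x] - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```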
With ImageMagick, you can use -colors to reduce the number of colors in the image. You don't even have to go down to 256 colors. This will usually make photos larger in size, but drawings/graphics can be smaller.
That seems odd to me. Isn't PNG compression based on an LZ77-type algorithm (DEFLATE), making the bit count of the inputs irrelevant as long as you're only using two distinct values?
If we grant that the right kind of blog could make the aesthetics of this work, I still think users with cheaper internet might like to be able to click to load the original full-res versions of the images.
I like dithered images and appreciate the author's post.
If you don't, but you want your webpages to load fast, look into WebP and AVIF images. Load them opportunistically using the HTML5 <picture> tag - no JS required and no worry about old browsers not supporting new formats. Even plain lossless re-encoding of legacy formats goes a long way. Test your own site for ideas:
If you take the "4x4 ordered dithering" image, copy it, apply a "mean curvature blur" filter (I used GIMP; a Gaussian blur with a low enough width behaves similarly), and overlay it on the dither at 50% opacity, the image actually looks pretty good. This could probably be done in CSS/JS on the client machine. A 14 kB image comes across passably, even on my desktop.
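Something like this with Pillow reproduces the effect offline (the filename is made up, and I've substituted a Gaussian blur per the note above, since Pillow has no mean curvature filter):

```python
from PIL import Image, ImageFilter

dith = Image.open("dog_4x4_ordered.png").convert("RGB")  # hypothetical file

# Blur a copy, then composite it over the dither at 50% opacity
blurred = dith.filter(ImageFilter.GaussianBlur(radius=1.5))
softened = Image.blend(dith, blurred, alpha=0.5)
softened.save("dog_softened.png")
```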
> Climate Change. Big images waste electricity and emit carbon. The internet is responsible for 3.7% of global carbon emissions [4]. A number that keeps growing as we send more and more data.
This is a fun stat; I wonder how much physical infrastructure is actually behind serving all the images vs. videos / adtech.
As others have mentioned, dithering doesn't really play well with compression; it adds lots of fine-detail noise, while lossy compression tends to smooth things out. Also, modern compression like AVIF doesn't understand palette color formats (IIRC AVIF generally uses YUV), so that kind of loses the only thing dithering has going for it.
I've tried it out, and I'm still getting smaller AVIF files with dithering. Not like the savings you get by using a dithered PNG instead of a JPEG, though.
Depends on a lot of factors. You can keep turning the quality down on avif and get it lower than a dithered image. At some point I'd prefer the crispness of a dithered image over a blurry full color image.
Also, avif seems to do a really good job of lossy compression on dithered images.
The example images look awful. If the site really has images of "people in suits doing business," why would you want them to be so ugly? I'm not a fan of such images, but if you are going to have them, they should at least look pleasant. These don't.
Dithered images cause significant problems with many lower end LCD displays. When I scroll any image on my laptop or my phone they change brightness while in motion.
Maybe put dithered images outside the paywall and full pictures within, and in your ad-block placeholder a message that says pro users get the real images.
You sell it as quality. Think of ordering the quick paperback printing with black and white photos vs getting the full quality hardbound with a glossy photo folio in the middle. Premiumification vs free to read. It is also egalitarian as users aren't paywalled, get access to the content and only pay if they want to use the extra bandwidth.
The example in the page is kind of stupid.
He could have kept using JPEG but with an increased compression ratio to achieve the same file size, and the visual quality would still be better than the dithered PNG.