As is unfortunately typical for tutorials like this, the best and most effective PNG compressors like oxipng or ECT [1] are strangely left off.
I compressed the original PNG shown on the site with ECT. It took only 6.5 seconds on my old laptop, and resulted in a file 95.7% of the size of their pngcrush result. I don't see why you'd want to leave 4-5% more efficiency on the table in a case like this.
Similarly, I reproduced their pngcrush result on the same laptop. It took 10.834 seconds. Getting a better result doesn't even take more time!
Thanks for dropping a citation. When the CPU cost is low and the code is on GitHub, it's relatively easy to add a tool or two to a script if you're benchmarking. (IMHO)
But speaking as a paranoid hacker: why is it more common to run unknown code in the graphics world than in other areas?
I've noticed a lot of trust in that community, versus the infosec crowd's "oh, you typed something from the internet into the terminal without understanding it? Of course someone stole everything on your hard drive; why didn't you have seven layers of VeraCrypt volumes around your diary or whatever?" mentality.
I don't know a ton about image visualization, but I'm super appreciative of tools like GIMP and VLC that save me time and money in a variety of ways.
(Remember back when downloading a keygen meant risking a trojan? Or is that something the next generation of hackers has only heard about in stories? :-) )
Their Windows binary comes up clean on VirusTotal. Running on Linux under strace is squeaky clean. There should be a rule against fud like this from anons: open source is built on trust, and without trust it couldn't exist. The worst-case scenario I can imagine is that you forget to use --strict when compressing your manga collection and, since it's pre-1.0, you get an artifact here and there.
I don't much believe in antivirus, as crooks can often find ways around it.
Traditionally, trust is based on knowing someone's name and place of residence so you can call the cops on them and/or take matters into your own hands if they try any funny business. For anonymous people, an established reputation can also work.
In this case the repository is 7 years old. The owner states a full name (that if we are paranoid may or may not be the real one). A person with that name has published a scientific article about compression.
I generally agree, but give me more credit: I monitored the system calls myself. Also, VirusTotal isn't a single virus scanner but rather a community-driven aggregator; people leave comments and downvote when binaries do evil things.
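For anyone curious, a spot check is as simple as something like this (syscall classes per the strace man page; "ect" here just stands in for whatever binary you're checking):

  # log only file and network syscalls while the tool runs
  $ strace -f -e trace=%file,%network -o ect.trace ./ect -9 test.png
  # a clean tool should produce no network syscalls at all
  $ grep -cE 'socket|connect|sendto' ect.trace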
Sorry, I wasn't questioning your integrity; it was more of a meta-level question, since I lack the ability to start looking at system calls and thought to myself "this looks cool, I can probably trust it, but am I making a rational choice or just being reckless because it's on a well-known site and I think the concepts are cool".
(And sorry I posted hard today, I had too much caffeine and my weed supply dried up -- if I didn't have those issues you'd see most of my comments be more like this one.)
>I don't much believe in antivirus, as crooks can often find ways around them.
I have mixed feelings. It can be good to spot check things, but I think things like checking the firewall for odd traffic are probably better uses of resources.
>Their Windows binary comes up clean on VirusTotal. Running on Linux under strace is squeaky clean. There should be a rule against fud like this from anons
I'm not anonymous. Someone very purposefully tied my legal name to this nym when I signed the anti-Stallman petition.
I've had two accounts (Reddit, and Twitter) shut down very purposefully as retaliation for reporting extremely serious felonies.
There should be a rule that if I can prove someone's abusive gaslighting causes me to overshare, the person who purposefully distorted the truth is responsible for anything I purchase to return to a positive emotional state.
I am one of the strongest allies of open source on the planet.
It's not my fault if folks who use it for evil have conspired against me, then literally shot themselves when they realized I'll just go back to pounding espresso and posting twice as hard within less than 24 hours of their soul exiting their body.
Folks from the so called "free speech" community cannot handle truly free expression, they just want to hear their own talking points repeated.
It sounds like those events greatly affected you, because you are losing your composure retelling them. Maybe you should get some professional help processing it all.
It doesn't compress as well as JPEG or other formats that are designed to be lossy. But if you need PNG for some reason like alpha channel or application requirements, or if your image is particularly well-suited to my compression algorithm, it might be a good choice. For many images it does compare favorably to an 8-bit indexed PNG.
Nice! For images that lend themselves well to indexing, I always use PNG; it's amazingly small. How hard (for an amateur) would it be to plug your algorithm into GIMP? It should have a live preview though, because judging by your example images, there's a bit of trial and error involved.
advdef uses the zopfli or 7-Zip compressor for PNG's deflate compression. If you want to be fancy you can use GNU parallel on Linux to parallelize the compression and make the script a one-liner. (I know xargs can do it, but I stopped using xargs after adopting GNU parallel.) If you're on PowerShell 7+, you can add -Parallel after %.
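A rough one-liner, assuming AdvanceCOMP's advdef and GNU parallel are installed:

  # -z recompresses each PNG in place, -4 selects the slow zopfli ("insane") level
  $ find . -name '*.png' -print0 | parallel -0 advdef -z -4 {}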
If you want to do quantization, you can use ImageMagick's -colors to reduce to as many colors as you need. It doesn't even have to be <= 256; you can do, say, 300, 500, or even 1000 colors. That doesn't let you use a palette, but fewer colors in a PNG often make it easier for the compressor to do a good job. This is not lossless compression, but it can work well for simple artwork.
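Something along these lines (the color count is just whatever your eyes will tolerate):

  # reduce to ~300 colors; the output stays truecolor RGB rather than paletted
  $ convert input.png -colors 300 reduced.png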
The 'indexed' PNG is not an apt comparison. Using an 8-bit palette doesn't mandate terrible dithering: you can get this image down to 70-80 kB by quantizing to 80-120 colors, and because of the solid surfaces it doesn't need any dithering at all. Or you could use WebP to get a 30 kB file that has better quality than the JPEG.
I recommend using https://squoosh.app for this, it has OxiPNG and runs the compressors in-browser with WASM so you can fine tune the parameters live.
I've tried a few, the best one was pingo (https://css-ig.net/pingo). pingo -s9 gives better results than oxipng with Zopfli, while usually being two orders of magnitude faster. It's also faster than "regular" oxipng while being better. I can usually shave off 15-20% of the size of PNG files I encounter. One thing I didn't check is whether you pay for that in decoding time; I've never seen anybody talking about that, though.
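For reference, the invocations I'm comparing look roughly like this (oxipng flags per its --help; the exact levels are a judgment call):

  # pingo's strongest lossless profile
  $ pingo -s9 image.png
  # oxipng at its highest standard level, with the slower Zopfli backend
  $ oxipng -o 6 --zopfli image.png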
There are a myriad of PNG (and in general DEFLATE) optimizers and pingo hosts its own benchmark [1]. I believe ECT [2] is the only tool comparable to pingo in terms of compression ratio and speed. But pingo still lacks a license statement and it's even unclear whether you can use this for any purpose at all, probably because it is still "experimental", so if you don't like that you can try ECT instead.
> One thing I didn't check is whether you pay for that in decoding time; I've never seen anybody talking about that, though.
PNG and in general DEFLATE-based formats are mostly free from this concern because they are comparably simple. The maximum "overhead" you can intentionally trigger is a very large LZ77 window and a very deep prefix code tree; the former is however capped to 32 KB in DEFLATE, and the latter will mostly result in an inferior compression (a longer prefix code means a larger file).
Yes. I have no idea why ECT gets left off of lists so often, because it's almost always within 1% of the best tools on size, and at least several times faster than them on speed.
As far as I can tell, pingo isn't cross platform or open source, in addition to the issue you mention with the license statement, which makes it extremely limiting for e.g. automated processing of images on a server. I think ECT is absolutely a reasonable choice for most use cases.
I didn't know about ECT even after searching for a while. Maybe it's lacking in visibility? I'll give it a try.
As for pingo, I can run it without issues under Wine. I would personally prefer an open source tool though, so if ECT is really within around 1% of pingo and not too much slower, I would prefer it.
One of the best things you can do for a great many PNG files is to convert them to paletted images using something like libimagequant. If you are willing to accept some image loss you can even apply this to images with more than 256 colors. In most cases it will be difficult to find the difference with just your human eyeball, and in most of those cases you should probably be using WebP or even JPEG instead.
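pngquant is the usual command-line front end to libimagequant; a sketch (the quality range here is an arbitrary choice):

  # quantize to an 8-bit palette, refusing to save if quality would drop below 70
  $ pngquant --quality=70-95 --skip-if-larger input.png   # writes input-fs8.png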
The file size savings can be enormous, way more than you achieve with just twiddling the block sizes and compression levels.
That's the critical part. PNG is an archival format for me. Image formats come and go, but as long as the archive image's format is lossless it can always be converted to another format without degradation if need be.
I'd be happy to switch away from PNG to a more efficient format, but only if that format was lossless and well supported.
Generally people have separate concerns for published files vs archived files.
Archived files can be considered like source code. They are probably unoptimized and may contain extra layers and components used to create them.
Published files would generally focus on compact size and would strip the extra components and might use a degree of lossiness to enhance the compression if appropriate.
>That's the critical part. PNG is an archival format for me. Image formats come and go, but as long as the archive image's format is lossless it can always be converted to another format without degradation if need be.
Why not BMP then? Maybe I'm misunderstanding but I thought it was relatively free and relatively simple -- I thought you use things like PNG when you don't want a literal 1 for 1 (100% lossless) file.
I thought if you use compression, you'll always lose at least a small amount of data.
Or is that a fundamental misunderstanding? (If so, sorry to wander in like this, I'm not trying to sealion[1] you.)
> I thought if you use compression, you'll always lose at least a small amount of data.
No, the terms "lossless" and "lossy" are attached to different compression methods to indicate whether they do or don't discard data.
Lossless compression methods like PNG, FLAC, ZIP, RAR, etc. will output exactly the same data that was put in.
Lossy compression methods, which discard some of the input data, include JPEG, MP3, AAC, MPEG-4 video (e.g. AVC, HEVC), and many more.
As PNG is lossless, you can convert your BMPs into PNGs (which will make them far smaller) and back to BMPs, without a single pixel changing in any way. However, note that some PNG "optimizers" may introduce loss with some settings, like reducing the color space, but that's incidental to the format... You can reduce the resolution or color space of a BMP image to reduce its size as well.
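This is easy to verify yourself, e.g. with ImageMagick (a sketch):

  $ convert photo.bmp photo.png                         # BMP -> PNG, much smaller
  $ convert photo.png roundtrip.bmp                     # and back again
  $ compare -metric AE photo.bmp roundtrip.bmp null:    # prints 0: not one pixel differs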
>As PNG is lossless, you can convert your BMPs into PNGs (which will make them far smaller) and back to BMPs, without a single pixel changing in any way. However, note that some PNG "optimizers" may introduce loss with some settings
Ok, TY, I think I grok it now.
Yeah, if it's patent-unencumbered and lossless, I'd rather see something like PNG used more than have new file formats invented, though I respect the craftsmanship that goes into some of the formats mentioned in this thread.
> I thought if you use compression, you'll always lose at least a small amount of data. Or is that a fundamental misunderstanding?
Yeah, that is a misunderstanding. There's lossless compression and lossy compression. For images, the classic example of lossless is PNG and lossy is JPEG -- with JPEG you get more and more artifacts/noise the higher your compression is, but with PNG you can always recreate the exact pixels of the original, hence the OP's use of it as an archival format.
PNG is similar to how .zip and .tar.gz files work for executable files and other data -- those have to be lossless, because for text and executables and the like you need the exact 1s and 0s back out that you put in, otherwise the executable will crash or the text will be garbled.
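You can convince yourself of this with any file and a checksum (a sketch):

  $ sha256sum report.tar                       # note the hash
  $ gzip -k report.tar                         # compress, keeping the original
  $ gunzip -c report.tar.gz > roundtrip.tar    # decompress to a new file
  $ sha256sum roundtrip.tar                    # same hash: nothing was lost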
Thanks a ton. I wish more programs would help folks use PNG's settings efficiently, rather than nudging them towards JPG/JPEG for realistic photos.
There are a lot of "tricks" like that which we taught designers long ago and which now hold the web back, absent more technical folks letting new paradigms flourish.
It’s actually time to switch to JPEG XL. Just one format, does lossless and lossy (and usually smaller than PNG when lossless, always smaller than JPEG when lossy)
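With the reference encoder (libjxl's cjxl), the lossless/lossy switch is just the distance setting (a sketch):

  $ cjxl input.png output.jxl -d 0    # distance 0 = mathematically lossless
  $ cjxl input.png output.jxl -d 1    # distance 1 = lossy, roughly visually lossless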
Browsers don't support it out of the box (yet), so probably not time to switch. Try AVIF instead. It has similar efficiency and feature set and is better supported.
AVIF will be a good choice in time, but right now it takes a lot of horsepower (read: battery juice) for some devices to use it. Not sure how/when that'll get resolved. From “The ROI of Adopting AVIF for Websites” on the Cloudinary Blog, 2021-11-29[0]:
> . . . since AVIF decoding consumes a lot of CPU and battery, mobile devices are taking a while to jump on the AVIF bandwagon.
>Browsers don't support it out of the box (yet), so probably not time to switch.
Ok, so do we skip even trying to use more well tested stuff like APNG to use this? Or just accept the moment is as gone as the blink tag?
(I'm only half joking -- I'm not a data visualization expert, but I learned digital photography when you'd shoot onto a floppy jammed into the camera, so while I don't know algos well, I've also seen scores of security vulnerabilities stem from browser cruft.)
Classic example being that prior to FTP support being removed from firefox, you could do things like say go to trusteddomain.nameofschool.edu@badsitethatistotallyfullofputinsmalware.ru and very easily infect any sort of kiosk running Firefox, Firebird, or in one especially hilarious case... Internet Explorer.
I'm not even joking... my target was still using Internet Explorer, on a kiosk, years after the switch to Edge.
(Luckily all I did was redirect to a search engine to find an ISBN number. I could have done much worse.)
The 'one format' thing is actually somewhat of a problem. Users may want to be sure they're staying lossless when doing an operation, and with most modern formats one can't ensure that.
Yes, lossy PNG can be done, but only in a way where you can't apply that transformation without being very aware of it. There's no similar guarantee with modern formats; it's left to the mercy of the software.
Unfortunately, it seems that despite being inferior to JPEG XL, AVIF will become the next main image format.
JXL still has zero browser support (out of the box; Firefox and Chromium have it hidden behind a feature flag) and it doesn't seem that the browser developers are interested in changing that.
The most important factor is Apple. The famously closed company took a long time to make its devices support WebP, and even now that support is incomplete. If AVIF isn't displayed in Safari, Apple's Internet Explorer, it's no different from JXL.
Their website doesn't list a date so I have no idea how up to date this is, but the situation doesn't seem all that clear to me: https://jpegxl.io/articles/rans/
If Microsoft's patent goes through and ends up being enforceable then their open-source release doesn't mean squat; their license may say it's royalty free and grants free usage of any patents, but the patents wouldn't be theirs to grant the usage of.
Of course you can ignore this for software distributed in jurisdictions without silly stuff like software patent enforcement, but if you distribute the software to the USA then you might run into trouble.
Pngcrush was always awesome for web and similar work. I remember using it to get image slices lowered way down in size...for tables...in emails. Blech.
Maybe this is an appropriate place to say that lately I am wishing for a video format that can do dithering and indexed colors without jpeg/mpeg-style blurry/blocky compression. Even 1 bit color would be fine...
My dream is that it would allow something like a pixelcast as an alternative to a video stream, with an impressive rate of compression and no blur at all. Any tips on formats or tools would be appreciated.
Ah that's a cool idea to use GIF for it. Seems like the video player is the trick in terms of consumer-side then, making me wonder if it's wise to expect any support for this idea whatsoever. Maybe if somebody creates a nice WASM player with support someday.
It's really too bad this wasn't handled long ago, with some web-safe video emphasis to go along with all the web-safe imagery. Lossless/indexed or lossy, your choice...
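On the encode side, ffmpeg's palette filters can already produce that indexed, dithered look (a sketch; the frame rate and dither mode here are arbitrary):

  # pass 1: build a 256-color palette from the source
  $ ffmpeg -i input.mp4 -vf "fps=15,palettegen" palette.png
  # pass 2: encode with that palette using ordered (Bayer) dithering
  $ ffmpeg -i input.mp4 -i palette.png -lavfi "fps=15[v];[v][1:v]paletteuse=dither=bayer" output.gif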
You can use ffmpeg losslessly on the decode side as well, but the only interfaces I could find to write to are v4l2 and fbdev, neither of which seems to work properly under X11. (The obvious candidate, x11grab, (a) reads rather than writes, and (b) captures the entire screen, not a specific window.) So you'd need something else to pipe the raw video to (ffmpeg can dump the audio into ALSA).
You could also not decode it, and just demux the video and pipe that. Depending on how it reads/decodes files, an image viewer that displays GIFs might update in close to real time as it reads from a pipe that ffmpeg is demuxing into, and might not leak memory by keeping the overwritten frames around in case the GIF loops, but both of those seem unlikely.
(As the saying goes: if it only works the way the manufacturer intended, it's defective.)
Addendum: I remembered that ffplay existed. Annoyingly, it still doesn't properly handle ffmpeg-generated .gif.mkv files by default, but you can do:
$ ffplay -codec:v gif -i A.gif.mkv
to force the correct video codec.
Also, it seems this is not a problem with the video players, but rather that ffmpeg's Matroska muxer writes an incorrect video codec id of V_QUICKTIME. It works correctly with mov/.gif.mov (and possibly other container formats), and should work with mkv if you can patch in a correct codec id.
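Until that's fixed, remuxing works as a workaround (a sketch):

  # stream-copy the gif video into a mov container, which carries the right codec id
  $ ffmpeg -i A.gif.mkv -c copy A.gif.mov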
Although it’s best practice to run images through a compressor at some point in the build/deploy process… in reality not everyone does that.
But if you work with Photoshop-generated PNGs with alpha transparency: try to work in some sort of post-Photoshop compression. I can get significant improvements on these types of images.
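For example, a quick post-export pass like this (oxipng shown here, but any of the optimizers mentioned in this thread will do):

  # recompress and strip the non-critical metadata Photoshop leaves behind
  $ oxipng -o 4 --strip safe exported/*.png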
This is a strange test to do. The image he used is not really suited to PNG. It is more like a photograph, which is better in JPEG format, than an illustration, which is better as a PNG.
Furthermore, he doesn't even post the resulting compressed versions from each piece of software, just some console output and then a single "optimized PNG" image.
"The image he used is not really suited to PNG. It is more like a photograph, which is better in JPEG format, than an illustration, which is better as a PNG."
I use PNGs to archive photographs because it's lossless, supports 16-bit images, and is well supported across most software.
Not sure why you think it wouldn't be suitable for that.
JPEGs, on the other hand, would not be at all suitable for this purpose, as they're lossy and only support 8-bit images.
Have you made any comparisons of your PNG files to, say, lossless WebP files? (Apparently there's also lossless mode in JPEG XS nowadays but that seems to be a very new option.)
>This is a strange test to do. The image he used is not really suited to PNG. It is more like a photograph, which is better in JPEG format, than an illustration, which is better as a PNG.
Isn't your comment the reason they used Lenna[1] for so long -- it might be male gazey etc, but at least everyone was using the same male gazey softcore photo rather than getting sidetracked by arguments over image choice?
No. I was assuming that this blog post was about compressing PNG files for general use on web sites, but apparently that is not the case. He didn't actually state in the blog post what he was using the images for, but he says in a comment here that they need to be lossless. Usually that's not the case. JPEG works better for getting small sizes on photos for the web.
If you want better lossless compression you may want to try WebP, which is now supported by all modern browsers and offers better lossless compression than PNG. For the example image I got 187kBytes compared to 267kBytes with PNG.
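For reference, the conversion is just cwebp from libwebp (a sketch; -z 9 is the slowest/strongest lossless preset):

  $ cwebp -lossless -z 9 input.png -o output.webp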
Indeed! For one of my images, which had lots of non-linear gradients and shadows (an app icon), an optimized (with the best algorithm using ImageOptim) 457 KB PNG got compressed to 36 KB WebP (lossless, of course).
Just for fun you can compare WebP vs. PNG performance (OxiPNG, or your browser's implementation) on https://squoosh.app/. For normal images this will quickly net you a 20-30% size decrease over optimized PNG even in lossless mode.
JPEG XL is even better, but browsers and other image libraries tend not to support it.
> PNG is “lossless” meaning, you can save (nearly) all of the original data.
I was under the impression that no data was lost in PNG. Anyone know why the author added that (nearly) in there? Does it maybe have to do with bit depth? Are there cameras shooting in more than 16 bit depth?
Let me project my own reasoning onto the author: I have a habit of not using absolute language, and words like "nearly" are good for hedging against being wrong. I think probably (wink) this habit intrudes into statements where absolute language would be correct.
Commands:
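(Roughly the usual max-effort invocations; not necessarily the exact flags used above:)

  $ ect -9 image.png                            # ECT at its highest standard level
  $ pngcrush -brute original.png crushed.png    # pngcrush trying all filter/strategy combos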
[1] https://github.com/fhanau/Efficient-Compression-Tool