I once worked somewhere that needed to store and transmit lots of camera-created JPEGs as efficiently as possible.
When working on this problem, I noticed that all digital cameras create JPEGs with 4:2:2 sampling. This might be a religious topic for some, but I cannot see any visual difference between 4:2:2 and 4:2:0, even under high magnification or on physical prints as tall as 30 inches. Simply sub-sampling to 4:2:0, along with subtle tweaks to the quantization tables, brought a typical JPEG from 1-2MB down to 300-400KB with very little effort.
We even did a series of experiments with people who look at photo prints all day and they could not reliably pick the 'recompressed' images.
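If anyone wants to try this themselves, here's a minimal sketch of the idea using Pillow (not the tooling we actually used, and the filenames are made up): decode the camera JPEG and re-save it with 4:2:0 chroma subsampling and a lower quality setting.

    from PIL import Image

    # Re-save a camera JPEG with 4:2:0 chroma subsampling.
    # Pillow's `subsampling` option: 0 = 4:4:4, 1 = 4:2:2, 2 = 4:2:0.
    # It also accepts `qtables=` if you want to hand it tweaked
    # quantization tables instead of a simple quality number.
    img = Image.open("camera_original.jpg")   # hypothetical input file
    img.save("recompressed.jpg", "JPEG",
             quality=80,        # pick whatever matches your quality bar
             subsampling=2,     # force 4:2:0
             optimize=True)     # optimized Huffman tables, a free few percent

The exact savings depend on the quality setting and quantization tables you settle on; the point is just how little machinery it takes.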
The only reason I think people really hold on to 4:2:2 or even 4:4:4 is that when subsampling you can get nasty amplified errors if you use two tools that subsample with different algorithms or different chroma siting (i.e. imagine if one tool picked odd lines and the other picked even). Although the spec specifies the siting, I think there are still people who get it wrong, not to mention there is still leeway in how pixels are averaged when the siting doesn't fall on a pixel boundary.
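To make that concrete, here's a toy sketch (numpy, one chroma row, not modelled on any real codec) of what happens when the tool that downsamples and the tool that upsamples disagree about siting: every round trip nudges and smears the chroma a little more.

    import numpy as np

    # One row of a chroma plane with a sharp edge.
    N = 64
    row = np.where(np.arange(N) < 20, 16.0, 235.0)

    def down_between(x):
        # Tool A: averages each pixel pair, i.e. chroma sited *between* pixels.
        return (x[0::2] + x[1::2]) / 2.0

    def up_cosited(s):
        # Tool B: upsamples assuming chroma is co-sited with the left pixel,
        # linearly interpolating the in-between samples.  The half-pixel
        # disagreement with down_between() shifts and blurs the edge a bit
        # more on every generation.
        up = np.empty(2 * len(s))
        up[0::2] = s
        up[1::2] = (s + np.roll(s, -1)) / 2.0
        up[-1] = s[-1]
        return up

    x = row.copy()
    for gen in range(1, 6):
        x = up_cosited(down_between(x))
        print(f"generation {gen}: max chroma error = {np.abs(x - row).max():.1f}")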
Bottom line is that if you know exactly what you are going to do with the photo in the future you can compress a lot and quite easily with little risk. If you have a lot of unknown processing in the future, things can get dicey.
4:4:4 sampling is necessary to preserve color distinction in screenshots (otherwise your icons and text may change color), but since very few photographs will have single-pixel color variation, 4:2:0 is usually sufficient. Don't get me started on 4:1:1, though... I'm glad to be rid of the chromatic curse that was miniDV.
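If you want to see the screenshot problem for yourself, here's a quick sketch with Pillow (the output filenames are just placeholders): draw a one-pixel red line on white, save it both 4:4:4 and 4:2:0, and look at what colour the line comes back as.

    from PIL import Image, ImageDraw

    # A 1-pixel vertical red line on white: typical "screenshot" content.
    img = Image.new("RGB", (64, 64), "white")
    ImageDraw.Draw(img).line([(32, 0), (32, 63)], fill=(255, 0, 0))

    # Pillow subsampling codes: 0 = 4:4:4, 2 = 4:2:0.
    for name, sub in (("line_444.jpg", 0), ("line_420.jpg", 2)):
        img.save(name, "JPEG", quality=90, subsampling=sub)
        print(name, Image.open(name).getpixel((32, 32)))

The 4:4:4 version should come back close to pure red; the 4:2:0 version noticeably washes the colour out, because the line's chroma gets averaged with its white neighbours.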
Given that they claim to perform perceptual optimization, I would expect this. Human perception wouldn't come into play if they were just optimizing the image.
I converted a 99%-quality image and it came out 4.4 times smaller (identify reports 72% quality). Then I converted the same original to 72% quality with ImageMagick and got a similar compression ratio. The artifacts look similar to those from the JPEGmini algorithm.
More interesting would be a blind comparison against ImageMagick-compressed images with a similar compression ratio. I'd be surprised if they could get 25-50% better compression at similar quality.
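Something like this would at least put a number on "the artifacts look similar" before doing a proper blind test (a rough sketch; the filenames are made up, and RMSE is obviously not a perceptual metric):

    import numpy as np
    from PIL import Image

    # Per-pixel RMSE of each recompressed file against the original
    # (assumes all three images have the same dimensions).
    orig = np.asarray(Image.open("original.jpg").convert("RGB"), dtype=np.float64)

    for name in ("jpegmini_output.jpg", "imagemagick_q72.jpg"):
        test = np.asarray(Image.open(name).convert("RGB"), dtype=np.float64)
        rmse = np.sqrt(np.mean((orig - test) ** 2))
        print(f"{name}: RMSE = {rmse:.2f}")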
> In my experience, one should stay away from Israeli tech startups. They're often very aggressive and fly-by-night operations. The culture clash with a North American firm is hard to deal with.
Some are, most aren't. There are scammers and hucksters everywhere. It is always easier to scam people when they don't really know you, a fact that can benefit foreign start-ups in the US market.
That said, regarding Israeli companies, I personally object to those that operate from illegal Israeli settlements in the occupied Palestinian territories, such as SodaStream and Ahava Cosmetics. SodaStream is particularly bad in that it produces its goods using underpaid Palestinian labor but markets its products as made in Israel.
Better JPEG sells a library, Photoshop plugin, and (relatively simple) standalone program that allow lossless editing (only changed blocks are recompressed). http://www.betterjpeg.com/
Someone with that name posted to the WebP list about JPEGmini; they came across as a bit of a kook (though it could have been a language issue, I suppose):
I'd like to run it through errorlevelanalysis.com, but their service seems to be down.
I suspect it tries to raise the brightness and sharpness of certain pixels before compressing it further so "important" elements stay in detail. What was done to your image though looks nothing like the quality of their examples for the ship, etc.
If the compressed version appears sharper than the original file, surely that's an artifact? It might be pleasing in some contexts, but it's not an accurate representation of the original in that respect.
But the compressed version actually exposes more detail.
No. It really doesn't.
It adds a couple of visual filters (brightening, sharpening) before recompressing, but these don't "expose more detail"; rather, all three steps introduce additional errors. Errors that trick the eye into seeing a "better" image, but errors nonetheless.
JPEGmini does not apply any filters to the photo, no pre-processing whatsoever. JPEGmini went through BT.500 certification; the result was that, given the source image and the recompressed one, testers could not tell which was which. Enjoy.
I'm not sure what BT.500 certification is, but I can certainly tell the difference on some (not all) images once they've been recompressed.
Are you attached to JPEGmini? If so, would you be willing to bet that I cannot tell if a photo of my choice has gone through your system and been compressed?
I'm on a 3G connection today, so relied on this thread for before/after images. Unfortunately, it seems that the original poster had labelled his photos the wrong way around...
Wow, the Dropbox sample photo Costa Rican Frog was reduced 4.6x and actually looks sharper than the original. The detail in the middle of the back appears to have more contrast in the minified version. Other than some sharpness and some minor artifacts in the eye, the before and after images look identical. Except the original is 346KB and the JPEGmini version is 75KB.
looks sharper than the original ... more contrast in the minified version
That's not a good thing, to my way of thinking. When I'm processing my photos, I'm choosing very carefully the amount of sharpness and especially contrast in the image. I'd be very unhappy with a tool that alters these.
It looks to be a per-image effect and not intentional. For most images I tried, there was no apparent difference. Then again, most images only saw about a 30% saving. For an almost 80% saving, I'm not surprised if there's no free lunch.
It's impressive, indeed! I guess they use some feedback loop when recompressing, because there doesn't appear to be any quality loss, even in the smallest details. I tried a 1.4MB file of an Australian nature landscape; recompressed, it went under 400KB. I wonder if there's an offline tool available. For free, if possible :)
Wow, it looks like they've caught on to something. I tried a few ~1.2 MB scans of film negatives and it shrank them 5x and 6x with no differences that I can see. When I tried smaller files, however, the reduction dropped to ~1.5x with some more obvious noise removal, so I think it's just exploiting inefficiencies in how large JPEGs are encoded.
I also love the sense of humor. "SEEMS YOUR PHOTO LOST SOME WEIGHT!"
This would be interesting if it weren't a web service. I would like an executable with command-line access so I could run it in batches. I don't want my images stored or processed on their server. Surely they can come up with a license price.
A more web-ish example: a static 900x400 JPEG banner of 46KB was compressed to 35KB (a 1.3x reduction). Another similar banner of 43KB came out at 29KB (1.5x smaller). This is very good!
Does what it says. It took a 1MB image down to about 300kB. My only complaint is that you have to sign in (using Facebook/Google) or create an account to do batch uploads.
I wonder if one could figure out the algorithm by making a specially crafted image and recycling it through their service several times to see what it does where.
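A sketch of how that probing could work with Pillow (the generation filenames are hypothetical, each one downloaded after another pass through the service): diff consecutive generations and watch where, and how quickly, it stops changing pixels.

    from PIL import Image, ImageChops

    prev = Image.open("crafted_gen0.jpg").convert("RGB")
    for gen in range(1, 6):
        cur = Image.open(f"crafted_gen{gen}.jpg").convert("RGB")
        diff = ImageChops.difference(prev, cur)
        # getbbox() is None once two generations are pixel-identical;
        # getextrema() shows how large the per-channel changes still are.
        print(f"gen {gen}: changed box = {diff.getbbox()}, extrema = {diff.getextrema()}")
        prev = cur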