Free Lossless Image Format (flif.info)
1254 points by davidbarker on Oct 2, 2015 | 362 comments



This is really interesting. One of the most interesting parts is the progressive decoding/responsive images. http://flif.info/example.php (go to the bottom of the page).

Basically, the last example shows that if you want a scaled version of the image, you can simply stop decompressing. No need for multiple image files. Just create one file at very high quality, decompress until you reach the quality you want, and scale it down with HTML.

edit: more info on the responsive side of FLIF: http://flif.info/responsive.php


Also no more .thumbnails folders

I imagine images appearing as fast as I can scroll, with enough visual cues to find what I'm looking for far more quickly than with today's thumbnails. As a start, thumbnails could use FLIF.


I think some formats already support embedded thumbnails. (EXIF metadata can contain thumbnails, I believe)


But that is not the same. That is a small but fixed-size second version of the image, embedded in the same file.

With FLIF you simply read the first N bytes of the full image and have a reasonable preview. You choose how big or small N has to be, depending on the size of the thumbnail you want to show. Maybe first read N bytes for each image to get a quick but rough preview, then repeatedly read a few bytes more to enhance the thumbnail.


To be fair, that's not the same thing either. A thumbnail is resampled in a way to resemble the original at a smaller size. This will be resampled with (more or less) nearest neighbor, which means lots of aliasing and possibly looking nothing like the original, depending on the subject.


That's true. I tried a variant where the resampling would be better, but while it is possible, it hurts the compression rate significantly. At the level of a single step: if you have two pixels A and B that are both 8-bit numbers, then if you want some lossless way to store both (A+B)/2 (an 8-bit number) and enough information to restore both A and B, it will take at least one extra bit (9 bits, e.g. to store B and the least significant bit of A+B). So a 24-bit RGB image would become a 27-bit image when interlaced with better resampling (except for the very first pixel, which would need only 24 bits).
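To make that concrete, here is a minimal sketch of that one-extra-bit scheme (not actual FLIF code, just the arithmetic described above):

  #include <cstdint>

  // Store the 8-bit average plus 9 bits of side information, so both
  // original pixels can be recovered exactly.
  struct AveragedPair {
      uint8_t avg;      // (A + B) / 2, rounded down; usable as the preview value
      uint8_t b;        // B verbatim
      uint8_t low_bit;  // least significant bit of A + B, which the average drops
  };

  AveragedPair encode(uint8_t a, uint8_t b) {
      unsigned sum = unsigned(a) + b;
      return { uint8_t(sum / 2), b, uint8_t(sum & 1) };
  }

  void decode(const AveragedPair& p, uint8_t& a, uint8_t& b) {
      unsigned sum = 2u * p.avg + p.low_bit;  // A + B reconstructed exactly
      b = p.b;
      a = uint8_t(sum - b);
  }

Each step needs one extra bit per channel, which is where the 24-bit to 27-bit growth comes from.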

In practice, the simplistic resampling is not likely to be an issue -- of course you can create a malicious image which is white on the even rows and black on the odd rows, and then all previews would be black while they should be grey. But most of the actual images are not like that -- e.g. photographs. You can just decode at somewhat higher resolution and scale down from that. (You have to start from a power-of-two scaled image anyway.) Also note that Y is emitted at more detail earlier than chroma, so most of the error will be in those less important chroma channels. Other than that it's just Adam7 interlacing, but with no upper bound on the number of passes (so you could call it Adam-infinity interlacing).


I haven't checked, but I suspect that a better down scaling method can be used while still being compatible with the same decoding algorithm.


Hmm, how could it be? The pixel values have to fit into the eventual reconstructed image. If the values were different from any pixels found in the final image, it wouldn't be progressive loading; it would be several different-sized images embedded in one file.


(edited) After reading the author's comment above mine, you're very probably right.


Part of the beauty of FLIF is that there are no dedicated thumbnails, though.


Even as long as you don't work with FLIF directly (because tools don't support it), having the file browser's preview feature work using FLIF would be awesome. That way it wouldn't need to store n previews for the supported preview resolutions; it could store just one FLIF at the maximum preview size per document (psd, jpg, png, svg, pdf), and then, when you scroll three pages down, the system reads the first kB of each such preview to render it, reads the next kB to improve the image, and so on.

Sure, for actual flif files you would not have a separate preview file.


This would be great. A lot of "responsive" sites nowadays use only one image as well, the highest resolution available. So you are downloading that 4K jpeg whether you are on mobile or on a desktop.


Your example also shows why it needs to be left up to the browser: downloading a large JPEG makes more sense on an iPhone 6S+ with a 1080p display and LTE than the 1024x768 computers at the local school sharing a basic cable modem.

The part which would really make sense for this would be something like an extension to <picture> or <img srcset> which would allow you to provide the byte ranges for each resolution so the client could issue a standard HTTP range request for the level of pixels it actually needs. You'd still have one file to manage and CDNs could cache it intelligently rather than lowering hit rates by caching different versions.


I wonder if it would be feasible to have browser functionality that starts the download, then stops it once it has enough data to make an image the width of the element on the page. (Maybe then continuing the download later if it's resized larger?)


You could certainly build that in as a heuristic similar to how many browsers implement <video>: issue Range requests for a certain chunk size and stop once you have enough data. It's not as efficient as being able to get it all in one go but if you have a progressive image format that might be a better experience if it can start rendering quickly and fill details later.
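To sketch the idea (the URL, chunk size, and stopping rule here are all made up; this isn't any browser's actual behavior), something like this with libcurl:

  #include <curl/curl.h>
  #include <cstdio>
  #include <string>

  // Append each received chunk to a growing buffer.
  static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
      static_cast<std::string*>(userp)->append(data, size * nmemb);
      return size * nmemb;
  }

  // Placeholder stopping rule: a real client would look at element size,
  // bandwidth, decoded resolution so far, etc.
  static bool enough_detail(const std::string& buf) { return buf.size() >= 64 * 1024; }

  int main() {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL* curl = curl_easy_init();
      std::string buf;
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/image.flif"); // hypothetical URL
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
      curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf);

      const size_t chunk = 16 * 1024;  // request 16 KiB at a time
      for (size_t offset = 0; !enough_detail(buf); offset += chunk) {
          char range[64];
          std::snprintf(range, sizeof range, "%zu-%zu", offset, offset + chunk - 1);
          curl_easy_setopt(curl, CURLOPT_RANGE, range);  // standard HTTP Range request
          if (curl_easy_perform(curl) != CURLE_OK) break;
      }
      // hand `buf` (a prefix of the file) to a progressive decoder here
      curl_easy_cleanup(curl);
      curl_global_cleanup();
  }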


Progressive JPEG has supported this for decades. Anyone over 25 should remember them from dial-up years.


I think the suggestion is that the client could just stop downloading at a set threshold, maybe taking current bandwidth, cost, etc. into consideration.

Judging from the video, something like that could work very well (and much better than with progressive jpegs).


It's an interesting concept. But I'm not sure how it could work. Resizing introduces artifacts so best quality is always going to be achieved by reencoding for different resolutions. Also, how would the client know when to stop downloading?


The client would download the first 1 KB for an icon, 4 KB for a thumbnail, and the whole image for the full size. The best part is that all of those are actually the same file, not separate embedded thumbnails inside the file.

The client could decide BY ITSELF to download only 2 KB for a thumbnail because it's currently on 3G.


Where are you getting 1KB and 4KB from? The compression is highly dependent on the image so there's no way to know the quality of the thumbnail image.


Maybe a future version of the format could specify, inside the header, useful cut-off points for browsers to make the decision. Then the browser could decide to download the 4 KB version on wifi and the 1 KB version on 3G.


You'd need to set the thresholds in the file manually, based on the artist's judgment of quality.

It would take a few extra seconds when creating it, but could save both bandwidth and create a better user experience on mobile.


This will have to be automated or it's going to be really annoying for development. To eliminate that, maybe the file format could incorporate some internal tags for good size thresholds, or even just a handful of coefficients at the beginning for a simple curve approximating the byte-to-resolution mapping.
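As a purely hypothetical illustration of the "few coefficients" idea (none of these field names or numbers exist in the format today), the header could carry a tiny power-law curve:

  #include <cmath>
  #include <cstdint>

  // Hypothetical header fields: two coefficients of a curve that roughly
  // maps "fraction of full resolution" to "bytes needed".
  struct ByteCurve {
      double scale;     // approximate size in bytes of the full-resolution file
      double exponent;  // how quickly detail gets cheaper at lower resolutions

      uint64_t bytes_for(double resolution_fraction) const {
          // e.g. scale = 80000, exponent = 1.7, fraction = 0.25 gives about 7.6 kB
          return uint64_t(scale * std::pow(resolution_fraction, exponent));
      }
  };

The encoder would fit those two numbers once at encode time, and clients would never need per-file manual thresholds.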


But how is that different from progressive JPEG?


And then there was the magic of watching interlaced GIFs download, before the era of JPEG.


However, JPEG is lossy.


So is a FLIF that you only download half of.


If compared to the original yes, but that's apples to oranges. They are different resolutions.

It'd be more accurate to compare it to a pre-generated resizing of the original image (at the same effective resolution as the "half downloaded" FLIF).


A compressed image which does not reconstitute to the original image is by definition lossy. You can't scale a PNG down by 25% and call that "lossless compression".

Though I'll gladly take your logic and call the first 1/64th of a progressive JPEG lossless (which it is, since at that point you only have the DC offsets of basic blocks, which undergo no lossy compression).


I think JPEG's color transform is lossy, so even those basic blocks are somewhat lossy. The DC value is a better approximation of the entire 8x8 block than the exact pixel at the top-left position of the block (which is what FLIF and PNG with Adam7 interlacing use), but it's still lossy. If you took an image, magnified it by a factor of 8 in both directions so each pixel gets blown up to an 8x8 block, and then converted it to JPEG, there would still be some information lost.
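A quick way to check: round-trip sampled RGB values through the standard 8-bit JFIF YCbCr formulas and count how many don't come back bit-exact (a sketch, not libjpeg's actual code path):

  #include <cmath>
  #include <cstdint>
  #include <cstdio>

  // Clamp and round a value to an 8-bit channel, as an integer codec would.
  static uint8_t clamp8(double v) {
      return v < 0 ? 0 : v > 255 ? 255 : uint8_t(std::lround(v));
  }

  int main() {
      int mismatches = 0, total = 0;
      for (int r = 0; r < 256; r += 17)
          for (int g = 0; g < 256; g += 17)
              for (int b = 0; b < 256; b += 17) {
                  // JFIF forward transform, stored as 8-bit integers
                  uint8_t Y  = clamp8( 0.299    * r + 0.587    * g + 0.114    * b);
                  uint8_t Cb = clamp8(-0.168736 * r - 0.331264 * g + 0.5      * b + 128);
                  uint8_t Cr = clamp8( 0.5      * r - 0.418688 * g - 0.081312 * b + 128);
                  // inverse transform back to RGB
                  uint8_t r2 = clamp8(Y + 1.402    * (Cr - 128));
                  uint8_t g2 = clamp8(Y - 0.344136 * (Cb - 128) - 0.714136 * (Cr - 128));
                  uint8_t b2 = clamp8(Y + 1.772    * (Cb - 128));
                  ++total;
                  if (r2 != r || g2 != g || b2 != b) ++mismatches;
              }
      std::printf("%d of %d sampled colors change after one YCbCr round trip\n",
                  mismatches, total);
  }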


It's lossless (in theory) when compared to a 1/64th scaled version of the original image (which is what was clearly meant).


Small screens don't always have small resolutions. There are more pixels on an iPhone than on my laptop!


I don't feel like that page is entirely fair.

The PNG and GIF comparisons are unfair; they're the same low resolution as FLIF, but given nearest-neighbor upsampling instead of bilinear. Most web browsers use bilinear or bicubic upsampling for progressive pictures now, so FLIF isn't nearly as unique as it presents itself. Aside from that, progressive is only useful if you're on a slow connection or the file is broken; in other cases re-rendering multiple times becomes a bottleneck and battery drain. In general it's rare that progressive decoding is a win anymore (although progressive encoding of JPEG is always a win).

In another comment I pointed out that JPEG 2000 can be encoded to any size, even 1K, even though the q-scale doesn't go that low, by using the target size parameter instead. It looks better than anything but BPG and is fully progressive. Of course, J2K is a dead format outside of DjVu, PDF, and the medical field, but it serves as a useful baseline.

A comparison to JXR would have been nice; it's an open, patent-indemnified format that fully supports progressive decoding, and compresses about as well as WebP. That format's the biggest disappointment for me, I thought it had everything in place to displace JPEG.

BPG could be made somewhat progressive, but only by changing the underlying HEVC bitstream, while one of its goals is to stay as close to standard HEVC as possible. The inclusion of a thumbnail is the only recourse.


You're right about the interpolation method (bilinear vs nearest-neighbor), it's indeed not a fair comparison in that respect. However, I do think that most PNG viewers use nearest-neighbor (at least that's what my viewer did when I gave it a partial PNG file).

But it's not just the interpolation that makes a difference: PNG interlaces full RGB pixels (all 3 channels at the same time), while FLIF gives priority to luma so you get a chroma subsampling effect on partial files. Also GIF only has vertical interlacing and PNG only does 7 passes of 2D interlacing, which means that for huge images, the first pass can still take a while.

Another point: I am proposing to use progressive decoding to avoid having to download the full lossless image (so a browser supporting FLIF would need some mechanism to stop downloading the image file when "enough" detail is available). Repeated re-rendering is not necessary. E.g. on a mobile phone you would rather just download some prefix of the file (the size of the prefix could depend on the available bandwidth) and render it once, at least until the user zooms in on the image, or saves or prints it or something -- then the download should be resumed in order to load more/all detail.


Makes me think of the web of old, where sometimes an image would load in multiple passes.


I assume you're talking about progressive JPEGs. These are still fairly common in my experience. It's just that with today's typical connection speed, you'll rarely notice.


There's also progressive PNGs. Progressive JPEGs have the advantage that they increase compression ratio (though they're more expensive to decode, as you need multiple passes and refresh)


They increase compression ratio? That's counter-intuitive. You're effectively imposing an ordering requiring certain information to be available first, so you'd expect the compression to be at most as good as non-progressive. I guess the changed ordering makes the statistics simpler and easier to compress, or something like that.


The downside is memory usage during compression. A non-progressive jpeg can be compressed locally but a progressive encoding requires the whole image. But this is only a small downside.

(Converting JPEG images to progressive format is one of the optimizations mod_pagespeed makes.)


The initial passes are lower frequency, so they compress better. Moreover, you have control over them when encoding so you can optimize there (and there are tools that do so), but the defaults used by all encoders are very good across almost all images.


That part is pretty cool. If this were widely deployed, I could imagine this making the use of responsive images much easier.


JPEG has something cooler: you can first load a smaller black-and-white picture, then load more color and more detail.


Thinking back to the old days, are you sure the black and white image you have in mind didn't come from the "lowsrc" attribute of an img tag? Progressive jpeg is typically full colour from the start, and it does offer the progressive enhancement of resolution that you mention.


Interesting, just last week I had a use for lowsrc for the first time in years and tried to use it, only to find out it had been removed from all browsers a while back.



Please be aware that w3schools is a terrible resource loaded with inaccuracies and misinformation. It's the National Enquirer of web development. I'd recommend using MDN instead: https://developer.mozilla.org/en-US/.


Look at what the JS is actually doing; lowsrc has been obsoleted in HTML5: http://www.w3.org/TR/html5/obsolete.html



It depends on the encoder; the first pass could very well be just Y.


I'd call that "grayscale" rather than black and white, but interesting suggestion. I've never seen it done but looks like all sorts of things are possible: http://hodapple.com/blag/2011/11/24/obscure-features-of-jpeg...


Here's the Mandrill test image from SIPI and Utah as a 75% progressive JPEG with just the first scan of the most significant bit of just the DC for Y only, which is a totally expected scheme for the web back in the '90s:

https://lh3.googleusercontent.com/-4tX6mfFoY4c/Vg7bg85RwcI/A...


What do you mean by "totally expected"? A Turing machine "totally expects" an infinite ribbon.


Back in the '90s I was doing work funded by DEC and later NSF with JPEG, and one of the things I did was find a decent way to store them at a smaller size and resolution. I found that having the first scan be just the MSB of the DC of Y, and then later the MSBs of the other two DCs, was the same as having the first be all three, give or take two bytes, over more than 90% of our corpus, and it let something display in about half the time (when a 9600 baud modem was the target).

It's been a long time, but I think what I ended up with was: first scan, MSB of Y DC; next, six bits of Y AC; then MSB of Cb and Cr DC; then a scan with a few bits of Cb and Cr DC and AC; and finally the rest. The idea was that for a B&W, greyscale, or color thumbnail the same JPEG would be used but only the first N scans sent, followed by 0xffd9, with a width and height in the img tag. Anyway, I can't be the only one that figured out in the days of SLIP over modems that doing this trick was a good idea.


FLIF also emits Y first (actually alpha first), before the chroma channels.


Who needs mipmaps?


The GPU! Skipping those pixels hurts the cache, even if swizzled :)

And my eyes - shimmering car grills and garden fences begone!


Wow, okay, this comment and its parent lost me :)


Rough context: when the GPU (or the CPU, back in the day) has to draw a texture, it has to sample pixels (maybe one, maybe four), do some filtering between them (say, bilinear), and then use the result as the output pixel.

Now several problems:

1. If your texture is sampled at roughly one texture pixel per screen pixel, and all pixels are read linearly ("horizontally"), then you are good with the cache: for the first pixel the cache may be cold at that address, but the hardware loads the neighboring pixels just in case you need them. Here is the catch: you only win if you always actually use them (it's like always using your coupons, deals, and employer perks). So the CPU/GPU might read 4, 8, 16, who really knows how many, bytes in advance or around that pixel.

2. But then you rotate the texture 90 degrees, and suddenly it's much slower. You draw one pixel from the texture, but the next pixel is 256, 512, or more bytes away ("vertically"), and the next one too. The cache line of 4, 8, 16, 32, or 64 bytes you read goes unused, and by the time you might need it again, the cache has discarded it. Hence the slowness: much, much slower!

4. To fix it, you come up with "swizzling": instead of storing the texture purely scan line by scan line, you split the image into blocks (maybe 8x8 tiles, or 32x32) and make sure those tiles are written out linearly, one after another (see the sketch after this list). You can go even further, but the idea is that no matter the angle you read at, you most likely hit pixels from a cache line you've already read. It's not that simple, and my explanation is poor, but here is someone who explains it better than me: https://fgiesen.wordpress.com/2011/01/17/texture-tiling-and-...

8. But even swizzling, tiling, whatever you call that magic that keeps nearby pixels really together in every direction, stops working as soon as you have to draw that big texture/image at a much smaller scale.

16. Say a 256x256 texture has to be drawn as 16x16, and you say: I don't need mipmapping, I don't need smaller versions of my image, I can just use my own image. Well then, no matter how you swizzle/tile, you'll be skipping a lot of pixels, hopping from here to there, and wasting cache lines.

32. For that reason mipmaps are here to help, but stay with me for one more minute and see how they also almost fix the shimmering problem of textures with "fences", "grills on a car", a pet's cage, or something like that.

64. And when you hear the artist is ready to put a real Spider-Man logo on the character made out of real polygons, a real grill in front of the car made out of real polygons, and a real, very nice looking barbed-wire fence made out of polygons, then stay away: those polys will shimmer, no good level of detail can be done for them (it'll pop), and such things are just much more easily done with textures and mipmaps. It kind of solves a lot of problems.

128. Read about impostor textures.
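And here, roughly, is what the swizzling in point 4 can look like in code: a minimal Morton-order (Z-order) layout, which is one common way to do this kind of tiling (real GPUs use their own, often undocumented, layouts):

  #include <cstdint>
  #include <vector>

  // Interleave the low 16 bits of x and y into a Morton (Z-order) index,
  // so texels that are close in 2D stay close in memory in every direction.
  static uint32_t morton2d(uint32_t x, uint32_t y) {
      auto spread = [](uint32_t v) {
          v &= 0xFFFF;
          v = (v | (v << 8)) & 0x00FF00FF;
          v = (v | (v << 4)) & 0x0F0F0F0F;
          v = (v | (v << 2)) & 0x33333333;
          v = (v | (v << 1)) & 0x55555555;
          return v;
      };
      return spread(x) | (spread(y) << 1);
  }

  // A swizzled texture; a square, power-of-two size is assumed here,
  // which is the usual requirement for this kind of layout.
  struct SwizzledTexture {
      uint32_t size;
      std::vector<uint32_t> texels;  // one 32-bit RGBA value per texel
      explicit SwizzledTexture(uint32_t s) : size(s), texels(size_t(s) * s) {}
      uint32_t& at(uint32_t x, uint32_t y) { return texels[morton2d(x, y)]; }
  };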


JPEG 2000 has that same feature.

FLIF claims it will beat the lossless compression ratio of JPEG 2000.


It also claims to beat the lossy version, which cannot even be encoded for the transparent fish image at the tested file sizes: http://flif.info/example.php

[ed. image size → file size to emphasize that they had a target byte count for the purpose of comparison]


JPEG 2000 is capable of compressing 1969x1307 images without issues. I work with JPEG 2000 codecs every day. It is commonly used in Virtual Microscopy with images in the tens of gigabytes as well as with small 256x256 MRI. I don't know why the author does not present "JPEG 2000 at this size".

Edit: I was confused since with most coders you can specify precise mean squared error optimal truncation points (called quality layers) and it should have been very easy to obtain a stream of any particular file size.


That example is looking at a specific file size. So what he's saying is that he was unable to produce a JPEG 2000 image in a file that size from that image, not from an image of that resolution in general.


He didn't try hard enough. OpenJPEG has a file size parameter, which can go well below its -q 0 size. I was easily able to generate a 16.9K jp2 by specifying the size: https://dl.dropboxusercontent.com/u/54412753/doom9/fish.jp2 which in png looks like: https://dl.dropboxusercontent.com/u/54412753/doom9/fish.jp2....

Looks better than anything else but BPG at that size, to me, although FLIF obviously isn't optimized for lossy.


Thanks for pointing this out! I only tried ImageMagick convert at minimum -quality. Sorry about that.


One interesting thing from that page is that GIF89a allows interlacing. So it really can be that every eighth line is displayed very early on. Moreover, there is a notion of image blocks, so for that particular image, taking 64x64 blocks for example, you could encode the fishy portions first. Also, if you used 16x16 blocks you could have true color, though the file would be very large; there would be some improvement from running it through gzip over HTTP.


I'm not picking on FLIF, it may well be better! But the concept of progressively decoding for different quality levels isn't what makes it new.


If you want to use a similar feature -today-, there's this service called Imgix: https://www.imgix.com/

Basically, it allows you to use responsive images using 1 single master image. Makes for a really snappy user experience, and it's very easy to integrate into any project, new or old.


Free & Open Source version: http://thumbor.org/


To clarify: at the moment FLIF is licensed under the GPL v3+. Once the format is finalized, the next logical step would be to make a library version of it, which will most probably be licensed under the LGPL v3+, or maybe something even more permissive. There is not much point in doing that while the format is not yet stable. Just because FLIF is GPL v3+ now doesn't mean we can't add more permissive licenses later.

And of course I'm planning to describe the algorithms and the exact file format in a detailed and public specification, which should be accurate enough to allow anyone to write their own FLIF implementation.


You're of course free to choose whatever license you believe is appropriate for your project, but I can almost guarantee that your project will not see widespread adoption if a GPL or otherwise copyleft-licensed implementation is the only one available.

Game development projects, in particular, will avoid it, as will almost anything that wants to publish an application to the Apple App Store or Google Play.

You may wish to consider whether adoption of a standard implementation is more important than other goals you may have considered in choosing a license.


Please read the comments you reply to:

"Once the format is finalized, the next logical step would be to make a library version of it, which will be most probably get licensed under the LGPL v3+"


You can't use LGPL libraries in the iOS App Store. (Well, I'm no lawyer but that was what I've been told in the past)


LGPL is only marginally better (at least as it concerns perceptions, if not reality), especially when you factor in app platforms where shared libraries aren't commonly used.

Basically anything copyleft is going to have a really big dropoff in use compared to say BSD, MIT or Apache 2.0.


I did, hence why I referred to "copyleft".


Except that game developers are already using LGPLv3, especially on the PC market which is only worth $25 billion in size.

A key attribute for companies in a heavily competitive market is that they have to use every tool available to get the best product out in the least amount of development time. If a company decides to avoid a license just for religious reasons, 10 other companies will jump into its place and out-compete it.

A good example of this is when we see companies reach out to developers directly to get permission; the time spent doing that is worth it compared to having developers spend time on unnecessary code. Large AAA games can easily have a long credits list with every license imaginable being used in a proprietary product, including LGPLv3.


There are always exceptions; we are talking about the general case. Anecdotally, in my professional experience every commercial developer I've worked with or am aware of will always avoid any copyleft-licensed material, and many license agreements for third-party platforms prohibit their use.

Yes, some platform holders will use it themselves, but that is not their preference. Ultimately, image encoding and decoding is a crowded market with many alternatives, and I strongly believe that a copyleft-licensed component will always be passed over in favor of alternatives unless it is overwhelmingly compelling and a viable option.

Some companies explicitly prohibit the use of any copyleft-licensed software in their development.

Certain conditions of the GPL and LGPL are impossible to fulfill on some platforms.

With that in mind, I feel like my assertions remain reasonable.


BSD/MIT, or ASL2 are pretty much the standard if you want to see something as widely used as possible, which this sounds like might be the case.

Cool work in any case!

One thing the page could use is "how much processing power does this use, compared to other things?"


Yeah, GPLv3 AND LGPLv3 are the kiss of death for any corporate use. The patent clause in GPLv3 makes it legally impossible [1] to be used by any corporation that licenses patents; it's a completely broken clause, and GPLv3 should be banished.

Without a BSD/MIT/ASL2 license as an option, you'll never see this in Internet Explorer or Chrome. Probably not even Firefox.

[1] Companies license patents in bulk from other companies for their own use all the time. They don't have the right to sublicense those patents to others; they're just protected against any lawsuits relevant to the use of the ideas in those patents. Yet GPLv3 requires that they provide a free license to any patent that they have a license to that might be required to use GPLv3 source code. So it's requiring them to do something they legally can't do. Selecting GPLv3 means that no large company will ever touch it as a result.


You are wrong, and a drama queen.

The only thing the (L)GPLv3 patent clause obliges propagators to do (i.e. not mere users, not even mere modifiers) is to grant their users licenses to applicable patents which they own. The permissive Apache License v2 demands the very same from contributors. It is software patents that should be "banished", not freedom-preserving licenses.

TL;DR: (L)GPLv3 prevents patent trolling through free software.

Disclaimer: This is not to be treated as legal advice.


You're not a lawyer. Lawyers at big companies accept Apache's demands and reject categorically GPLv3's.

Doesn't matter what you or I think of software patents (yes, they should be banned). Doesn't matter what you or I think GPLv3 says.

The lawyers at big companies see it as a problem, so it's a problem. End of discussion. No drama required; it's just the fact that big companies avoid using anything cursed with GPLv3.


Unlike you, I have at least taken the time to read the relevant license parts before discussing them.

Note that what you've argued before was very different from what you're saying now. You've narrowed the scope of the discussion (leaving out LGPL), but also its very nature ("legally impossible [1]", eh?).

Anyway, companies do use software licensed under both licenses, and even incorporate them into their services - hence the need for AGPL. Maybe others wouldn't see GPLv3 as that much of a problem if people didn't spread FUD about its supposed "curse" ("no drama required" but you couldn't help it, huh?). And if they didn't defend harmful practices, like Linus does tivoisation.

But mostly what companies avoid is copyleft, because it mandates reciprocity and prevents leeching the community. For projects such as these, LGPL is an acceptable compromise. The only valid argument against it is that apps under incompatible licenses will not be able to use it where dynamic linking is barred, and such is the requirement for apps in the Apple's store. However, in this particular case, that wouldn't be a problem either if the platform itself provided a decoder, like iOS does for PNGs.


>Doesn't matter what you or I think GPLv3 says.

I have read the license. I'm like that.

And it still doesn't matter.

It's what the lawyers think. And they say it's verboten.


There are only so many options here. Either:

* You are lying that you read the license.

* You were lying about what the license says.

* You really don't want to admit that you have misunderstood the license.

Either way, you were wrong then and you are wrong now about what the lawyers think. Speaking of which, there are only a few possibilities here as well, only these are not mutually exclusive:

* You are intentionally dishonest because you have an agenda

* You are dishonest just to cover your behind

* You are genuinely careless about what you say

Even if we change the word 'think' for 'say', it's still a gross overgeneralization.

So all things considered, in the best case scenario, you refuse to admit when you are wrong, and will continue overgeneralizing. Forgive me, but it's really not worth the effort arguing under these circumstances. If you wanted to continue, you would have to make some concessions, but I doubt you will, so in all probability: Goodbye.


>You were lying about what the license says.

I was reporting what I was told by corporate lawyers. My own reading of the patent section does happen to side with the lawyers' reading: that if you distribute an app that's protected by a patent you own a license to, you need to arrange a sublicense for all users of that software. Maybe not technically "impossible," but I didn't count "spending millions of dollars to fix the problem" among the likely corporate responses when I said "impossible." Especially when most GPLv3 code can be written from scratch for less than the cost to license patents.

Someone alleged Blizzard uses it; fine, their lawyers either disagree, weren't consulted, or are being ignored, but Blizzard doesn't make Chrome, Firefox, or Internet Explorer, so the point is moot if you care about web adoption, which would make the format relevant to anyone but a game developer.

What matters is that the lawyers for the big companies that control Chrome and IE won't let GPLv3 code into the code base. Many other big-company lawyers take the same position (probably all companies above some size threshold), and that's all I've been alleging from the start. Criticize my delivery all you want, but that's what I was trying to say.

My agenda is to get the developers to change to a license that could actually be adopted into a web standard. Since you're refusing to actually read what I'm saying, I agree: Goodbye.


What would cost millions of dollars? Please cut the drama out already and limit yourself to arguments. You're finally starting to display understanding of the patent clause.

Companies wouldn't adopt things under "GPLv3", but they wouldn't permit GPLv2 either. Or LGPL. Or Apache 2. Or MIT, or BSD, or any license. They permit nothing short of contributors assigning them copyright and the patents, just to them (see e.g. WebKit's and Chromium's copyright notices and CLAs). And yet, libpng is under a license. So yeah, I agree they'd write their own library, out of their selfishness. Let them.

With the "adopting a web standard" thing you're attempting to further move goal posts. But you fail, and not because your implication that standard bodies would accept permissive licenses is wrong - which it is, because they're exclusively public domain + patent clause (oh and the people building browsers still contribute somehow). You fail because you're mixing apples and oranges again; programs are not parts of standards. Standards describe file formats, and prescribe behavior of programs that process them. They are not concerned with implementations' licenses.

The spec can become a public domain standard, and all would still be well with the library under (L)GPLv3+. Free software should have the edge.


Someone will have to call up Blizzard and tell them they received the kiss of death by using LGPLv3 libraries in StarCraft 2. Without patents to do XML parsing or image handling, how can a game like that ever be sold with commercial success? The game only sold 5 million copies, and another million for the expansion, so that can't possibly have earned the company enough revenue to support its employees and investors.


Maybe Blizzard is paying patent trolls or otherwise buying patent protection for something stupid like XML parsing that should never have been awarded a patent; a lot of companies do, since it's cheaper than fighting. But there are tons of library options to do XML parsing and image handling that don't involve (L)GPLv3. Can't imagine why they WOULD use it.

If they are using LGPLv3 libraries, then presumably their lawyers are OK with it, or they failed to run it past legal. Regardless, it's completely irrelevant to my point what Blizzard uses or doesn't use.

If you want to see a lossless compression format on the Web, you need the format to be picked up by, at a minimum, Google and Microsoft, the owners of the two top browsers in the world today.

Firefox would also be important, but would certainly follow if Google and Microsoft stepped up to support it. So really it only matters what Google and Microsoft think. If it's a license that they can freely use, then there's a chance it becomes a new Web standard. If it's only available LGPLv3, then Microsoft and Google would need to buy LGPLv3 exceptions in order to use it. A much bigger barrier to entry, and at least Google might object based on the concept that Web standards should be open. Mozilla would certainly resist if they weren't open -- though they eventually caved on H.264.

If you want to see adoption where it counts, you need cooperation from the companies behind the browsers with the market share.

No one is expecting Starcraft 2 to be the next big Web plug-in, so it's irrelevant to the product that they have LGPLv3 code in it (if they do). And if they do, Blizzard had better hope that their lawyers are right in their interpretation of GPLv3, and that no one sues them in the hope of forcing them to buy a license to get around restrictions they may or may not be violating, and of taking home a share of the profits you've pointed out they are rolling in. That's an expensive lawsuit even if they win.


I use MIT because I want the license never to be an issue for somebody taking it and using it for their own needs. Do you know of any reason why people would object to MIT, or is that the best one for a plugin/library?


The MIT licence does not contain a patent grant, so users of the plugin/library could still be sued for patent infringement. The Apache2 licence does contain a patent grant, and GPLv3 follows its lead in that regard.


> It's not because FLIF is GPL v3+ now, that we can't add more permissive licenses later.

That's only guaranteed if you obtain copyright assignment from any contributors now. Otherwise, re-licensing will be a massive headache, as you'll need to track down all the copyright holders for permission. Some might refuse or have dropped off the map.


Right now, N=2. As of 9 hours ago, it was N=1. I think he's okay at the moment. https://github.com/jonsneyers/FLIF/commits/master


I don't know, at this rate, in 9 days he'll need agreement from everyone in the world!


If that's the case, you should strongly consider setting up a CLA for your project so that you're free to switch licenses without any hassle when the time comes.

https://www.clahub.com/pages/why_cla


Done.


It would be useful if the metadata included hints for responsive downloads. E.g.: for 1/4 resolution download the first 10,000 bytes; for 1/2 resolution download the first 25,000 bytes.
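Something like this, perhaps; a hypothetical hint table (FLIF doesn't define anything like this today, the field names and layout are made up):

  #include <cstdint>
  #include <vector>

  // Hypothetical "download hints" metadata: one entry per useful cut-off point.
  struct DownloadHint {
      uint8_t  denominator;  // 4 means "1/4 of full resolution", 1 means full
      uint32_t byte_count;   // how many bytes of the file to fetch for that level
  };

  struct ResponsiveHints {
      // Sorted from largest to smallest denominator, e.g. {8, 4, 2, 1}.
      std::vector<DownloadHint> hints;

      // Smallest prefix that gives at least the requested fraction of full resolution.
      uint32_t bytes_needed(uint8_t denominator) const {
          for (const DownloadHint& h : hints)
              if (h.denominator <= denominator) return h.byte_count;
          return hints.empty() ? 0 : hints.back().byte_count;  // fall back to the full file
      }
  };

A mobile client could call bytes_needed(4) and fetch, say, the first 10,000 bytes, while a desktop asks for bytes_needed(1) and gets everything.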


This is a really good idea.


Very neat stuff. It would be nice to support 16/32-bit floating point for true HDR as well, if it can be easily accommodated.


Can be done. Basically just change the type of ColorVal and implement a float variant of writer() and reader() in maniac/symbol.h.


I want to chime in to say that the lack of a common image format (and displays, for that matter) with true support for high-range photos is really holding back digital photography in my humble opinion.


Nah, it's the camera makers that are holding things back by sticking to proprietary formats and in some cases obfuscating them. There's OpenEXR, HDR, and DNG (the last of which is a true RAW format). The big issue with creating a true interchange format is that RAW files don't have proper color information for all pixels (they instead have red, green, or blue data for any given pixel, sometimes deliberately blurred, etc.), and interpolating the values to produce reasonably nice faux color for each pixel is actually very hard, and the algorithms the different companies use are presumably protected by patents.


“nah” is a funny way to say “yes, and”


> WARNING: FLIF is a work in progress. The format is not finalized yet. Any small or large change in the algorithm will most likely mean that FLIF files encoded with an older version will no longer be correctly decoded by a newer version. Keep this in mind.

This seems so... avoidable. Maybe the FLIF format could include a version number declaration?


This makes sense for versions after the format gets finalized.

If they do it for pre-finalization versions, then they (and other implementations) will have to keep supporting them, which probably doesn't make sense considering how small the corpus of images in this format likely is.


No, they can just reject prerelease versions without attempting to decode them, which is much more elegant (and safer).
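For example, a decoder could do something like this before touching the bitstream (the header layout here is made up, not FLIF's actual format):

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  // Reject unsupported (e.g. pre-release) files up front instead of trying
  // to decode them. Hypothetical layout: 4-byte magic + 2-byte version.
  bool check_header(std::FILE* f) {
      char magic[4];
      uint8_t version[2];  // {major, minor}
      if (std::fread(magic, 1, sizeof magic, f) != sizeof magic) return false;
      if (std::memcmp(magic, "FLIF", sizeof magic) != 0) return false;
      if (std::fread(version, 1, sizeof version, f) != sizeof version) return false;
      if (version[0] == 0) {  // pre-1.0 bitstream: refuse cleanly, don't guess
          std::fprintf(stderr, "pre-release FLIF version %u.%u not supported\n",
                       version[0], version[1]);
          return false;
      }
      return true;
  }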


I, for one, applaud your license choice. Please also see my reply to SomeCallMeTim.


This is an exciting development.

I think the concerns about the licensing are a bit premature - honestly, this is currently a research project, not a practical replacement for existing image formats. The licensing is only one of several impediments to adoption.

- The format has no spec

- The format may change, rendering all previous images unreadable

- The format has no javascript implementation, and no way of running on old browsers.

- As far as I can tell, there hasn't been a patent search done to see if it violates other patents from other organizations. For example, this one: http://www.google.com/patents/US7982641

- No peer-reviewed paper published yet?

However that doesn't mean that we can't recognize the accomplishment - this looks like a very promising research result, and I hope the project continues! Good work Jon!


> - No peer-reviewed paper published yet?

While there's no paper, there's a full GitHub repo, which I'd actually say is quite impressive.


As someone who regularly has to read supposedly landmark papers - with no source code provided - this is far preferable in my opinion. Computer vision is terrible for this!


Screw software patents!


This is really interesting! There seems to be some additional technical information here:

https://boards.openpandora.org/topic/18485-free-lossless-ima...

  - for interlacing it uses a generalization of PNG's Adam7; unlike PNG, the geometry of the 2D interlacing is exploited heavily to get better pixel estimation, which means the overhead of interlacing is small (vs simple scanline encoding, which has the benefit of locality so usually compresses better)

  - the colorspace is a lossless simplified variant of YIQ, alpha and Y channel are encoded first, chroma channels later

  - the real innovation is in the way the contexts are defined for the arithmetic coding: during encoding, a decision tree is constructed (a description of which is encoded in the compressed stream) which is a way to dynamically adapt the CABAC contexts to the specific encoded image. We have called this method "MANIAC", which is a backronym for "Meta-Adaptive Near-zero Integer Arithmetic Coding".
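On the colorspace point: this is not FLIF's actual transform, but as an illustration of how a reversible integer luma/chroma decorrelation can work with no precision loss, here is YCoCg-R:

  #include <cassert>

  struct YCoCg { int y, co, cg; };

  // YCoCg-R forward transform: every step is exactly invertible in integers.
  YCoCg forward(int r, int g, int b) {
      int co = r - b;
      int t  = b + (co >> 1);
      int cg = g - t;
      int y  = t + (cg >> 1);
      return { y, co, cg };
  }

  // Inverse transform: undoes the steps above in reverse order.
  void inverse(const YCoCg& c, int& r, int& g, int& b) {
      int t = c.y - (c.cg >> 1);
      g = c.cg + t;
      b = t - (c.co >> 1);
      r = b + c.co;
  }

  int main() {
      int r, g, b;
      inverse(forward(200, 50, 30), r, g, b);
      assert(r == 200 && g == 50 && b == 30);  // round-trips losslessly
  }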


Great, thanks. It seems there is no technical description available anywhere beyond what you quoted. They haven't written it up, and I couldn't even find comments in the source providing details. That's too bad, but he (I assume that's Jon Sneyers) does say he hopes to write it up later.

Also, comments in that thread on speed [1]:

     In terms of encode/decode speed: both are slow and not very optimized
     at the moment (no assembler code etc, just C++ code). A median file
     took 3 seconds to encode (1 second for a p25 file, 6 seconds for a p75
     file), which is slower than most other algorithms: WebP took slightly
     less than a second for a median file (0.5s for p25, 2s for p75), PNG
     and JPEG2000 took about half a second. It's not that bad though: BPG
     took 9 seconds on a median file (2.5s for p25, 25s for p75), and
     brute-force pngcrushing took something like 15 seconds on a median
     file (6s for p25, over 30s for p75), so at least it's already better
     than that.

     Decode speed to restore the full lossless image and write it as a png
     is not so good: about 0.75s for a median file, 0.25s for a p25 file,
     1.5s for a p75 file. That's roughly 3 to 5 times slower than the other
     algorithms. However, decoding a partial (lossy) file is much faster
     than decoding everything, so in a progressive decoding scenario, the
     difference would not be huge.
[1]: https://boards.openpandora.org/topic/18485-free-lossless-ima...


Awesome, thanks for the info! If FLIF does ultimately consume more CPU resources, that is a trade-off I'm perfectly happy to make; I'd rather burn CPU than my heinously bandwidth-capped internet.


Here in Israel, data caps are generous for a cheap price - I'm paying around 13 USD/month for a package including unlimited voice calls, unlimited SMS and a 3 GB data plan.

What interests me most is battery life - is more CPU and less radio power better?


I hope the author can come back to tell us whether he did comparisons with different speed presets for BPG; it ranges from insanely fast to insanely slow, and that's probably just the default.


I only tried bpgenc with the default speed presets.

It's a bit premature to discuss compression/decompression speed, because this is just a prototype implementation and there are probably many ways to improve the speed. Premature optimization is rarely a good idea.


I posted the news of this new file format weeks ago on HN, no one picked it up :P


What makes it to, and stays on, the front page of HN has a large element of randomness.


Did you remember to put "Free" on the title?


Implicitly yes: "Preview of FLIF, New Lossless Image Format"


So, no.


The power of free. Some good books by Dan Ariely on the topic.


Wow! Higher compression than any other format, full transparency support, progressive/partial loading AND animations?! I can't wait for this to get widespread adoption!

Github repo here btw: https://github.com/jonsneyers/FLIF


GPLv3 will be a hard sell...


In fact, it makes it impossible to use in practice. None of the major browsers are GPLv3.


The Mozilla Public License used by Firefox is compatible with the GPL.

Also, if it's a file format and not just a particular library, shouldn't it be possible to reimplement support for it under different licenses on different browsers?


If the only spec is the code, that's going to be prohibitively difficult. There's a warning that the format isn't complete either, and you can expect breaking changes.


The GPL is compatible with BSD, MPL, and MIT licenses that Firefox / Chromium use.

The problem is more that Firefox / Chrome ship proprietary bits like the DRM modules, which would conflict with the GPL's linking requirements for FLIF. And that Chrome itself is proprietary.

It's going to need to be relicensed LGPL to be included.


> The GPL is compatible with BSD, MPL, and MIT licenses that Firefox / Chromium use.

Yes, it is compatible, but only in one direction: you can take BSD, MPL, or MIT code and adopt it into a GPL project, not the other way around.

EDIT: Even LGPL won't work here as it will prevent use on Windows Phone, iOS and Android.


If Apple's attributions are correct (1), there is LGPL software on iOS, for example libiconv, and even GPL software (libgcc, libstdc++), but those have linking exceptions. WebKit also is partially LGPL.

Possibly not coincidental, the latest LGPL version I could find in the 'legal' section is 2.1.

(1) they may be overly cautious, given that they mention the L4 kernel, lua, and Tiny Scheme.


bzzt.

LGPL code is everywhere in Android and iOS. There are numerous apps built on GStreamer for both platforms, which is LGPL.

I wouldn't be surprised if Microsoft pulled the pig-headed move though.


> LGPL code is everywhere in Android and iOS.

You're not going to find LGPLv2.1 or LGPLv3 anywhere in them, however. Starting with LGPLv2.1 you are required to allow the end-user to replace a compiled binary you provided of the LPGL'ed component with their own, something that obviously cannot be guaranteed on any of the mobile platforms I listed.

I suppose I should have clarified the version, but it's important to note that the FSF has tried to pull the tivoization card with more than just v3 of their licenses.


Firefox doesn't actually ship anything proprietary. DRM module is an optional plugin from Adobe.


Jon Sneyers is a known advocate of Free Software, I am actually glad he acts on his beliefs.


Based on how he's acted in the past, I think even Stallman would've advocated something like the MIT or BSD license for something like this: getting a patent-unencumbered alternative to formats like JPEG 2000 into wide use seems to be more important to a lot of Free Software people. In other words, the advantage of having widely used file formats that are accessible to free software programs is considered greater than the benefit of having software become libre in order to use GPL'd file-format libraries. Stallman defended the use of a non-copyleft license for Ogg/Vorbis on those grounds:

https://lwn.net/2001/0301/a/rms-ov-license.php3


> even Stallman would've advocated something like the MIT or BSD license for something like this

Almost, Stallman (well, the FSF) advocates Apache 2:

  Some libraries implement free standards that are competing against restricted
  standards, such as Ogg Vorbis (which competes against MP3 audio) and WebM
  (which competes against MPEG-4 video). For these projects, widespread use of
  the code is vital for advancing the cause of free software, and does more
  good than a copyleft on the project's code would do.

  In these special situations, we recommend the Apache License 2.0.
Source: https://www.gnu.org/licenses/license-recommendations.html


Very, very hard. I don't see why they didn't use LGPL, as GPL will greatly increase the barrier to adoption.

You see how unsupported WebP is at the moment. Simply having a better compression ratio isn't gonna cut it.


The real barrier to adoption is, for the time being, older browsers, anyway. Want to lose 20% of your potential market via IE in favor of some improved image compression? =\


True enough... though there are a lot of other ways this could make it in the interim, much like WebP through various optimizing/HA proxies. I do think GPLv3 will hinder people from even trying to look at or support the format, though. There are some legal minefields that many won't cross when it comes to copyleft... even if they wanted to, many are in jobs that wouldn't allow it.


Yes, but let's give this a little time. If this was going to be another example of 'the code is the spec' it wouldn't go far anyway. That's a n00b mistake, and I don't think these guys are n00bs.

If this is worth using (and it's the first thing in several years that made me sit up and say 'ooOOoo') it won't get lost. There will be a way for the world to use it.


I suppose (hope) that once they finalise the format, we'll have tools released with free, unencumbered licenses.


You don't have to use the code; you can (and probably want to) implement it yourself.


Why do you really want to duplicate work and write, test, debug, maintain... a FLIF decoder yourself?

Also, I'm not sure how far you have to deviate from the original code to not be covered by its license.


Technically, I believe the answer is 100%. If you're using the original code as a starting point or a reference for your code, then you're basically creating a derivative work.


Correct, you have to do a clean room implementation. You either start from a public standard, or you need two separate teams. One team is allowed to look at the existing code and make a detailed description of what it does, but not write any new code, and the other team can read the descriptions from the first team and write the new code, but they can't ever look at the old code.

This only works to avoid copyright infringement. Patent infringement cannot be avoided in this fashion.


And that, boys and girls, is how we got the PC clones. Thanks to Compaq no less. Could not happen today though, as back then IBM could not patent the BIOS chip.


You can copy how something works; copyright doesn't protect that, patents do. If the form of something is necessary for how it works, then it's not an artistic expression, so you can copy it directly. Generally speaking there are no interoperability exclusions, though, so if the choice isn't technically essential you can't copy it.


> Why do you really want to duplicate work and write, test, debug, maintain... a FLIF decoder yourself?

Because multiple implementations are useful.


Actually, now that I think about it a bit, maybe not. If your only interface to this is feeding it a compressed file and getting back a bitmap, then you haven't derived from it. You enter a certain legal gray zone, but you should be okay loading it as a dll.


If it gets popular, I'm sure decoders with more permissive licenses will show up.


It is pretty hard to become popular with no software support.


That's just the first implementation. Anyone can reimplement it. And as noted elsewhere in the thread, it won't continue to GPLv3 after things settle down.


I didn't see the part about full transparency. I was looking. Where do you see it?


One of the least interesting parts of this is that it is GPL. Someone should reimplement it with a BSD license so it can be used more widely. AFAIK & IANAL, but I don't think you could integrate this with FF, Chrome, Safari or IE.


No kidding. If you want an image format to become widely adopted and standardized, GPLing the code is a pretty bad idea.


Not just an image format, everything you want to become widely adopted and standardized should refrain from using the GPL, even the FSF[1] recommends using the Apache License in this specific case.

  Some libraries implement free standards that are competing against restricted
  standards, such as Ogg Vorbis (which competes against MP3 audio) and WebM
  (which competes against MPEG-4 video). For these projects, widespread use of
  the code is vital for advancing the cause of free software, and does more
  good than a copyleft on the project's code would do.

  In these special situations, we recommend the Apache License 2.0.
[1] https://www.gnu.org/licenses/license-recommendations.html


Note that the Apache License isn't compatible with GPLv2. If you don't make use of any patents use BSD/MIT/ISC instead.

Sadly there isn't a real alternative permissive license with a patent clause. There is a license called COIL[1], but it hasn't seen much adoption yet.

[1] http://coil.apotheon.org/


Nothing that a dual license GPL/APL wouldn't solve.


He intends to make the spec public as well as to switch to a more permissive license once the format has stabilized, so I wouldn't be too concerned yet.


I think it's great to have a free software solution which wholly eclipses the competition. It's an incentive to use free software (use free software and get the best image codec in the world).


Yes, but the technology of "storing images" is never going to become the domain of free software. It's a little late for that.

If you're trying to drive widespread adoption of a competing file format, then don't GPL the only code that implements it. Make a brain-dead reference implementation with a license as unencumbered as you can stand. (Then code up an elegant implementation and GPL that.)


It is an issue on the GitHub Project:

https://github.com/jonsneyers/FLIF/issues/3

The title incorrectly suggests Creative Commons (which is inappropriate for code), but the discussion does suggest better alternatives.


I think it's the most interesting because it immediately turned me off to the project. Releasing this only as GPL basically means it will NEVER be adopted en masse :-(


Seems like the mistake the author is making is announcing something as a "product" when it's really still a "project."

In principle, using the GPL license for the reference code is OK if his intent is to encourage other people to build more refined, practical, and performant implementations. But the real problem with his choice of license is that a lot of people can't even look at the code for fear of tainting themselves legally. Given that there is no desperate market demand for a better-PNG-than-PNG, it wasn't a good idea to handicap the proposed format by saddling it with various restrictions and legal minefields.

If the reference code is strictly of proof-of-concept quality, which it sounds like it is, then there was no reason not to use a more permissive license.


People should not be reading HN or any news at all if they fear committing patent or copyright infringement just from looking at code. You'd have to lock those people in a bunker without internet and with just a computer to do the work, and then we could all be sure that it was legal.


I think the right avenue is to keep the license GPL but provide a permissive as-is use license. The idea is that you want people making improvements to the algorithm to contribute those back for the public's benefit rather than charging people for their "upgraded" version. A separate unmodified-use license could be much more permissive in the use of algorithm as-is for compressing and decompressing images.


That's effectively the LGPL... and even then it would still be problematic for embedding, and would hold back adoption in browsers and various platforms (game engines, etc.).

Realistically a basic rendering and conversion implementation should be at least Apache 2, or something even more permissive (MIT/BSD/ISC). It was my first thought/comment when I saw the GPLv3 notice... They should switch the license quickly if they want to see adoption/support. Getting a free standard in place is more important than using a copyleft license.


Or at least LGPL, but that is still a hassle for embedded use. GPL may prevent some from trying to embrace/extend/extinguish, but you get around that by writing a spec and clean room code from the spec. Right?


I understand the sentiment behind wanting to GPL projects and I would myself if it didn't lower even further the already infinitesimal possibility of someone reusing my code.


The 'only download as much of the file as you need' way of managing detail levels is brilliant.


Totally. I really recommend watching the video under the "Progressive and lossless" heading. With just 20% of the file downloaded the image quality was already subjectively quite good. I didn't even know progressive loading within a single file was a thing, but it's a neat trick.


Isn't a progressive jpeg loaded from a single file? Or are you referring to a different context of progressive with this file type?


It's been a thing for at least 20 years, when "wavelet" compression first started being used.

https://en.wikipedia.org/wiki/JPEG_2000#Progressive_transmis...

And the Digital Cinema formats widely used today rely on this so a 2K or 4K stream can be extracted from the same file.

What's "new" about FLIF is the better lossless compression rate


The most solid feature is lossless responsiveness. Stopping the download gives you a lower-resolution image. When scaled, the image then becomes lossy.

Here is a thorough analysis of how good that lossy image is: http://flif.info/example.php

It shows how very good lossy BPG is, but also how good FLIF is against anything but BPG (and against lossless BPG).


Why all the hate for GPLv3?


Sharing code is no longer cool if people have to share back. Or at least so it seems.

The GPL-hate in here really is quite immense, even though time and time again, RMS has been shown to be right about his stance on freedom.

Should we attribute it to people's desire for a quick ("free") fix over long term considerations?

It's hard to tell, but I suspect the silicon valley influence here doesn't help. There everyone wants to take everyone else's hard work and code, build a weekend SaaS and get rich. The GPL, while not preventing that, is clearly in opposition to that goal.


As someone who made a negative comment about the GPL elsewhere in this thread, I think I'll comment. I have absolutely nothing against the GPL, and use it myself. However, using the GPL for a library seems like a bad idea, because there are a lot of people, especially at companies, who won't touch the GPL. So if you license a library under the GPL and it's useful, someone is just going to come along and reimplement it with a more liberal license. And such a waste of resources is sad to contemplate.

The ideal of the GPL was that it would force other projects to switch to GPL, but it doesn't seem to happen too much in practice because the modern world has an abundance of alternative software for any task. Think long term: chances are someone in the next 50 years will be annoyed enough to reimplement whatever you're doing.


In the context of this project, which is yet to have a finalized specification, I think the GPL is an ideal fit. As a project it is important to have access to all of the pieces and to avoid incompatible forks of different kinds in the early stages. So for the purpose of developing a 'golden standard' prototype for FLIF, I don't think a non-GPL license would have served them better. The author seems to have the same idea, and is considering re-licensing to MIT later on.

I don't agree with the corporate apologism which seems to be the norm here. If a company wants to get something for free, expecting it to publish source code changes is not actually a high threshold. It might be in terms of corporate politics, but in actuality it is not hard.


> If a company wants to get something for free, expecting it to publish source code changes is not actually a high threshold.

Obviously the issue is not about publishing changes to the library, it's about publishing the rest of the source which just uses the library as a building block.


You're free to link to compiled GPL libraries. There is nothing forcing you to change the license of your product unless you intend to integrate the source code.


I also believe this, but it is not an established fact. The GPL does prohibit it, and so whether that is actually enforceable has to be tested in a court of law. So you are in fact not free to do this, if you have a boss above you who cares dearly about the company steering clear of hot water.

I'd be willing to testify as a technical expert in a court of law that the GPL cannot reasonably rule out dynamic linking; that dynamic linking to a program is a form of use (like invoking a command line and passing it arguments) and not integration. The proof is that dynamically linked components can be replaced by clean-room substitutes which work exactly alike, or at least well enough so that the main software can function.

For example, a program can be shipped with a stub library which behaves like GNU Readline (but perhaps doesn't have all its features). The users themselves can replace that with a GNU Readline library. Thus, the program's vendor isn't even redistributing the GPL'ed component. They can provide it as a separate download or whatever. However, if they were to include a GNU Readline binary for the convenience of the users, then the program supposedly infringes. This is clearly nonsense.


The problem with that line of thought is that the courts don't care whether it's use or integration, but whether the resulting work depends on someone else's work. The words used in copyright law are transform, adapt, recast, and such changes require additional copyright permission. For example, if I buy a painting and cut it into pieces and rearrange them, I actually need an additional license beyond what I got from purchasing the copy. Components cannot be cleanly viewed as separate when dealing with copyright.

You can also turn this around and ask whether it's legal to use a process call within another program without invoking the need for additional permissions. The FSF's view is that it should be legal, but it has never been tested. By now, however, an industry standard has formed around the FSF guidelines and most courts would just look at it when deciding. This is what commonly happens when no one goes to court to find out what the rules actually should be.


> but whether the resulting work depends on someone else's work.

Yes, so obviously your argument cannot be that "in theory, we could replace this with a workalike".

You better have the workalike, and that's what you should be shipping.

A powerful argument that you aren't infringing is that your shipping media are completely devoid of the work.

> don't care whether it's use or integration

For the sake of the GPL, they must care in this case, because the GPL specifically abstains from dictating use; it governs redistribution!

The only parts of the license relevant to use are the disclaimers; the only reason a pure user of a GPL-ed program might want to read the license at all is to be informed that if the program causes loss of data (or whatever), the authors are not liable.

GPLed programs get used all the time. A proprietary app on a GNU/Linux system can use the C library function system() which might invoke /bin/sh that is GNU Bash, and even depend on that functionality.

> For example, if I buy a painting and cut it into pieces and rearrange them, I actually need an additional license beyond what I got from purchasing the copy.

But what if that cut-up never leaves my house?

Or what if I only distribute instructions which describe the geometry of some cuts which can be made to a painting, and the relocation of the pieces?


> A proprietary app on a GNU/Linux system can use the C library function system() which might invoke /bin/sh that is GNU Bash, and even depend on that functionality.

And that, according to the FSF, is legal because it does not create a derivative work. You said above that "GPL cannot reasonably rule out dynamic linking", but now you are picking and choosing which parts of the FSF's interpretation of derivative works are correct and which are wrong. I just wanted to point out that the law could easily have been interpreted in a different way if someone had challenged the FSF's interpretation 25 years ago.

> But what if that cut-up never leaves my house? Or what if I only distribute instructions

Again, the law is both clear and quite fuzzy at the same time. The author has the exclusive right to transform their work, and as such, you could get charged even if it never leaves your house. In the EU it's a bit different, since the law talks about moral rights, which protect the integrity of the author's work, though the end result is likely to be the same in many cases.

As for just giving out instructions, the legal nature of those is extremely fuzzy. If I provide instructions that reproduce a copyrighted video (by compressing/encrypting the data that represent it), I will still run afoul of copyright infringement. From what I have seen, courts tend to take a "common sense" approach to this problem: if the end result is an infringement, then the indirect steps that cause the infringement become infringement too. Judges collectively seem to rule against people whom they perceive as trying to bypass laws on technicalities.


> You said above that "GPL cannot reasonably rule out dynamic linking", but now you are picking and choosing which parts of the FSF's interpretation of derivative works are correct and which are wrong.

I don't see how you perceive a position change here. The FSF considers dynamic linking to be derivative; I do not agree.

We both consider the invocation via command line not to be derivative.

> now you are picking and choosing which parts of the FSF's interpretation of derivative works are correct and which are wrong.

Have been all along. "Dynamic linking is derivative" is almost complete bullshit in my eyes.

It's pretty much pure use. We map this object into memory and then call it.


> But what if that cut-up never leaves my house?

Maybe so, because the exclusive right is framed as a right "to prepare derivative works based upon the copyrighted work" (separate from reproduction, distribution, and other copyright rights).

https://www.law.cornell.edu/uscode/text/17/106


Might it be that this separation between those rights exists so that the copyright holder can contract out manufacturing services, while controlling distribution? The copy house is given the right of preparing copies, without having distribution rights.

I don't think I was infringing back in kindergarten when I cut up newspapers to make strips for papier-mâché. In any case, my courtroom argument there could be bolstered by the remark that the resulting work was painted, entirely concealing the original content.


The FSF takes the position that linking creates a derivative work. This may or may not be an accurate legal position, but it is a popular position held by the license's creator, an entity which owns a lot of software licensed under the GPL and which is very influential with people who choose to use the GPL. So even if it is ultimately legally incorrect, the risk of litigation from linking, whatever the outcome, is going to be unacceptable to most significant users, and a GPL library is unlikely to see uptake in major software for which the creator had a significant reason for choosing a non-GPL license, whether permissive or proprietary.


It's not about being apologetic or anti-freedom, it's all about the reality of software.

If it gets reimplemented with a more liberal license, then the more liberal version will dominate. If there is any incompatibility whatsoever, the more liberally licensed version will win. Look at GCC for an example: it has GCC-specific extensions and there are other compilers with more open licenses. Those compilers are gaining the upper hand.

Making it GPL instead of LGPL is just a bad political choice, especially for something which needs wide adoption to even live. Image formats can only exist if they get adopted.


GCC has been around for 30 years. Only in the past 5 years has LLVM been around, and LLVM really is the only compiler to ever have matched GCC. The value GCC has provided to the software community in large is immense. Do you think it would have survived this far if it had been MIT-licensed all along? To support this argument, I'd like you to think about all the other compilers that never had a 10th of the traction of GCC and how they were licensed.

I do agree that the LGPL is a reasonable choice for prototype implementation of a specification. I don't think using the GPL is a huge loss compared to LGPL for a prototype.

I'd like to ask a non-rhetorical question: on an individual basis, is it not better if all software were available as source code?

And if so, does that not mean the choice of not publishing source code is made for reasons other than what's best for the individual(s)?


Unfortunately, a sample of one isn't statistically significant. That a GPL-licensed compiler suite is overwhelmingly popular rather than an MIT- or BSD-licensed one could be little more than a historic accident.

BSD Unixes are BSD-licensed and have "survived this far".

The SBCL implementation of Common Lisp is licensed as "a mixture of BSD-style .. and public domain" [source: http://www.sbcl.org/history.html] It is a popular CL implementation.

GNU Common Lisp [https://en.wikipedia.org/wiki/GNU_Common_Lisp], the GNU Project's Common Lisp implementation, is LGPL-ed and has been a floundering project.

The choice of license cannot be the determiner of what propels a project to the forefront of popularity in its class, because ... there are many more projects than licenses.

https://en.wikipedia.org/wiki/Pigeonhole_principle


kazinator was correct that saying one product made it proves nothing about overall effectiveness of its attributes. That's unscientific. If we're using uptake and maintenance as criteria, we might start with the venerable Sourceforge to see what percentage of projects go anywhere or get maintained under various licenses (esp GPL). Compare that to proprietary while we're at it. I predict results aren't going to look good for GPL's success rate and that's without a financial requirement.

Far as compilers, proprietary are winning out in terms of longevity, it's a select few proprietary vs GCC in terms of performance, GCC in uptake, LLVM with decent performance/uptake, and some others with intended academic uptake. So far, that's barely any GPL, one industrial BSD, quite a few academic (MIT/BSD licensed), and many proprietary. Apples to apples, GCC is barely special except in what it offers for free and how hard it is to extend. That's why LLVM was designed and why Apple built on it, among other companies and OSS-loving academics. After a mere market survey, GCC suddenly doesn't look amazing.

Now what are your thoughts on GPL getting the only development when I bring up Apache, BIND, FreeBSD, Sendmail, and so on? Plenty get development. Success stories, just like GPL's, still have little to do with the copyleft of the license and a lot to do with community or resources.


GCC also almost died in the 90s.


I'm a big fan of GCC, but it just isn't true that nothing matched it. It may be true that nothing "free" matched it.

Where performance mattered we always used the commercial, proprietary, Intel compilers and/or the Microsoft compilers.


> there are a lot of people, especially at companies, which won't touch the GPL

Exactly. And for reasons which not everyone is aware of, for example anti-patent clauses.

What people classify as "GPL-hate" is often a very pragmatic approach. I've seen this a number of times: companies have nothing against sharing improvements to the code, but do have a problem with a) having to share everything they wrote and b) anti-patent clauses which are landmines. And at companies I worked with, (b) was actually the bigger issue, especially if you build and distribute physical devices with software.


If companies don't want to use code licensed under the GPL, they're probably welcome to pay the original author for a separate license.


But that unfortunately erodes the free-as-in-beer advantage, since in this situation the software is effectively proprietary, and competes with proprietary code on an equal footing: bang for the buck.

"If we're going to pay to license freeware code as proprietary, let's look at all proprietary alternatives."


Is there any evidence that the author will sell commercial licences? Other comments are describing him as a strong proponent of free software, so it seems very unlikely that he is interested in selling proprietary licences.


> Sharing code is no longer cool if people have to share back. Or at least so it seems.

There are plenty of licenses which require/encourage that. The problem with GPL is that everything that touches the GPL code has to become GPL or compatible.

Adobe for example will never put this in Photoshop, or Microsoft in Internet Explorer, as they do not want to GPL their software. Net result is no wide support for this image format, which is a net loss compared to a e.g. LGPL/MPL scenario where companies could do that but still would have to contribute to the library if they made any changes/improvements.


You're not quite right. Everything that touches the GPL code has to become GPL, period. GPL compatibility only exists in one direction: a license is compatible if it is relicensable as GPL; the GPL itself isn't compatible with anything.

Rant: I really want to like the GPL, but this requirement is downright hostile towards other open source licenses. And in my opinion it's not even necessary for the GPL's mission. Surely, it would be enough to require that all other parts of the combined work must also be distributed under open source licenses, not necessarily the entire GPL. Some parts of the GPL, like the installation instruction requirement and the anti-tivoization clause, should apply anyway if only some parts are GPL'd, and the reduced protection would easily be outweighed by a strengthened open source community.


What you are describing in the rant is how the GPL works, so surely there must be some misunderstanding here. If you use a GPLv3 library, your own additions to it can be GPLv3 or any compatible license. That can be MIT, Apache, BSD, MPL, CC-anything so long as it allows commercial use, and almost any of the free and open source licenses. The entire thing does not need to be GPL, and in large projects it commonly isn't.


GPLv3 Section 5. Conveying Modified Source Versions.

> You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

> c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.

It really does not work like you would want it to. Technically, you only add the GPL, so the work will then be licensed under both licenses at once. But the GPL is so restrictive that this doesn't make much difference in practice beyond maybe retaining the original license text.


There is no form of maybe in it. People who don't retain the original license text are committing copyright infringement.

The license talks about when the entire work is being distributed. Individual parts licensed under a different license can be used in other works under completely different licenses, and the GPL does not impact such use.


I'm not clear on what you are trying to say. Naturally, the GPL only applies when distributing GPL'd code, not when it's used solely privately. The license clearly states that all parts of the "work based on the Program" (anything derived from GPL code) have to be licensed under GPL. The FSF has always asserted that a derived work includes anything that statically or dynamically links to GPL'd code, although the term was controversial and v3 now mentions linking explicitly. So, when the section above talks about "the whole of the work and all its parts", that includes files that are by themselves not GPL'd. These have to be distributed under GPL as well if you want to include them.


The distributor must apply the license to the entire work, but individual parts can always be licensed under a different license.

Say for example that you created some GCC code and put that under MIT. When distributing GCC, GCC as the "entire work" will be GPLv3, which then also applies to the MIT part. However, the MIT license will also apply to that part and must be kept in every copy, and you could take that MIT part and put it into LLVM, and GPLv3 would not suddenly impact LLVM.

When modifying a GPLv3 project, your own additions can always be under any GPLv3-compatible license (as I stated above). In most cases that would not make much sense, but in a few cases, say in a compiler, it might make sense if you want to use that code in several projects with different or even proprietary licenses. Nothing in GPLv3 prevents this.


That's the whole point of the GPL: to move people away from closed source by having a separate ecosystem that cannot be contaminated. The problem is that the ecosystem hasn't been that compelling besides a few things such as Linux.


That's fine as a long-term goal, but it's pretty directly in conflict with the short-term goal of getting support for this image format into all browsers. If you want this in e.g. IE, someone will have to re-implement it with a license compatible with IE. And there will be compatibility bugs, non feature-parity, etc.

I mean, I get it if you'd like closed-source browsers to go away. You're entitled to that position. But that's not the effect that making this library GPL will have.


It has nothing to do with a requirement of code sharing. It's all about realism. So far there have been zero successful file formats/codecs with a GPL reference implementation, and there is no reason to expect FLIF to buck the trend. So it's a huge shame that the effort now goes to waste because of avoidable political reasons.


> RMS has been shown to be right about his stance on freedom.

The GNU project has specifically commented on cases where using a more permissive license makes sense, one being to encourage the widespread use of a file format or standard.

I personally like copyleft quite a bit, but I don't think it makes sense for this particular case.


  > Sharing code is no longer cool if people have to share
  > back. Or at least so it seems.
But in practice, if you don't care whether someone using your code shares back, the sharing actually increases. I guess many got fed up with RMS FUD, although from time to time some fun pops up with the observation of how RMS was right all along, or, if he wasn't yet, you will see how right he was in about five minutes.


RMS himself was against putting such formats/codecs under the GPL. He suggested the Apache 2 license. See comments above for details. QED.


Because GPLv3 code will never be adopted in IE, Safari, Chrome and likely Firefox... that's why. A reference implementation for an unencumbered format that doesn't also have a permissive license is unlikely to see wide adoption and support.

I'm fine with GPL for a LOT of things... For others I feel that MIT/BSD/ISC/Apache2, etc are better choices.

A software application that's GPL3, great... A library you want to see make it into diverse, embedded platforms, not so much.


What's to stop someone from reimplementing this with some other license?


IMO...

For opensource applications, they're seeking widespread user adoption. To that end, they want to keep users & developers focused on a single project & codebase. Releasing under a license like the GPL ensures they can easily incorporate features from any knockoffs/forks... which pretty well ensures there won't be any longterm knockoffs/forks (unless the project is mismanaged).

GPL isn't so great for opensource libraries though. They don't care about users, they want widespread adoption by developers and their applications. Since applications may have all kinds of licensing requirements of their own (many of which can't mix with GPL), licensing a library under the GPL automatically limits the library's potential adoption.

Which is why LGPL makes a lot more sense for a library. It allows incorporation of forks, but at the same time allows widespread adoption of the library.

Of course, the BSD license also makes great sense for a library, but mainly for ones which have little expectation that there will be significant enhancements made under closed-source forks, or ones whose purpose is to contribute a basic reference implementation for the greater good.

E.g. a BSD license works well for something like a reference PNG decoder, to encourage adoption of the format; but less sense for something like ffmpeg, where the project's survival depends on garnering as many contributions as possible from a limited base of coders (experts in the intricacies of video codecs).

Thus I think it's less hate, and more frustration that by using the wrong license the project has hobbled itself at the start, with a lack of contributor interest before the project even gets off the ground... and folks would like to see something (like) this succeed.


If you want your code to always be free, the GPL is a good choice. If you want your code to be used by everyone, including e.g. all large browsers, you really need a permissive license.

Just imagine if SQLite was GPL-only, barely anybody would use it.


> Just imagine if SQLite was GPL-only, barely anybody would use it.

Better example would be libpng


Look at it from the perspective of someone interested in implementing this: once you put GPL code into a project, you're required to license the entire project under the GPL. That means that anyone trying to implement this format, even other open-source projects (i.e. every browser except for Internet Explorer/Edge, tools like ImageMagick, etc.) would have to relicense their entire project just to use it, which in practice means the format is never going to be adopted.

The tragedy is that there's already a decent license for this exact need:

http://www.gnu.org/licenses/lgpl.html

That would allow e.g. Firefox to use the format as long as they either did not modify it at all or were willing to relicense any changes they made to the image codec itself under the same license, which is a much more reasonable requirement.

This is important because adding a new image format is already expensive. Mike Shaver wrote a good comment explaining why Firefox never shipped JPEG 2000 support due to the expense of having a high-performance, reliable and secure implementation:

https://bugzilla.mozilla.org/show_bug.cgi?id=36351#c120

If you're adding to that already big cost the requirement that you override the project's decision about what license to use and go through and get permission from everyone who's ever provided a patch to relicense their code, it just doesn't seem likely to happen.


Perhaps it is simply that GPL3 asks much in exchange for what would be a small piece of a larger project. Kind of like asking people to give you their email address and phone number so they can use the shopping cart at the store.


Two primary reasons.

1. The GPL is incompatible with many other licenses, including free and open source licenses. Viral licenses do not play nice with one another.

2. For many people, the risk associated with the GPL death penalty is simply too high. Even the watered-down GPLv3 version has some scary scenarios.

Aside from that, a lot of people simply aren't bothered by a proprietary fork of free or open source software with a permissive license. Keep in mind that permissive licenses or proprietary forks do not affect freedoms 0-3.


My personal feeling is that the GPL is good for programs, but bad for libraries, as I end up having to make my whole program GPL (or GPL-compatible) if I use a GPLed library. Often I don't want to do that, and if I use another library which has a GPL-incompatible license, then I'm stuffed.


The HN crowd tends to be a bit more money oriented than some of us older coders.


As a younger coder who has worked at the bigger firms that disallow use of GPL code in projects (Amazon, Google) - it is a huge disappointment to find a quality library that is GPL:

There are plenty of projects at these companies that would be unreasonable to open source - and due to frequent reuse of code within a company, GPL is an unreasonable license due to its ripple effect.

Contributing back to open source projects is the most enjoyable part of the process (Google for example, actively encourages this, and makes it incredibly easy).

GPL projects are dramatically reducing their pool of potential contributors. I'd love to contribute to them, but I can't.


I can't speak for others, but for me it's not about money at all. I just want awesome technology like this to thrive and become the standard instead of falling into obscurity due to licensing complications.


So you're saying that for example the FreeBSD people are money oriented because they prefer a more permissive license over the GPL?


The people here aren't the FreeBSD team--they're wannabe startup founders who'd like to take BSD-licensed code and sell it.


Perhaps, but the GPL licensing of this code isn't a barrier to them doing that, since most of them will be running web services that can use GPL code freely without having to release the rest of their code. If it was AGPL then maybe you would have a point.

The issue in this case is that image formats need widespread support to take off, and picking a license that is incompatible with 4/5 major browsers ensures that won't ever happen.


They claim this is good for responsive images:

    The download or file read operations can be stopped
    as soon as sufficient detail is available, and if
    needed, it can be resumed when for whatever reason
    more detail is needed.
     -- http://flif.info/responsive.php
Unfortunately that's not how HTTP works. Your browser opens several connections to a website, and makes requests for resources. Each connection is in use until that resource is fully loaded, at which point you can send a request for another resource up that connection. Yes, the browser could close the connection after it has received enough of the image for the desired quality, but there's a lot of overhead in setting up a new connection, including some round trips, and so that is probably a net loss.

The JPEG image format also supports progressive images, in browsers today, and no one has figured out how to use that for single-file responsive images yet.

(You might be able to do something in HTTP/2 where you set the priority of the stream to 0 after you have enough of it for your needs, but I don't think anyone has gotten this working yet.)
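
To make the trade-off concrete, here is a rough client-side sketch of what "stop as soon as you have enough" amounts to (Python with the requests library; the byte budget and function name are made up). Breaking out of the loop is exactly the connection teardown I mean:

  import requests

  # Hypothetical byte budget for a "good enough" preview of a progressive image.
  PREVIEW_BYTES = 100_000

  def fetch_preview(url, budget=PREVIEW_BYTES):
      # stream=True lets us read the body incrementally instead of all at once.
      with requests.get(url, stream=True) as resp:
          resp.raise_for_status()
          data = bytearray()
          for chunk in resp.iter_content(chunk_size=8192):
              data.extend(chunk)
              if len(data) >= budget:
                  break  # leaving the with-block closes (and wastes) the connection
          return bytes(data)

  partial = fetch_preview("https://example.com/photo.flif")
  # Feed `partial` to a progressive decoder; more detail could later be fetched
  # with a "Range: bytes=<len(partial)>-" request, at the cost of new round trips.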


Could use range requests (RFC 7233), albeit with some overhead.


To use a range request the client needs to know what range of bytes it needs. Currently your html is:

  <img src="foo.jpg">
Now, let's say foo.jpg is progressive, and if you examine the file you can see that you need H bytes for the header, N bytes for the first layer, M for the next, and so on. Then you could mark up the html as:

  <img src="foo.jpg"
       bytes-1x="(H+N)"
       bytes-2x="(H+N+M)" ...>
but that's a huge pain to generate by hand, and people like to compose html by hand.

If we want something that can catch on, it needs to be something where browsers and servers can negotiate in a fully automated way. Or we can just use srcset/picture.
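
For completeness, if the byte offsets were somehow known to the client, the request itself is trivial. A hedged sketch (Python with the requests library; the offset is a made-up number and the server has to support range requests), just to show that the markup, not the HTTP, is the hard part:

  import requests

  # Hypothetical: the page told us H+N = 120000 bytes is enough for a 1x render.
  BYTES_1X = 120_000

  resp = requests.get(
      "https://example.com/foo.flif",
      headers={"Range": "bytes=0-%d" % (BYTES_1X - 1)},
  )
  assert resp.status_code == 206   # 206 Partial Content if ranges are supported
  preview = resp.content           # first BYTES_1X bytes, decodable as a preview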


Could leverage HTTP Client Hints: https://github.com/igrigorik/http-client-hints


Client hints are a nice solution, but they don't require an image format stored in progressive order. They work fine with storing multiple copies on disk, and the overhead of low-rez versions server-side is minimal.


Why do you say it is a net loss? Maybe the server could signal the end of file based on some request parameters (the resolution of the image).


Yes, handling it on the server side sounds like a reasonable workaround. For example photo.flif?res=1x returns Content-Length: 100000, and photo.flif?res=2x returns Content-Length: 200000, with both returning data from the same file.
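
Something along those lines would be enough on the server. A throwaway sketch with Python's http.server; the cut-off byte counts, the file name and the image/flif MIME type are all made up:

  from http.server import BaseHTTPRequestHandler, HTTPServer
  from urllib.parse import urlparse, parse_qs

  # Hypothetical cut-off points into the same photo.flif file.
  CUTOFFS = {"1x": 100_000, "2x": 200_000}

  class FlifHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          query = parse_qs(urlparse(self.path).query)
          res = query.get("res", ["2x"])[0]
          with open("photo.flif", "rb") as f:
              # Serve only a prefix of the file; a truncated FLIF is still decodable.
              data = f.read(CUTOFFS.get(res, CUTOFFS["2x"]))
          self.send_response(200)
          self.send_header("Content-Type", "image/flif")  # made-up MIME type
          self.send_header("Content-Length", str(len(data)))
          self.end_headers()
          self.wfile.write(data)

  HTTPServer(("", 8000), FlifHandler).serve_forever()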


I say net loss because with current HTTP the cost of closing the connection is typically much higher than the cost of downloading more bytes than you need. Re-establishing a connection needs a lot of round trips.


HTTP supports range headers to get a portion of a file...


I couldn't see any information on speed performance. There is certainly a price to pay for these impressive results. It could be the compression time. I hope it's not the decompression time.


I am not related to this project, but I am actually in the middle of developing a 3D image retrieval server, and this is very interesting to my work right now.

I will compare this using lodepng as a baseline and hope to report soon.


I'm guessing that they're not releasing the numbers on that yet until the format is finalized. It's likely not gone through any serious optimizations yet.


See the quote I pulled out above: https://news.ycombinator.com/item?id=10318161


Sounds almost too good to be true... any major downside, other than current lack of support?


Because it is GPL3, it won't be supported by Google on Android, by Apple probably anywhere, or by Microsoft anywhere.

In other words, it's a lovely idea, but due to a poor choice of license, it won't get any adoption. C.f. Ogg Vorbis, where the license for the specification is public domain, and the license for the libraries are BSD-like.

To quote FSF:

    Some libraries implement free standards that are competing against restricted standards, such as Ogg Vorbis (which competes against MP3 audio) and WebM (which competes against MPEG-4 video). For these projects, widespread use of the code is vital for advancing the cause of free software, and does more good than a copyleft on the project's code would do.[1]
I would suggest that this is a case where widespread use is more important than copyleft.

[1] https://www.gnu.org/licenses/license-recommendations.html


In the forum thread I linked elsewhere, the author wrote:

"In terms of licenses: GPL is all you get for now. I can always add more liberal licenses later. LGPL for a decoding library, or maybe even MIT? We'll see, I'm not in a hurry."

https://boards.openpandora.org/topic/18485-free-lossless-ima...


Wow, starting with an unusable license with the intention to switch to a more practical one later is a weird strategy. I guess that's one way to discourage people from adopting it before it's ready.

(Edit: As I clarified, I meant usable for adoption in other non-GPL projects. But OK, I deserve the downvotes for adding very little, and will think twice before posting such a reply next time)


Starting with a more restrictive one leaves his options open; moving from restrictive to liberal is a lot easier than moving from liberal to restrictive.


Erm, what? If you have ANY contributors other than your core project team, going from a restrictive to a more liberal license requires agreement from all of them, as they ALL have copyright somewhere in the project.

The opposite, going from something like MIT or BSD, doesn't require ANYONE to be okay with it; the license permits it.


There's only one author, that's not the problem.

My point was, if he had released his code as MIT/BSD and then wanted to change it to GPL it would be fairly ineffective -- people could still use the old MIT/BSD version in their proprietary products.


> Wow, starting with an unusable license

If you want to use it commercially without publishing source, just buy a licence from the author. Buying licences isn't exactly uncommon. Or are OS X/iOS/Windows also unusable for you?


The author claims it to be royalty free. Also commercial "standards" by one party are worthless


> The author claims it to be royalty free.

And the author delivered: With GPL3 source.

> Also commercial "standards" by one party are worthless

Very true, but I can't see how that applies to this project? First, the existence of this repository doesn't preclude proper standardization. Second, the reference implementation is freely licensed, not commercial.


I think it has more to do with maintaining control initially.

It is a lot easier to restrict now and liberalise later once more thought has been given to the options, than it is to draw things back in later if you decide you want more control for any reason.


>unusable license

Pure bullshit. Unusable for whom? Proprietary software developers? That's the intent.


Except... how does a new image format work in modern browsers? It gets included in Chrome, IE, Safari, and Firefox.

Apple and Google won't touch GPLv3, so that means this image format is dead on arrival as a web format.

Sure you can use it locally to compress your photographs, cool. But it won't be a web standard with GPLv3.


Well, we could just get everyone to use Firefox, and this image format will be a web standard.

Google is currently using webp everywhere, too, despite no major browser actually supporting webp.

(I do not consider Chrome a browser, but mal- and spyware)


Not really proprietary software - as you & bildung said, it's not a problem if you can just buy a license. I meant anyone working on FLOSS software that isn't GPL compatible, which is probably most FLOSS software. MIT, BSD, etc. are very popular these days. And image formats are generally intended to be widely-adopted standards used across different applications.


Most "FLOSS" software is GPL. Almost all is GPL compatible.

But even the FSF recommend sometimes using don't-care-about-user-freedom licenses for strategic reasons, for example driving adoption of a new free codec...


Or any open source project that is not GPLv3, like Firefox.


Firefox is GPLv3 compatible. It's quite likely that it will support this format - although, if it's the only major browser doing so, it will not be adopted by most web publishers.


I would be very surprised if Firefox added support. They have been very reluctant to include any additional image formats that weren't developed by Mozilla employees. See for example WebP and Jpeg2000.

Their reasons being things like added security risk, lack of demand, lack of support in other browsers, patent risk etc, which all apply just as much to this format.


Hopefully adopting Rust code will eliminate the security risk for such decisions.


The Mozilla Public License is GPLv3 compatible. Firefox also contains code under different licenses that are not GPLv3 compatible, and thus Firefox is not GPLv3 compatible.


You agreeing with the intent does not negate the effect.


That's only the implementation, not the image format itself. Unless I am mistaken, you can't apply a software license like the GPL to an image format, just patent it.

Is that correct?


You're correct. There is no copyright for ideas, and licenses are just a way to "relax" your copyright. Unless it's patented you are free to make your own implementation of the format without any restrictions.


Agree – using GPL rather than LGPL or BSD in the reference implementation will prevent this from being widely adopted.

UPDATE: See lt's helpful comment above.


So, the free world can use this library and the proprietary world will have to develop their own if they want to use the format. Proprietary software developers get to piggy back off of tons of free software, and then when someone decides that they don't want to enable that for their particular program/library, people complain about it. It makes no sense.


A while ago I wanted to include a library in some GPLv3 software, but I couldn't because that library was GPL2-only with no "or any later version" clause. And both versions of the GPL are not compatible unless the GPL2 software has this "or any later version" clause.

So even if you stay strictly within the free software universe, even only within in GPL universe, a strong-copyleft license is a bad choice for a library.


I think the license depends on your goal. If your goal is to create a web image format, you need all the browsers to agree to include you. All browsers will not add a GPLv3 image type. Therefore it is dead before it launched.


The author has expressed interest in dual-licensing, so there will probably still be options. GPLv3+ or LGPLv3+ sounds reasonable.


LGPLv3 will still prevent use on Windows Phone, iOS and Android.


No, that's backwards. Microsoft and Apple and Google prevent the usage of the GPL/LGPL. Don't blame the GPL for the bad behavior of large corporations that are using their immense power to try to stop copyleft.


No open source software can use this unless it is already GPLv3. Most of it isn't.


> Because it is GPL3, it won't be supported by Google on Android, by Apple probably anywhere, or by Microsoft anywhere.

Why not? As far as I know, GPL3 lets you dynamically link without having to open the source code of the resulting software.


You're thinking of LGPL. GPL has no such exception for dynamic linking.

https://www.gnu.org/licenses/gpl-faq.html#GPLStaticVsDynamic


You can't link proprietary software with GPL libraries, only with LGPL libraries.

But for Apple it's not even a licensing concern. Apple doesn't have a problem with the GPLv2, but doesn't touch the GPLv3 because of the patent clauses that it contains. The bash in OS X is a version from 2007 because that's the last GPLv2 version of bash. But Apple has no problem including git because that's also GPLv2 and not v3.


>Apple doesn't have a problem with the GPLv2, but doesn't touch the GPLv3 because of the patent clauses that it contains

It's not the patent clauses that are the issue: https://news.ycombinator.com/item?id=8868994


FTA: "FLIF is a work in progress. The format is not finalized yet. Any small or large change in the algorithm will most likely mean that FLIF files encoded with an older version will no longer be correctly decoded by a newer version. Keep this in mind."


Perhaps the MANIAC/CABAC decoding or encoding is computationally more intensive compared to the others? Concerning the lack of support: just like with the BPG decoder, you could generate JavaScript through Emscripten so any JavaScript-enabled browser can view them. Personally, an even more efficient lossy image compressor would be great (getting my avatar down to 1K and still sharp). For diagrams or text the mozjpeg artifact eraser is adequate.


How does this unicorn achieve all this, and unencumbered by patents on top of everything else?


I cannot imagine it truly is - it uses a variation of CABAC, which sure has patents related to it.

I wonder if they had a real legal person OK that claim.


A million monkeys, scratching their heads, looking for a better way to save/send their photo albums. Progress has been on order for a while now, someone(s) is/was bound to crack it sooner or later.


I hope the idea of progressive downloading partials can be applied to other files types as well.

For example, if commonly used libraries such as minified Javascript source files are encoded and distributed incrementally, we don't have to repetitively download various versions of the same library from different CDNs for different websites, which will save a huge amount of Web traffic.


Where are the images to compare? Bellard has a very neat demonstration on his page http://xooyoozoo.github.io/yolo-octo-bugfixes/#nymph&jpg=s&b...

BTW, credit to the flif people for linking competitor formats bpg and webP


Well given that they're all lossless formats, the image should appear identical with any of the codecs, with only the file size (and encoding style) being different.

More practically, since browsers won't have an flif decoder there isn't an easy way to embed the images online without reencoding them in a different format, which would rather defeat the purpose of putting up sample images.


They could demonstrate interlacing comparisons by slowing down (through setTimeout()) the image processing in the browser.

Having a Youtube video already shows that it works really amazingly well (especially coming from Adam7), but seeing it interactively on a selection of images would be nice.

After all, the single biggest feature is that you can actually stop downloading the image whenever you feel like you don't want to spend more bandwidth!


Oh! Derp! I guess I am used to marketing-level abuses of language.

>More practically, since browsers won't have an flif decoder there isn't an easy way to embed the images online without reencoding them in a different format, which would rather defeat the purpose of putting up sample images.

Did you follow the link I posted? I think Bellard put together a pretty nifty demo. Note: Bellard credits xiph.org as the originator of the demo page. http://people.xiph.org/~xiphmont/demo/daala/update1-tool2b.s...


It's lossless. They look just like the originals.


Thanks. I had a derp moment. I guess I wasn't prepared to believe that "lossless" really meant lossless.


> FLIF is completely royalty-free and it is not encumbered by software patents.

That's good but does it actually avoid patent minefields that others scattered around? That's equally critical for healthy adoption.


I don't see any comparison here on the impact of decompressing? Is this going to hit a processor harder than JPG/PNG/BPG/et al will, and thus be a hit to people on mobile devices?


Decompression speed is important. (And more important than compression speed.) But the bottleneck on mobile devices is increasingly the mobile network. The energy cost of a CPU cycle is falling dramatically, while the energy cost of mobile bandwidth is staying relatively static. If this continues, eventually we'll get to the point where the power usage (e.g. for loading and displaying a web page) is completely dominated by data transfer, and more computationally expensive compression becomes the best way to save overall power by trading cheap CPU cycles for expensive bandwidth.


Processor and storage are much cheaper than network bandwidth this year.

The most relevant tradeoff calculation now would probably be bandwidth versus battery power consumption.

The FLIF image decoding library might want to be battery-aware, such that it can automatically scale back the power consumption at lower battery charge levels, in a user-configurable fashion. Or perhaps it caches a fully decompressed or partially decompressed file to storage, so that it only does the battery-devouring steps once.


One more nice feature they didn't mention: unambiguous pronunciation.


So, is it eff-leef or fleef ?


/ˈflɪf/


An interesting aspect that has not been mentioned on flif.info nor here, is the implementability on silicon.

How many transistors would be needed for an encoder? How about a decoder or a codec (encoder/decoder)? And what about clock speed? How long is the longest pipeline in the codec?

For any other modern codec the hardware aspect is a very important one. Usecases like smartphone browsers or smartwatches are very common, and on platforms like that performance==battery life==usefulness.


"A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design. "

Awesome! I have been toying with custom PNG 'container' ideas in the past, offering similar 'responsive' features (i.e. various resolutions, lossy/lossless versions).

I would like to see the asm.js version of the decoder and see how that one performs.


It would be nice to have a comparison with https://en.wikipedia.org/wiki/JPEG_XR


One would expect, in theory, Google to push this like mad, based on what they did with HTTP 2.0 in another part of the stack, for possibly less gain at the cost of much more complexity and standards shaking. Let's hope it really happens.


I don't see how the author can claim that FLIF is "unencumbered by software patents," only that he/she hasn't obtained patents on the work already. This has traditionally been one of the problems with so-called "free" image formats -- it's not that the standards body or creator asserts patent rights, it's that other inventors claim that their pre-existing patents cover the new format.


Would like to point out a caveat listed in that page, as follows. Don't convert all your photos in your drive (yet).

"WARNING: FLIF is a work in progress. The format is not finalized yet. Any small or large change in the algorithm will most likely mean that FLIF files encoded with an older version will no longer be correctly decoded by a newer version. Keep this in mind. "


Sounds amazing. But: no information on encoding time. I'm assuming it's going to be a beast.

Which, IMO, is totally fine.


Ignoring patents is a concern.

The patent issue is a major one and not lightly ignored. I haven't checked patent status, but one of the things that was pretty contentious with JPEG 2000 during the ISO standardization process was patents around arithmetic coding -- not just the method for doing the encoding, but also things like context based modeling.

In reality, the majority of image compression comes about in the "modeling" stage -- be it predictive coding, context based encoding coefficients, etc.

I'm happy to see new advances in image coding, but having spent many years working with Glen Langdon and others, the depth of IP concerns is still fairly fresh in my memory.


I think the important thing that needs to be addressed here before this file format takes hold is how the hell you are going to pronounce this format. I don't want another gif/gif debate.


> FLIF is based on MANIAC compression. MANIAC (Meta-Adaptive Near-zero Integer Arithmetic Coding) is an algorithm for entropy coding developed by Jon Sneyers and Pieter Wuille. It is a variant of CABAC (context-adaptive binary arithmetic coding), where instead of using a multi-dimensional array of quantized local image information, the contexts are nodes of decision trees which are dynamically learned at encode time. This means a much more image-specific context model can be used, resulting in better compression.
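
To get an intuition for the "contexts are nodes of decision trees" part, here is a toy sketch in Python (my own simplification, not FLIF's actual code): local properties route each bit to a leaf, and the leaf's adaptive counts give the probability handed to the arithmetic coder. The real MANIAC also grows the tree dynamically while encoding, which this omits:

  # Toy illustration only -- not the actual MANIAC algorithm.

  class Leaf:
      def __init__(self):
          self.zeros, self.ones = 1, 1            # Laplace-smoothed bit counts

      def p_one(self):
          return self.ones / (self.zeros + self.ones)

      def update(self, bit):
          if bit: self.ones += 1
          else:   self.zeros += 1

  class Node:
      def __init__(self, prop_index, threshold, left, right):
          self.prop_index, self.threshold = prop_index, threshold
          self.left, self.right = left, right     # subtrees or Leaf instances

  def find_context(tree, properties):
      # Walk the decision tree on local properties until a leaf (context) is reached.
      node = tree
      while isinstance(node, Node):
          node = node.left if properties[node.prop_index] <= node.threshold else node.right
      return node

  # A hand-built tree splitting on one hypothetical property (e.g. a local gradient):
  tree = Node(prop_index=0, threshold=0, left=Leaf(), right=Leaf())
  ctx = find_context(tree, properties=[-3])       # properties come from neighbouring pixels
  p = ctx.p_one()                                 # probability fed to the arithmetic coder
  ctx.update(1)                                   # adapt after coding the bit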


[deleted]


"Free Lossless Image Format": the quality metric for lossless images is "identical pixel by pixel to the reference format".

(There's obviously a question about how to judge quality for a given partial download of a FLIF file, but for the total file there really isn't any question of quality metrics at all.)


This is lossless, not lossy. The output is the same.


GPL 3. Meaning it won't ever be adopted by anyone.


IANAL, but I guess the file format itself is not covered by the GPL 3, only the reference implementation.

They specifically say that "FLIF is completely royalty-free and it is not encumbered by software patents". So, I suppose people will be allowed to develop their own libraries with another license.


True, but the only complete documentation of the file format I found was the source code of the reference implementation. So the only way to write a new implementation is to study the reference implementation, which will make your new implementation "tainted".


Studying GPL'ed source code does not "taint" anything. You're absolutely allowed to do that. If you're really concerned about subconsciously copying elements of the original into your implementation, you could let somebody else study the original and write a spec.


Studying GPL'ed source code does not "taint" anything.

This is far from my area of expertise, but lawyers smarter than me about these things disagree.


That's what I had people do.


Guess how many court cases there have been about developers tainted from reading code.

1000? 1? 0?

Zero. You are about as likely to be tainted from reading GPL code as to be tainted by reading HN, the newspaper, or driving around Silicon Valley and looking at buildings with programmers in them.


You, I and common sense may believe that. However, highly paid lawyers at very big companies have told me otherwise in no uncertain terms.


They also go on to say the following, which might be a clear indication that this is in fact GPLv3.

FLIF is Free Software. It is released under the GNU General Public License (GPL) version 3 or any later version. That means you get the “four freedoms”:

    The freedom to run the program, for any purpose.
    The freedom to study how the program works, and adapt it to your needs.
    The freedom to redistribute copies.
    The freedom to improve the program, and release your improvements to the public, so that the whole community benefits.


No documented format (actually an explicit acknowledgement that the format will likely change). The implementation is the spec, the implementation is GPLv3, so many companies' lawyers will prevent engineers from even looking at it to write a clean-room implementation.


>so many companies' lawyers will prevent engineers from even looking at it to write a clean-room implementation.

Maybe you should look up what 'clean-room' reverse engineering is. One person looks at the code and writes down what it does, then gives that description to another person who writes CLEAN-ROOM code based upon the description.

There's no difference between doing a clean-room implementation of GPLv3 code and of code under any other license.


The difference is that the lawyers for some companies will not sign off on employees even looking at GPLv3 sources. Rightly or wrongly, this really does make a clean-room implementation impossible.

One can reasonably argue that this is stupid and not the fault of the GPL. However, it's also reality, and a real block to adoption of technologies like image formats.


Get better lawyers.

Clean-room implementation has been around for ages, I've done it professionally myself, both from source code and disassembly.

Also, not every company has clueless lawyers, nor does a clean-room re-implementation of this format need to come from a company; I'd say it more likely won't.

Seriously this sounds more like GPLv3 scaremongering than anything else, I've never seen anything like this in my professional life as a developer.

edit: as for my personal preference, I think GPL is a great license for full application/solution style software, but for libraries/frameworks, I prefer permissive licensing.


+1 for the "full app/solution" vs. library/framework applicability for GPL vs. permissive: that is exactly my opinion too. When I see GPL'ed libraries, I think "try before you buy"-style open source code, and these invariably also come with an option to buy a commercial license.


The code won't, but the spec/format can. What you do is have a third party look at the code. They need to produce a high-level description of what it does, what inputs the format takes, what outputs it uses, the storage format, and so on. Precise enough for someone else to implement it from scratch with likely compatibility, yet not copying the code itself and maybe not even copying the implementation strategy. That should avoid GPL tainting.

Even if someone wants to sue over it, I'd say that legal battle is worth fighting, because this is similar to how FOSS makes stuff compatible with proprietary software/protocols: reverse engineering their function to make a separate, compatible implementation. Knowing how important that is, I doubt even the zealots would sue someone using the above methodology, knowing it could set a precedent which might be used against them.


I found what appears to be an early draft of the paper: http://webcache.googleusercontent.com/search?q=cache:z2rYA4Z...


Pretty sweet. Will be interesting to see where this lib goes. If someone has some time, maybe they could give it a mention on the Wikipedia page: https://en.wikipedia.org/wiki/Image_file_formats


Comparing compression ratios without comparing compute/memory requirements is...not terribly meaningful.


For all the GPL-hate in this thread: the GPL protects this specific implementation and not the algorithm. Even though the authors released this under the GPL, that would only be a minor nuisance to getting this unicorn into all kinds of products via re-implementations of the same format.


I guess this would be considered a Las Vegas algorithm: https://en.wikipedia.org/wiki/Las_Vegas_algorithm § “Relation to Monte Carlo algorithms”


I don't see any foundations or corporations backing this project. Without support (and I do hope something like this gains wide-scale support; SVG in browsers across the Internet already, please!) I don't see this being accepted.


What you say is true for wide-scale usage. However, imagine a small company, startup, or sole developer using this for their projects or as an internal format. I find it very interesting even if it never sees widespread use.


It is interesting! I agree, but I guess I am getting too old. I keep hearing about awesome new formats and then they just waste away. The success stories are few and far between. FLAC and Ogg are a good example. Ogg Vorbis is a decent format and I prefer it to MP3, but it really just withered. FLAC is awesome but it is far from mainstream. Heck, even SVG is far from where it SHOULD be.


And then there is Opus, which took the world by storm.


It has the devices covered

> Devices based on Google's Android platform, as of version 5.0 "Lollipop", support the Opus codecs. Chromecast supports Opus decoding. Grandstream GXV3240 and GXV3275 video IP phones support Opus audio both for encoding and decoding.

Developers were Mozilla and Skype (MS after the purchase)

>Its main developers are Jean-Marc Valin (Xiph.Org, Octasic, Mozilla Corporation), Koen Vos (Skype), and Timothy B. Terriberry (Xiph.Org, Mozilla Corporation). Among others, Juin-Hwey (Raymond) Chen (Broadcom), Gregory Maxwell (Xiph.Org, Wikimedia), and Christopher Montgomery (Xiph.Org) were also involved.

Standardized

> https://tools.ietf.org/html/rfc6716

Still, I think most people have no idea about Opus as a codec and only know Ogg (the container which Opus uses) and the Vorbis codec format (which Opus was meant to replace).


How cpu-intensive is it? Decompression speed can be an issue, especially on mobile.


Here is what he says about that: https://boards.openpandora.org/topic/18485-free-lossless-ima...

Decode speed to restore the full lossless image and write it as a png is not so good: about 0.75s for a median file, 0.25s for a p25 file, 1.5s for a p75 file. That's roughly 3 to 5 times slower than the other algorithms. However, decoding a partial (lossy) file is much faster than decoding everything, so in a progressive decoding scenario, the difference would not be huge.

There's still room for optimizing the encode/decode speed, but it's not very useful to do that before the bitstream is somewhat finalized. The prototype implementation I have now is not extremely fast, but at least it's in the right ballpark. Most likely even an optimized decoder will still be slower than an optimized PNG decoder, but the difference will be small enough to not matter much (certainly compared to the time won by downloading less bytes).
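
To make the progressive-decode scenario concrete, here is a rough sketch of decoding only a prefix of the file, assuming the reference flif command-line tool is on PATH and tolerates truncated input (the file names and the 50 kB cut-off are made up):

  import subprocess

  def preview_from_partial(flif_path, n_bytes, out_png):
      # Write only the first n_bytes of the .flif file, then decode that
      # prefix; the truncated stream should still yield a lower-detail image.
      partial = flif_path + ".partial.flif"
      with open(flif_path, "rb") as src, open(partial, "wb") as dst:
          dst.write(src.read(n_bytes))
      # Decoding a small prefix should be much faster than the full
      # lossless decode quoted above.
      subprocess.run(["flif", partial, out_png], check=True)

  # e.g. preview_from_partial("photo.flif", 50_000, "photo_preview.png")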


For me, the license choice makes this project "interesting, but no way this is gonna fly!!", and I immediately think that this is a "try-before-you-buy" usage of the GPL, of which there are many.


How does this compare to packPNM? http://packjpg.encode.ru/?page_id=73


GPLv3 will make adoption problematic... if it were at least LGPL, so that it could be used as a library, adoption would be far more likely.


Those diagrams are incomprehensible. The sorted and unordered WTF-per-diagram rate is too frigging high.


BE SKEPTICAL!

Not a single mention of "signal to noise ratio" (SNR). If you are going to list compression rates you have to give the associated quality -- SNR is used in the literature.

I was suckered into "fractal image compression" by those selling the snake oil back in the 90's -- I am far more skeptical about these things now.
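
For reference, the metric usually reported is peak signal-to-noise ratio over the pixel values. A minimal sketch, assuming two numpy arrays of the same shape holding the original and decoded pixels:

  import numpy as np

  def psnr(original, decoded, peak=255.0):
      # Peak SNR in dB. Only meaningful for lossy codecs: a lossless decode
      # reproduces the input exactly, so the error is zero by definition.
      mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
      return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)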


Could you go into detail about what you mean by "associated quality"?

These are lossless compression algorithms being compared here. The only other pertinent details regarding a lossless compression algorithm are the compression ratio, CPU usage, and memory usage.


Because I am an idiot. Disregard. Move along.


Can the progressive data obscure previous portions of the image?

If so, it can also be a hacky video format.

E.g., a flif would specify how many megabytes it used per second. There would not be discrete "frames".

If a video flif said it used 1 MB per second, and you wanted to see what the video looked like at 10.8 seconds, you would download 10.8 MB.
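
As a sketch of that idea (purely hypothetical: FLIF defines no video semantics, and the URL, byte rate, and timestamp here are made up), fetching only the prefix needed for a given playback position could look like:

  import urllib.request

  def fetch_flif_prefix(url, seconds, bytes_per_second=1_000_000):
      # Fetch only the first seconds * bytes_per_second bytes of the stream,
      # which under the fixed byte-rate scheme above is all you need to
      # render it up to that point in time.
      n = int(seconds * bytes_per_second)
      req = urllib.request.Request(url, headers={"Range": "bytes=0-%d" % (n - 1)})
      with urllib.request.urlopen(req) as resp:
          return resp.read()

  # data = fetch_flif_prefix("https://example.com/clip.flif", 10.8)  # ~10.8 MB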


am I the only one who doesn't understand the horizontal axis on their graphs: "Images sorted on compression ratio"?

what does it mean that the FLIF lines suddenly get worse than PNG over on the right-hand side?


For each algorithm, the images are sorted from best (on the left) to worst (on the right). So if you look at the middle region of the graph, you see how they behave on "median" images.

Another way to view the graph is: the area under the curve corresponds to the total disk space needed to store a large corpus of images in the given format. So obviously lower is better.
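
A small sketch of how a curve like that is built (the ratios below are made up for illustration):

  # Each codec's per-image compression ratios are sorted independently, so a
  # given x position means "that codec's n-th best image", not the same image
  # across curves.
  ratios = {
      "FLIF": [0.41, 0.55, 0.63, 0.78, 0.95],
      "PNG":  [0.52, 0.66, 0.74, 0.88, 0.91],
  }
  for codec, r in ratios.items():
      curve = sorted(r)      # left edge = best-compressed image, right = worst
      total = sum(curve)     # area under the curve ~ total corpus size
      print(codec, curve, round(total, 2))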


I wonder how it compares to lossy (but high-quality) compression codecs.


And then it was written that we were doomed to animations everywhere!


Anywhere on the internet where you can post an image today, you can already choose an animated GIF.

If the site doesn't want animations, it will only show the first frame. For example, try uploading an animated GIF to Facebook.


Between Brotli and FLIF, which is good enough?


Brotli is excellent at text and does a great job at general-purpose compression, but FLIF is written specifically to excel at lossless image compression, so FLIF will be much better unless it's a really poor solution (which the results show it certainly is not).

Likewise Brotli will not beat FLAC at compressing audio, since FLAC is specifically written to do that.


Actually, FLAC doesn't have a very good compression ratio (e.g. see [1]). It does have features that are very important for audio streams: it can be quickly resynced and AFAIK decompression proceeds at nearly-constant speed.

[1] https://news.ycombinator.com/item?id=7893171


Even so, I would be very surprised if FLAC did not outperform general-purpose compressors if the test were done on a wider range of music than a single piano piece (which should be very compressible as audio goes).

edit: went and did a test on a couple of tracks, using brotli: bro --quality 11 --window 24 against flac -8 (best settings in both cases)

  satie - gymnopedie no 1:
  wav: 41.6 MB
  brotli: 28.1 MB
  flac: 14.7 MB

  ram jam - black betty:
  wav: 41.8 MB
  brotli: 35.8 MB
  flac: 26.5 MB

  vangelis - tao of love:
  wav: 29.4 MB
  brotli: 23.0 MB
  flac: 13.4 MB

  meat loaf - paradise by the dashboard light:
  wav: 89.3 MB
  brotli: 78.3 MB
  flac: 58.5 MB
Certainly not a massive test, and there are likely general-purpose compressors which do a better job than brotli; still, I feel rather convinced that they will not do better than FLAC other than in extreme cases such as the piano piece you linked to.
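
For anyone who wants to repeat the test, roughly the same comparison scripted; this assumes the older bro brotli binary (with its --input/--output flags) and the flac encoder are installed, and the sizes are just read back from disk:

  import os, subprocess

  def compare(wav):
      # Same settings as above: bro --quality 11 --window 24 vs flac -8.
      subprocess.run(["bro", "--quality", "11", "--window", "24",
                      "--input", wav, "--output", wav + ".bro"], check=True)
      subprocess.run(["flac", "-8", "-f", "-o", wav + ".flac", wav], check=True)
      size_mb = lambda p: os.path.getsize(p) / 1e6
      print("%s  wav %.1f MB  brotli %.1f MB  flac %.1f MB"
            % (wav, size_mb(wav), size_mb(wav + ".bro"), size_mb(wav + ".flac")))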


How does it compare to slightly lossy jpegs?


Does it use a middle out algorithm? :)


That's exactly what I was wondering! :)


wow. this is amazing!


And the Weissman score is?


I want to believe.


Request denied by WatchGuard HTTP Proxy.

Reason: Category 'Newly Registered Websites'


WatchGuard is a disease. They also listed Hackaday as banned for "Hacking". Ever try to get a site unlisted from them? It's nearly impossible, and may only last a month or so before it goes back on the blacklist.


Looks great!!!

I want to see it in my browser and on websites; hope some browser support comes from Chrome and Firefox.


BMP is free isn't it????


I guess trollers/jokesters aren't appreciated here...


Doesn't the GPL generally make it hard to use this in a commercial product, since you have to provide source code and it affects derivative works too?


Obligatory XKCD: https://xkcd.com/927/

No but I'm excited, this is progress!



