Biggest image in the smallest space (bamsoftware.com)
359 points by fekberg on Sept 2, 2015 | 103 comments



Actually, it decompresses to a 5.8MB PNG. However, many graphics programs use three bytes per pixel when rendering an image, and because this one has incredibly large dimensions, that representation would take up 141GB of RAM.
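(Back-of-the-envelope, assuming the 225,000 x 225,000 dimensions the article gives:)

    # width * height * 3 bytes per pixel, using the article's stated dimensions
    225000 * 225000 * 3   # = 151,875,000,000 bytes, i.e. ~141 GiB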


Better graphics programs will not attempt to put the whole image into RAM, but only decompress the pieces needed for processing it.

I remember working with multi-megapixel images on systems with far less than 1MB of RAM, many years ago. Perhaps this is a good example of how more hardware resources can lead to them being wasted - RAM has grown so much that most images fit completely in it, which has led programmers to assume they can do this for all images without a second thought, when often all that's needed is a tiny subset of the data.

Even if the image data is compressed, there's absolutely no need to keep all of it in memory - just decompress incrementally into a small, fixed-size buffer until you get to the "plaintext" position desired, ignoring everything before that. The fact that it's compressed also means that, with suitable algorithms, you can skip over huge spans at once - this is particularly easy to do with RLE and LZ - and the compression ratio actually boosts the speed of seeking to a specific position.
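A minimal sketch of the idea with zlib's incremental API (it assumes a raw zlib stream rather than any particular image format, and the function names are made up for illustration):

    import zlib

    def stream_decompress(stream, bufsize=64 * 1024):
        # Yield decompressed data a buffer at a time; memory use stays bounded
        # no matter how large the decompressed stream is.
        dec = zlib.decompressobj()
        while True:
            chunk = stream.read(bufsize)
            if not chunk:
                break
            piece = dec.decompress(chunk, bufsize)
            while piece:
                yield piece
                piece = dec.decompress(dec.unconsumed_tail, bufsize)
        tail = dec.flush()
        if tail:
            yield tail

    def read_range(stream, offset, length):
        # Skip everything before `offset`, keep only `length` bytes.
        out, seen = b"", 0
        for piece in stream_decompress(stream):
            if seen + len(piece) > offset:
                start = max(0, offset - seen)
                out += piece[start:start + length - len(out)]
            seen += len(piece)
            if len(out) >= length:
                break
        return out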

Currently, (hopefully...) no application is attempting to read entire video files into memory before processing them, but I wonder if that might change in the future as RAM becomes even bigger, and we'll start to get "video decompression bombs" instead?


This! Command-line programs have no excuse; they should never need to decompress the entire file into memory. GUI image editors and web browsers probably do generally need to, but there are definitely options for dealing with more pixels than you can display.

Anyway, do you know some of these "better" graphics programs that actually behave this way, especially for command-line processing? I am interested in finding more of them.

EDIT: Okay, I have to add & admit that by "no excuse", I actually mean somewhat the opposite. ;) I mean that it's possible to do streaming image processing on compressed formats, not that it's trivial to do or as easy as decompressing the file in a single call. I just wish that programs would handle very large images better, and it sucks when they don't even though I know it's possible - especially programs intended for dealing with large images, like Hugin. Now, I know it's a PITA to tile & stream compressed formats because I've done it, but I'm sure I've written image I/O that decompresses the entire file to RAM 100x more frequently than anything that tiles and/or streams, because I've only handled tiling or streaming myself once, and it was harder. :P


Virtual memory management saves your butt, almost all the time.

It's ridiculous when the compression ratios are so absurd, like in this case.


What you describe sounds a bit like demand paging of a memory-mapped file. The problem with implementing it for a 2d image is that a given rectangular region doesn't map to a contiguous area in memory. It's easy to construct a long thin image that would cause problems for a line-based demand paging strategy. For example, ten pixels high by a billion pixels wide.

Edit: skipping sections of the line to get to the region of interest is fine, I suppose, but what's really needed is a hierarchical quadtree-like organization of the storage, surely...


It is a bit like demand paging, yes. There are formats for which rectangular regions map to contiguous blocks of memory.

TIFF has specs for tiles, strips, subfiles, layers, etc. AFAIK, hardly anyone uses those, but they certainly exist.

JPEG is also composed of 8x8 squares and is easy to stream. The API has per-scanline read & write callbacks IIRC. But you're right; a very very wide one might be a case it can't deal with.


One of the rules of secure programming is that any program that is used in an even remotely security-sensitive context, and anything displaying a Portable Network Graphic is likely to be used in such a context, must be able to specify resource usage limits. In this case that could be dimensions or a limit on the total RAM allowed to be used. Limits need not be hard, either, but could produce a query, for instance, the way very long-running scripts in the browser ask you if they should continue.

Now, go find an API/library for dealing with PNGs that allow you to pass in such a limit, let alone pass in a callback for dealing with violations. Go ahead. I'll wait.

(The Internet being what it is, if there is one, someone will pop up in a reply in five minutes citing it. If so, my compliments to the authors! But I think we can all agree that in general image APIs do not offer this control. In fact, in general, if you submit a patch to allow it, it would probably be rejected from most projects as unnecessarily complicating the API.)

This is the sort of thing that I mean when I say that we are so utterly buried by insecure coding practices that we can hardly even perceive it around us. I should add this as another example in http://www.jerf.org/iri/post/2942 .


> Now, go find an API/library for dealing with PNGs that allow you to pass in such a limit

The article itself links to http://libpng.sourceforge.net/decompression_bombs.html

> These new libpng versions do not impose any arbitrary limits on the memory consumption and number of ancillary chunks, but they do allow applications to do so via the png_set_chunk_malloc_max() and png_set_chunk_cache_max() functions, respectively.


Now, go check your bindings, too. Often binding authors consider these incidental and unimportant and don't expose them.
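(Pillow, for one, has grown its own pixel-count limit rather than exposing libpng's chunk-limit calls - a minimal sketch, assuming a reasonably recent Pillow; "suspect.png" is just a placeholder:)

    import warnings
    from PIL import Image

    # Hypothetical policy: refuse anything over 100 megapixels.
    Image.MAX_IMAGE_PIXELS = 100 * 1000 * 1000
    # Escalate Pillow's decompression-bomb warning into an exception.
    warnings.simplefilter("error", Image.DecompressionBombWarning)

    try:
        img = Image.open("suspect.png")
        img.load()   # force the actual decode
    except Image.DecompressionBombWarning:
        print("refused: claimed dimensions exceed the pixel limit")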


You're overdoing it a bit. I believe the most popular API/library for server-side manipulation of images is ImageMagick, and it has a few options for specifying limits that will easily protect against decompression bombs.

That being said, even with these limits, it's undeniable that something like ImageMagick still has a very large attack surface (especially since it uses many third-party libraries), so it should run in its own heavily unprivileged or sandboxed process.


I should have added my "no credit for showing the existence of limits if your code doesn't use them" criterion in advance. Now it looks like I'm moving the goal posts. Oh well; you can see a similar point in my linked post from several months ago which at least adds credence to the idea that this isn't a new reaction of mine.

As you can see by reading that post as well, I'd also contend it really ought to be in the core API, not an optional thing that defaults to no limits. A sensible default could be imposed, too, though that turns out to be tricky to define the closer you look at it.


I think anyone that accepts scanned documents on their site will add these limits sooner or later out of necessity. At my last job we had them, because occasionally users would upload a 6000 dpi letter-sized page as a 1-bit tiff, or other such nonsense.

I'm actually about to add the same limits at my current job. It is an online slideshow builder, and most of the slides are scanned photographs. Last week someone uploaded a 11k x 17k JPG which hit us during a time of peak load and it caused quite a bit of trouble while the server was trying to build a thumbnail of it.

> I'd also contend it really ought to be in the core API, not an optional thing that defaults to no limits.

These limits really ought to be at a higher level, such as a ulimit on each apache process. In a large code base it requires too much effort to protect against every single avenue of potential accidental denial of service.

For example, at a previous job we had a lot of reports where the date range could be customized. This was fine for a long time, until a large client came along and ran his reports for the past five years. Since it was written in perl, it'd use up the RAM available on the system to generate the report, locking up the only webserver we had.


Ultimately it's going to require a malloc to get the space for all those pixels. That is where things should fail. If not, how is one to specify what the image size limits should be? Ever try to open the Blue Marble images from NASA? In a web browser? Back in 2001?


Any decent library (of any sort) should already be providing hooks for its memory allocation. This isn't necessarily provided as a security feature, and it's common for your callbacks not to get much in the way of information about what's going on, but it will allow you to at least crudely put a cap on the library's memory usage.


I'm inclined to agree. As legitimate image sizes increase, there's more need to sanely limit the resources thrown at such images.

While on vacation last week, I finally grokked that my relatively cheap Nikon camera is producing 6000x4000 images... that's about 100MB uncompressed. As a mobile app developer, I'm becoming painfully aware of how images breaking the 25MB uncompressed line are breaking apps, with some still-in-use 256MB RAM iOS devices crashing when memory fills under normal usage plus a few instances of such large images (1-2 vacation photos can easily overwhelm available memory).


ulimit may help. Perhaps it could be (maybe it already is) applied to Chrome's per-tab processes.

Not sure of a Windows equivalent - Microsoft has deprecated Windows System Resource Manager, and I don't know of a good replacement (except going fully to Linux).



And almost every program that tries to display it.


Some image programs will allocate space based on the metadata in the file. The actual image data isn't required for that. So if the image data is corrupted - say by a byte or two - or even missing, there's nothing stopping the reported size from being in the gigapixel range.
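(For PNG specifically, the claimed dimensions sit at a fixed offset in the IHDR chunk at the front of the file, so a decoder "knows" the size before touching any pixel data - a sketch:)

    import struct

    def png_claimed_size(path):
        # 8-byte signature, 4-byte chunk length, 4-byte "IHDR" type,
        # then width and height as big-endian 32-bit integers.
        with open(path, "rb") as f:
            header = f.read(24)
        width, height = struct.unpack(">II", header[16:24])
        return width, height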


That's impressive. Here are some other compression curiosities.

http://www.maximumcompression.com/compression_fun.php

A 24 byte file that uncompresses to 5 MB; another file with good compression under RAR but almost no compression under ZIP; and a compressed file that decompresses to itself.


This isn't a decompression bomb, but here are some fun virtual disk images I found using AFL fuzzer. One of the files is 329 bytes, but causes qemu to consume 4 GB of heap trying to process it. This has interesting consequences for the public cloud, where people can upload any old stuff and it is usually processed immediately by 'qemu-img'.

https://bugs.launchpad.net/qemu/+bug/1462949

(I have a big collection of these, but most of the bugs have now been fixed in qemu)


The file that decompresses to itself is a work of art. How does one even go about creating something like this?



Like this: http://research.swtch.com/zip. Fascinating read.


Written by THE Russ Cox?


I have no idea.

There are people who take special interest in compression. The Hutter Prize would be a good place to find them. http://prize.hutter1.net/

It's a bit old now.

> Restrictions: Must run in <10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD

I'd be interested to see what can happen without those restrictions.


> I'd be interested to see what can happen without those restrictions.

If you did not have time limits you could exhaustively search through Pi looking for your data, then store just the offset into Pi.

Or if the offset got too large then Pi to the power of lots of random numbers. One of those numbers will, somewhere, have your data. But it's infeasible to search for it.


A common misconception.

There is no universal compression algorithm, by the pigeonhole principle.

And your compression scheme won't work by the simple fact that the offset of where in Pi your data lies will, on average, take the same number of bits to store as the data itself. Ditto, with pi to the power of lots of random numbers, the offset and the power will, on average, take the same number of bits to store as the data itself.


I'll add mine: https://brage.info/hello

It's a 1MB file that decompresses to 261 tredecillion bytes of "Hello, World".

No terribly clever stream manipulation; it's a perfectly normal gzip file, other than the size. The generation script is here: http://sprunge.us/VhFc, but see if you can figure it out without peeking.


Warning to my fellow chumps: that is a direct link to the gzip bomb. Clicking on it causes a download, which results in Windows Defender promptly losing its mind as it tries to decompress the whole thing looking for viruses. (win 8.1)

Interestingly enough, it's not using up any memory, but it is hitting the disk at 55MB/s, so I'm guessing it's decompressing it to disk, and will eventually crash when it fills up my hard drive.

EDIT: It gave up after fifteen minutes. That's nice, now I don't have to reboot. There was about 2 gigabytes of stuff in my temp folder when I ran disk cleanup, don't know how much of that was gzip bomb output.


May I suggest for the merely curious:

    curl https://brage.info/hello | od -c | head -40


Is that short or long scale tredecillion?


Actually, it decompresses into a 1.4MiB file, which decompresses into another 1.4MiB file, recursively.


I sized it to fit on a floppy disk. :-3

But, no. There's a finite number of levels, and it blows up pretty quickly.


Malware scanners that search inside archives really love zips that decompress recursively forever.


Here's a fascinating thought.

It should be possible to write an archive file that decompresses to a slightly different archive file. Say, the original archive file plus a zero byte. (Things get hairy because of the checksum in most archive file formats, but it still should be possible.)

Then a malicious party writes an archive file that decompresses to two slightly different archive files. And sets it up so that, say, 128 levels in, iff you follow a specific path in said tree, you get to the actual malicious content. You can't get to it unless you follow the specific path, and you can't find it unless you do substantially more than just naively decompressing files (as you won't be able to decompress 2^129 - 1 files).


Maybe someone takes the recursive file to a new level and creates one that decompresses to n new identical files - so it decompresses to a tree.


Better yet: a file that decompresses to multiple slightly different versions of itself. Say, one that is a version of itself with an additional one, and one that is a version of itself with an additional zero.


That's neat, but I still think the self-reproducing r.zip from "zip files all the way down" is the best compression trick I've seen:

http://research.swtch.com/zip


There's also dynamic generation of output by specifying a custom filter in a RAR archive that is executed during decompression: http://blog.cmpxchg8b.com/2012/09/fun-with-constrained-progr...


Photoshop was able to show it: http://i.imgur.com/7EdBySv.png (Macbook Pro, 16GB RAM)


Photoshop is an example of a graphics program that doesn't attempt to read the entire image into memory.

How much RAM did it actually use?


Difficult to say. I don't completely understand the activity monitor RAM column: http://i.imgur.com/QS3NPQQ.png Looking at the activity monitor details, we see that it uses something on the order of 2.54 GB of real memory. I suspect the rest is mostly compression.


Tells you why it's a solid image application. Kudos to them.


If you follow the "related reading" link on the bottom of TFA, you come to a page by Glenn Randers-Pehrson discussing how libpng deals with decompression bombs. On the bottom of that page you find the following curious note; anyone know what to make of it?

""" [Note for any DHS people who have stumbled upon this site, be aware that this is a cybersecurity issue, not a physical security issue. Feel free to contact me at <glennrp at users.sourceforge.net> to discuss it.] """


He's presumably had problems with people confusing "decompression bombs" with the blowy-up kind and sending him panicky e-mails.


Another possibly apocryphal case of linguistic collisions resulting in governmental interest: When the MIT Media Lab started doing work on intelligent kitchen counters, they found that a lot of shadowy government agencies wanted to talk to them about their research into "counter intelligence".


Ah, of course. That didn't even cross my mind, for some reason; bomb in this context was so obviously not a physical device.


What to make of it? Seems clear enough; he's (half-jokingly?) afraid that someone in the federal government will see the page and think "oh no! bombs! explosions! TERRORISM!", so he's making clear that this is only a computer analogy.


I think it is related to the word "bomb" existing on the page


DHS = Department of Homeland Security?


PNGs also have optional compressed text metadata chunks, and it's possible to sneak a decompression bomb into one of those as well. You can get about a factor of 1000 in the compression -- 1MB of 'a' winds up being about 1040 bytes. You can have multiple iTXt chunks, and it appears that the chunk size is only limited to 2^31-1.

See https://github.com/python-pillow/Pillow/blob/master/Tests/ch... for a quick way to generate some of these.
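(The roughly-1000:1 figure is easy to reproduce with zlib directly; the exact byte count varies with the compression level:)

    import zlib

    payload = b"a" * 1000000          # 1 MB of 'a'
    packed = zlib.compress(payload, 9)
    print(len(packed))                # on the order of 1 kB, i.e. ~1000:1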


Reminds me of how you could crash a FidoNet node by sending it some big empty files, so when they got automatically unzipped they filled up the hard drive :)


I think this kind of thing was common even a few years ago in DoS'ing mail gateways that uncompressed and scanned various archive formats. Things like really huge files when uncompressed or ridiculously deep nested directory structures.

I think most software these days is immune to such tricks, or at least has tunables to reduce the chance of such tricks causing harm.


Zip bombs, a relative of the fork bomb.

https://en.wikipedia.org/wiki/Zip_bomb

The billion laughs XML attack is also lovely in its simplicity.

https://en.wikipedia.org/wiki/Billion_laughs


I was doing a presentation about various bombs last year and crashed PowerPoint by copy-pasting billion laughs in a slide. Simple but extremely effective.


Not sure what is worse: that MS has Powerpoint interpreting randomly pasted XML, or that they do not have handling for excessive memory usage beyond crashing the whole program.


There was also the trick of infinitely recursive zips that kept decompressing to a copy of themselves.

Zip-bombing was such a problem for our corporate network in the late 1990s that inbound e-mail attachments were deliberately discarded for a while. Chaos ensued.



This does wonders when used in favicons :D


I just tried it on a locally served page, and my browser handles it quite well (although it won't really display it).


Only on Firefox and Chrome, since they fixed it: https://github.com/benjamingr/favicon-bug


You just made my stomach turn at the thought


Fun fact: When trying to upload this as a profile picture (on a site I host myself), chromium crashes.


Having dealt with and printed a lot of very large images, e.g., 60k x 60k pixels, I have been on the lookout for image processing software that never decompresses the entire image into RAM, but instead works on blocks, scan lines, or blocks of scan lines, staying in constant memory and streaming to and from disk. For example, the ImageMagick fork GraphicsMagick does a much better job of this than ImageMagick. What other software is out there that can handle these kinds of images?


The key is not to store it in raster form in RAM. Either tiles (like GIMP) or - my preference - Z-ordering. Then a user can zoom in and pan around easily - you let the system swap and it won't be bad at all. If they zoom out, though, you probably want to store MIP maps of it.

Swap works well for this as long as your data has good locality; huge raster images don't.

But no, I'm not aware of any software that handles stuff like that well - except GIMP's tiling, but that's not going to help when zoomed out.


What does Z-ordering mean in this context?

I definitely want to avoid swap at all costs and find things that are designed to tile & stream instead. The difference between GraphicsMagick resizing an image by streaming and ImageMagick resizing an image that hits swap is orders of magnitude - seconds versus hours.


>> What does Z-ordering mean in this context?

You divide the image into quarters and store each quarter as a contiguous block of memory. Do this recursively.

Normally we'd index into the pixel data using pixel[x,y]. You can get Z-ordering by using pixel[interleave(x,y)] where the function interleave(x,y) interleaves the bits of the two parameters.

This works fantastically well when the image is a square power of two, and gets terrible when it's one pixel high and really wide. I think using square tiles where each tile is Z-ordered internally is probably a useful combination.

For my ray tracer I use a single counter to scan all the pixels in an image. I feed the counter into a "deinterleave" function to split it into an X and Y coordinate before shooting a ray. That way the image is rendered in Z-order. That means better cache behavior from ray to ray and resulted in a 7-9 percent speedup from just this one thing.
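(A minimal sketch of such an interleave/deinterleave pair, done bit by bit for clarity - real code would use bit-twiddling tricks or lookup tables:)

    def interleave(x, y, bits=16):
        # Morton/Z-order index: bit i of x lands at bit 2i, bit i of y at bit 2i+1.
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i)
            z |= ((y >> i) & 1) << (2 * i + 1)
        return z

    def deinterleave(z, bits=16):
        # Inverse: even bits rebuild x, odd bits rebuild y.
        x = y = 0
        for i in range(bits):
            x |= ((z >> (2 * i)) & 1) << i
            y |= ((z >> (2 * i + 1)) & 1) << i
        return x, y

    # pixel[x, y] stored in Z-order becomes flat[interleave(x, y)]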

Once you have data coherence, swapping is not a big deal either in applications where you're zoomed in.


> You can get Z-ordering by using pixel[interleave(x,y)] where the function interleave(x,y) interleaves the bits of the two parameters.

That is pretty cool (also the raytracing application), but most surprising to me is that I remember the interleave operator from reading the INTERCAL[0] docs ... I never even considered the function could actually be applied to something useful :-)

[0] one of the early esoteric programming languages, designed to be "most unlike any other language". It also features a "COME FROM" statement (because GOTO is considered harmful), which iirc also actually has a parallel in some modern programming paradigm.


Nuke works in scanlines like this, and can process a whole tree of operations only loading the input lines necessary for the current output row. The SDK docs explain the architecture somewhat: https://www.thefoundry.co.uk/products/nuke/developers/90/ndk...


I used to work on a scanning SMTP/HTTP proxy and even back then it wasn't unknown for people to send crafted decompression bombs to attempt to crash the services. We handled it by estimating the total uncompressed size upfront (including sub archives) and throwing out anything with a suspiciously large compression ratio.

I imagine that .pdf files are another avenue for mischief. They contain lots of chunks which may be compressed in varying ways.


Neat. I needed to make very large PNG bombs recently and toyed with the idea of doing it "manually." In the end I decided to take the lazy route and use libpng[1].

[1]: https://bitbucket.org/tetrep/pngbomb/src/03dfc95065d78562c15...


This works wonderfully! With an image size 123456x123456, I made this happen: http://i.imgur.com/2Dgrazj.png

I killed it at about 25GB memory usage; who knows how high it would have climbed otherwise.


That's cool. Presumably the same "attack" could be applied to any file format that uses DEFLATE.

From a legal standpoint, I'd be wary about following through with the author's suggestion of "Upload as your profile picture to some online service, try to crash their image processing scripts" without permission. Sounds like a good way of getting into trouble.


Yes, but on the other hand it's a good reminder for everyone processing user-provided files to sanity-check or convert them to a canonical format in a sandboxed and resource-limited process.


What about responsibly disclosing the bug you found with steps to reproduce, the impact and the solution? As long as you only timed out the backend without entirely crashing it, I can't imagine any sane company would prosecute you for trying to improve their service with this level of detail.


How do you know that you're only going to time out the backend without entirely crashing it, without actually attempting it? It's a kinda Schrödinger's cat scenario.

It's all well and good saying that you had good intentions, but if you can't prove it, and they didn't invite you to test it (via a responsible disclosure policy), then I would steer clear.

While I wouldn't personally attempt to prosecute anyone for responsibly disclosing a bug to me, that doesn't mean that BigCorp™ wouldn't.


>The image is almost entirely zeroes, with a secret message in the center.

Too pressed for time - did anyone look? What is it?


It is "SORRY, OUR PRINCESS IS IN ANOTHER PIXMAP"


I realize that this is beside the point, but going on the title alone we could write a script that generates an 'infinite' (max out available memory) sized image.


Everyone's focusing on this being a PNG problem, but actually if my server unzips a 420-byte file into a 5MB file of any kind, I'd say that's the first red flag. Assuming some sort of streaming decompression, you could write an output filter that shuts off the decompressor when it's seen a factor of X bytes. A reasonable factor would be 10 - which in this case would have halted bzip decompression at 4kB.

This would probably be a trivial patch to bzip2. But I like the idea in general of passing a "max input/output ratio" to any process or function that might yield far more output than input.
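(A rough sketch of that kind of guard, using Python's bz2 since bzip2 came up; the 10:1 threshold is the arbitrary knob, and the max_length argument needs Python 3.5+:)

    import bz2

    MAX_RATIO = 10                 # halt once output exceeds 10x the input seen
    CHUNK = 64 * 1024

    def bounded_decompress(stream):
        dec = bz2.BZ2Decompressor()
        consumed = produced = 0
        while not dec.eof:
            if dec.needs_input:
                data = stream.read(CHUNK)
                if not data:
                    break              # truncated input
                consumed += len(data)
            else:
                data = b""
            out = dec.decompress(data, CHUNK)   # never inflate more than CHUNK at once
            produced += len(out)
            if produced > MAX_RATIO * consumed:
                raise ValueError("suspicious compression ratio, giving up")
            yield out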


The real problem is image handling libraries that blindly render images into too-large objects where unnecessary. While full-res uncompressed images are very convenient under the hood, the image library should inherently handle anything "too big" gracefully. Instead, apps are often prone to crashing when someone feeds in a ridiculously large image.

A 420B -> 5MB expansion should not be a "red flag" because there is nothing about it (including the subsequent attempt to process a 141GB uncompressed image) which cannot be handled appropriately in software. Flagging such ratio limits is arbitrary, and setting an arbitrary limit is usually a sign the software is incorrect, not the data.


A ratio limit is a heuristic.

There is an upper-limit to how much information you can compress into a given space. (Note that we may want to write a pathological program that is very small and allocates a lot of information-free memory. But that's not decompression.)

If we accept the premise then we can look at another approach to solving this problem, once and for all! I like examining memory allocation because it's so general. But there may be another way: we can examine the input to estimate the compression ratio.

The problem here is that image decompression is apparently giving strangers the ability to provide an arbitrary N and say "please loop N times and/or allocate N bits". A modern CPU/RAM is overwhelmed by an N around 30 bits (and for many workloads even a 20-bit N is too much). This is a root cause of many problems!

You know, I'm going to go out on a limb here and make a bold assertion: I assert there is a very safe upper bound on the decompression ratio, and that for any real algorithm you can indeed examine the input to determine whether N exceeds your allowable threshold. 10x might be a bit low (although I doubt it), so let's be generous and say 100x. (Which seems crazy. Nothing that I know of, not even text, compresses that well.)

This means that I believe any image format, for example, has a trivially calculable N (for example, width*height in pixels). I would argue that in the general case (unless you are doing some sort of compsci research) the image file size should be related to N. That is, if the image is 10 bits wide and 10 bits high, we should expect a roughly 20-bit file size (i.e. on the order of 2^20).


Looks handy for large image processing tests, thanks.


Trying to run the program and create my own image; a few questions, though: what did you use for secret.png? Any old PNG?

Are you using PIL or pillow?


Cool, but most web sites wouldn't allow you to upload a 5MB picture as a profile picture. Or do they, these days?


Is there a way to check for decompression bombs? I'd like my software to be able to unzip zip files safely.


A python example:

    import zlib

    def decompress(data, maxsize=262144):
        # Ask zlib for at most `maxsize` bytes of output; anything left over
        # in unconsumed_tail means the full output would exceed the limit.
        dec = zlib.decompressobj()
        out = dec.decompress(data, maxsize)
        if dec.unconsumed_tail:
            raise ValueError("Possible zip bomb")
        return out


Monitor zip files as they decompress. Halt the decompression process if the ratio between the decompressed size and the zip file size exceeds a fixed threshold (for example, something like 10:1).


If you do that, pick something a little more extreme. When using BEM, for example, your CSS becomes pretty repetitive and you easily get better than a 10:1 ratio with gzip.


You can do it in zlib -- there's one call that effectively does the whole thing, and one that fills a buffer. You can check to see how much input has been consumed; if there's more, then you know you're getting large. It's up to the friendly programmer to decide when large is too large.


Sandbox them. We once created a single-core VM with 1024MB of RAM and a 6GB disk, and built a tiny API around image decompression and scaling. Never had any issues with it, but it was a simple way of preventing things from filling up the regular web servers.


Yes. A method whose only purpose is to answer the question 'is this file larger than <parameter>'. If it is, don't go further.


It's probably using middle-out.


I wonder what the ratio would look like if the equivalent was done with a JPEG instead of a PNG.


Did anybody try to upload it to Facebook as a profile picture?


"Your photo couldn't be uploaded due to restrictions on image dimensions. Photos should be less than 30,000 pixels in any dimension, and less than 41,000,000 pixels in total size."


The title was changed and is now more opaque and less descriptive.


Yeah, I agree. The original title was a lot more descriptive.


The original title is the one the author gave it. The HN guidelines ask you not to change that unless it is linkbait or misleading.


Ah yes, of course. Thanks, I'll keep that in mind for the future!


righto pied piper


This is a very easy form of attack in security circles.



