I'm not sure you realise the scales involved. 5MB (40Mb) is enough to create 2^41943040 ≈ 1e12626113 different files; that's a number with ~12.6 million digits.
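A quick sanity check of that count, as a minimal Python sketch (assuming "5MB" here means 5 MiB = 5 × 1024 × 1024 bytes):

    from math import log10

    BITS = 5 * 1024 * 1024 * 8  # 5 MiB in bits: 41,943,040
    # The number of distinct files is 2**BITS; count its decimal digits
    # via logarithms (str(2**BITS) would also work, but converting a
    # 12.6-million-digit integer to a string is slow and can hit
    # Python's int-to-str digit limit).
    digits = int(BITS * log10(2)) + 1
    print(digits)  # 12626114 digits, matching the ~1e12626113 above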
For reference, the number of atoms in the visible universe is ~1e80 (1e78 to 1e82), so you'd generate 1e12626033 images per atom (doesn't look any different, does it? Well, it's got 80 zeroes chopped off). For a second reference, assuming you're a standard-size hooman you have 1e14 cells tops (estimates vary; "An estimation of the number of cells in the human body"[0] gave 3.7e13).
So no, it's not possible. Even if you knock off orders of magnitude to account for invalid jpegs, and we're absurdly generous and cut more than twelve million of them, that still leaves 1e12000 images to generate[1].
They're also all contained in π (assuming π is normal, which is widely believed but unproven). And the thought experiment is but a variant of the infinite monkey theorem[2].
[0] http://informahealthcare.com/doi/abs/10.3109/03014460.2013.8...
[1] if you can compute full tilt until the heat death of the universe you have ~1e47 seconds, so you'd need to compute 1e11953 images per second… or 1e11873 images per second per atom in the universe
[2] http://en.wikipedia.org/wiki/Infinite_monkey_theorem
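The arithmetic in [1] is just exponent subtraction; a minimal sketch using the rough powers of ten quoted above:

    images_exp  = 12000  # ~1e12000 candidate files after generous pruning
    seconds_exp = 47     # ~1e47 seconds of computing before heat death
    atoms_exp   = 80     # ~1e80 atoms in the visible universe

    print(images_exp - seconds_exp)              # 11953 -> 1e11953 images/second
    print(images_exp - seconds_exp - atoms_exp)  # 11873 -> 1e11873 per atom per second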
I thought of a similar thing for an mp3: a computer producing every possible "song". But even at an incredibly low resolution of 100 kilobytes for a 3-minute song, you're looking at 2^800,000 possibilities (about 1e240824).
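Same back-of-the-envelope in Python, assuming 100 kilobytes means 100 × 1000 × 8 = 800,000 bits:

    from math import log10

    bits = 100 * 1000 * 8          # a 100 kB "song" as raw bits
    print(bits)                    # 800000 -> 2**800000 possible files
    print(round(bits * log10(2)))  # 240824 -> roughly 1e240824 of them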
Another cool one is the number of possible games of Go: ~1e768. This is one reason top computers still can't touch top humans at the game.
I don't think the number of possible games has anything to do with the difficulty of programming computers to play Go. There are 10^120 possible games of chess, for instance - still 40 orders of magnitude larger than the number of protons in the observable universe - and yet computers roundly defeat humans at chess these days.
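For the scale comparison, the gaps are again just exponent subtraction (all three figures are loose estimates):

    chess_exp   = 120  # Shannon's classic ~1e120 game-tree estimate for chess
    go_exp      = 768  # the ~1e768 figure for Go quoted above
    protons_exp = 80   # ~1e80 protons in the observable universe

    print(chess_exp - protons_exp)  # 40 orders of magnitude
    print(go_exp - chess_exp)       # and Go dwarfs chess by another 648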
But what if we imagine/allow/invent a series of algorithms that 'pluck' a good picture of Usain Bolt from the set of 5MB files without inspecting too many of them (let's say O(log N) or O(log log N), where N is the number of files)?
That's only ~25 bits of entropy to sort through in the O(log N) case (log N ≈ 4.2e7 candidates, and log2 of that is ~25), or ~5 bits in the O(log log N) case, and even less if the algorithm were to work in some sort of sparsity domain.
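A minimal sketch of where the ~25 and ~5 bit figures come from, with N = 2^41943040 possible files:

    from math import log2

    log_N = 5 * 1024 * 1024 * 8  # log2(N) = 41,943,040

    # Inspecting O(log N) files means choosing among ~4.2e7 candidates:
    print(log2(log_N))           # ~25.3 -> ~25 bits of entropy
    # Inspecting O(log log N) files means choosing among ~25 candidates:
    print(log2(log2(log_N)))     # ~4.7 -> ~5 bits of entropy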
You remind me of the space pirate Pugg, as encountered in the sixth sally of Trurl and Klapaucius as described in The Cyberiad by Stanisław Lem.
That is to say, however much you pre-sort your data, there will still be a virtual infinity of files left, all abiding by your pre-set conditions, but all still uninteresting.
I don't understand you. The amount of human art, or indeed the maximum possible number of human artistic expressions (from the beginning of time until the end of the universe), is, I believe, far smaller than the number of pictures you would have left after your sorting filter.
I don't know what you mean by a 'sorting filter', but what I'm talking about is an algorithm that takes >40Mb of data as input (pictures of Bolt and of Athens, for example) and uses this data to output ~40Mb of Bolt in Athens.
I mean human artists as an analogy: humans can 'select' a picture of Bolt in Athens without enumerating all the others. I don't see why computers couldn't.