On a more serious note, I think this can still be gamed a little bit.
For instance, DEFLATE has default variable-width encoding tables built in, based on letter frequencies of the English language as measured some time ago. If you use an algorithm with any kind of defaults, those defaults count against the cost, but tuning them for a solitary input doesn’t necessarily cost more.
DEFLATE compressed data consists of blocks. Each block can be either uncompressed, or compressed with a pre-defined Huffman table (the variant mentioned in the parent comment), or compressed with a dynamic Huffman table. Many DEFLATE compressors will consider all three options (at higher compression levels) and choose the best. So one does not need to tune DEFLATE to get the advantage of dynamic Huffman tables. And any kind of arithmetic coding (or the new hotness, Asymmetric Numeral Systems) would do a better job still.
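You can actually see the compressor making this choice. Per the DEFLATE spec (RFC 1951), the first bits of each block are BFINAL (1 bit) and BTYPE (2 bits: 00 = stored, 01 = fixed Huffman, 10 = dynamic Huffman), packed starting from the least significant bit of the first byte. A small sketch using Python's zlib in raw-deflate mode (wbits=-15 strips the zlib header so byte 0 is the start of the first block):

```python
import zlib

data = b"abracadabra" * 200

# Raw DEFLATE stream (no zlib header), max compression level.
co = zlib.compressobj(9, zlib.DEFLATED, -15)
raw = co.compress(data) + co.flush()

# Bits are packed LSB-first: bit 0 = BFINAL, bits 1-2 = BTYPE.
first_byte = raw[0]
bfinal = first_byte & 1
btype = (first_byte >> 1) & 0b11

names = {0: "stored", 1: "fixed Huffman", 2: "dynamic Huffman"}
print(f"first block: BFINAL={bfinal}, BTYPE={btype} ({names[btype]})")

# Sanity check: the stream round-trips.
assert zlib.decompress(raw, -15) == data
```

Which BTYPE you see depends on the input and the level: zlib estimates the cost of each option and picks the cheapest, so a dynamic table is only emitted when its header overhead pays for itself.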
For a competition like this, all simple approaches have probably been tried already. The current leader has been tuning their approach for 5 years: http://mattmahoney.net/dc/text.html#1159