> Stranger Things season 2 is shot in 8K and has nine episodes. The source files are terabytes and terabytes of data. Each season required 190,000 CPU hours to encode.
High resolution video is larger than you might expect.
Is there such a thing as a video, image, or audio format that is inefficient for viewing but radically more efficient for transcoding? Such a format would be useful for a number of applications.
Sure, raw video is one example of that. It takes an insane amount of bandwidth and storage, but you can load it from an SSD fast enough that it's not an issue. Really, though, codecs need to be closely related before already having the content encoded as X helps you encode it to Y.
For comparison: my camera uses 25 megabytes of storage for each single raw frame. I have no idea how much raw audio takes up, but you would have to move a lot of SSDs around to work that way.
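To put a rough number on that, a quick back-of-the-envelope sketch in Python (the 25 MB/frame figure is the one from the comment above; the 24 fps frame rate and one-hour runtime are assumptions):

```python
# Back-of-the-envelope data rate for raw video.
MB_PER_FRAME = 25          # raw frame size quoted in the comment above
FPS = 24                   # assumed frame rate
RUNTIME_SECONDS = 60 * 60  # assumed one hour of footage

mb_per_second = MB_PER_FRAME * FPS
total_tb = mb_per_second * RUNTIME_SECONDS / 1_000_000

print(f"{mb_per_second} MB/s sustained")          # 600 MB/s
print(f"~{total_tb:.1f} TB per hour of footage")  # ~2.2 TB
```

At roughly 600 MB/s sustained, a single stream already saturates a fair chunk of an SSD's bandwidth, and an hour of footage is on the order of 2 TB.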
Interesting idea. Could there be a way of making a format that requires some sort of extra work to access? Sort of like a Bitcoin setup, where the local device is a miner (but one doing actually useful work) and it gets access to the video as part of that computation. Effectively, the content creator would trade computation for content.
I'm sure it's totally impossible in theory and in practice.
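For what it's worth, the gating half of that idea is easy to sketch. A toy Python version using a hashcash-style puzzle (all names and the difficulty value are made up for illustration, and the work here is deliberately useless; making the work be the useful transcoding itself is the hard part):

```python
import hashlib
import os

DIFFICULTY_BITS = 20  # arbitrary illustrative difficulty

def solve_puzzle(challenge: bytes) -> int:
    """Client side: burn CPU until a nonce hashes below the target."""
    target = 1 << (256 - DIFFICULTY_BITS)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_and_release(challenge: bytes, nonce: int, content_key: bytes):
    """Server side: cheap verification, then hand over the decryption key."""
    target = 1 << (256 - DIFFICULTY_BITS)
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return content_key if int.from_bytes(digest, "big") < target else None

challenge = os.urandom(16)
key = os.urandom(32)
nonce = solve_puzzle(challenge)          # expensive on the viewer's device
assert verify_and_release(challenge, nonce, key) == key
```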
In many ways, older video codecs. Newer codecs gain more compression by doing more and more complicated encoding and decoding, trading CPU for size.
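That tradeoff is easy to see even outside video. A minimal Python sketch, using zlib compression levels as a loose stand-in for "fast but bulky" versus "slow but small" (the sample data is made up for illustration; this is not actual video encoding):

```python
import random
import time
import zlib

# Same CPU-for-size tradeoff in miniature: higher compression levels
# spend more CPU to produce smaller output.
random.seed(0)
data = bytes(random.choices(b"abcdefgh ", k=2_000_000))  # moderately compressible

for level in (1, 9):  # 1 = fast but large, 9 = slow but small
    start = time.perf_counter()
    size = len(zlib.compress(data, level))
    elapsed = time.perf_counter() - start
    print(f"level {level}: {size:>8} bytes in {elapsed:.3f}s")
```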