
A 4K 30fps video sensor capturing an 8-bit-per-pixel (Bayer pattern) image is producing roughly 2 gigabits per second. That same 4K 30fps video on YouTube will be 20 megabits per second or less.

Luckily, it turns out relatively few people need to record random noise, so when we lower the data rate by 99% we get away with it.
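
Back-of-the-envelope, assuming UHD 3840x2160 (cinema 4K is slightly wider), the numbers work out like this:

    # Raw 8-bit Bayer sensor rate vs. a ~20 Mbit/s delivered stream
    width, height, bits_per_pixel, fps = 3840, 2160, 8, 30

    raw_bps = width * height * bits_per_pixel * fps  # ~1.99e9 bits/s, ~2 Gbit/s
    youtube_bps = 20e6                               # ~20 Mbit/s or less

    print(f"raw sensor rate: {raw_bps / 1e9:.2f} Gbit/s")
    print(f"reduction:       {100 * (1 - youtube_bps / raw_bps):.1f}%")  # ~99%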



1. I believe in modern cameras it's 10+ bits per pixel, undebayered, but I'm willing to be corrected. Raw-capable cameras capture 12+ bits of usable range, and data rates can far exceed 5 gigabits per second (rough numbers after this list).

2. Your second paragraph is a misunderstanding. Unless you really screw up the shooting settings, it is not random noise but pretty useful scene data, available for mapping to a narrow display space in whatever way you see fit.
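
The same arithmetic at higher bit depths; the bit depths and frame rates here are assumptions for illustration, since whether the raw rate passes 5 Gbit/s depends on resolution and frame rate as much as bit depth:

    # Undebayered raw rates at a few assumed bit depths and frame rates (UHD)
    width, height = 3840, 2160

    for bits in (10, 12, 14):
        for fps in (30, 60):
            gbps = width * height * bits * fps / 1e9
            print(f"{bits}-bit @ {fps} fps: {gbps:.1f} Gbit/s")
    # 10-bit @ 30 fps is ~2.5 Gbit/s; 14-bit @ 60 fps is ~7.0 Gbit/s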


I read the second paragraph as a reference to the compressibility of the resulting stream, not to the contents of the encoded/discarded data.

Only random noise is incompressible, so realistic scenes allow compression ratios over 100x without a 100x quality loss.
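
A rough way to see that, with a general-purpose lossless compressor standing in for a video codec (an imperfect stand-in, since real codecs are lossy and also exploit spatial and temporal redundancy):

    import os
    import zlib

    def ratio(data: bytes) -> float:
        """Compressed size as a fraction of the original."""
        return len(zlib.compress(data, 9)) / len(data)

    noise = os.urandom(1_000_000)        # random noise: no redundancy to exploit
    scene = bytes(range(256)) * 4000     # crude stand-in for structured scene data

    print(f"noise: {ratio(noise):.3f}")  # ~1.0, essentially incompressible
    print(f"scene: {ratio(scene):.4f}")  # a tiny fraction of the original size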


It is not even about YouTube data rates but about display media limitations. No realistic scene's full sensor data is going over the wire, simply because of that: 99% of it has to be discarded because it cannot be displayed. And it cannot be discarded automatically, because what to discard is a creative decision. Even if you could compress 5+ gigabits per second down to 20 megabits per second losslessly, it would be a pure waste of CPU.

Also, noise is desirable. Even if you could magically discern on the fly, at 30 or 60 fps and 5 gigabits per second, what is noise and what is fine detail and texture in a real scene (which is technically impossible; remember, it is a creative task, since you cannot even determine neutral grey automatically), eliminating noise would leave a fake-ish, washed-out look.
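
As a minimal sketch of that "mapping to narrow display space" (assumed 12-bit linear data squeezed into 8 bits; exposure and gamma stand in for the creative decisions that determine which data gets kept):

    import numpy as np

    # Hypothetical 12-bit linear sensor values (0..4095) for one UHD frame
    sensor = np.random.randint(0, 4096, size=(2160, 3840), dtype=np.uint16)

    def to_display(linear12, exposure=1.0, gamma=2.2):
        """Map 12-bit linear data into an 8-bit display range.

        exposure and gamma are creative choices: different values keep
        different parts of the sensor's range and discard the rest.
        """
        x = np.clip(linear12 / 4095.0 * exposure, 0.0, 1.0)  # normalize + expose
        x = x ** (1.0 / gamma)                               # gamma-encode
        return (x * 255.0 + 0.5).astype(np.uint8)            # quantize to 8 bits

    frame_8bit = to_display(sensor, exposure=1.5)  # one of many possible grades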



