
I chuckled at that, too. I struggle to imagine what would be on the receiving end of that. I doubt most desktop machines could decode a 60 fps 4K MJPEG stream, let alone a phone. Maybe they sell an SoC for the other side too.



JPEG, and by extension MJPEG, is far simpler to decode than most video codecs, and also massively parallelisable. 4K at 60 fps is just under 500 MP/s, which is easily achievable on a GPU.
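
For reference, the arithmetic behind that figure (a trivial back-of-envelope check):

  /* back-of-envelope: pixel throughput of a 4K (3840x2160) stream at 60 fps */
  #include <stdio.h>

  int main(void) {
      long long px_per_frame = 3840LL * 2160;     /* ~8.3 MP per frame   */
      long long px_per_sec   = px_per_frame * 60; /* ~498 MP/s at 60 fps */
      printf("%.1f MP/frame, %.1f MP/s\n",
             px_per_frame / 1e6, px_per_sec / 1e6);
      return 0;
  }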


Sorry, but I've benchmarked this recently as part of a project I'm working on, and JPEG decoding with libjpeg-turbo consumed far more CPU than software H.264 decoding (a sketch of the kind of decode loop I mean is below).

The encoding side is very expensive with H.264, but as I understand it, a lot of that work goes into choosing the right reference frames, which is what makes the higher compression and faster decode possible.
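
To make the comparison concrete, a minimal decode-timing loop with the TurboJPEG API looks roughly like this (a sketch, not the actual benchmark harness; the file name and iteration count are placeholders):

  /* Rough sketch: time repeated JPEG decodes with libjpeg-turbo's TurboJPEG API.
   * Build: gcc -O2 decode_bench.c -lturbojpeg -o decode_bench
   * "frame.jpg" and the iteration count are placeholders. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <turbojpeg.h>

  int main(void) {
      FILE *f = fopen("frame.jpg", "rb");
      if (!f) { perror("frame.jpg"); return 1; }
      fseek(f, 0, SEEK_END);
      long jpeg_size = ftell(f);
      fseek(f, 0, SEEK_SET);
      unsigned char *jpeg_buf = malloc(jpeg_size);
      if (fread(jpeg_buf, 1, jpeg_size, f) != (size_t)jpeg_size) {
          fprintf(stderr, "short read\n");
          return 1;
      }
      fclose(f);

      tjhandle tj = tjInitDecompress();
      int width, height, subsamp, colorspace;
      if (tjDecompressHeader3(tj, jpeg_buf, jpeg_size, &width, &height,
                              &subsamp, &colorspace) < 0) {
          fprintf(stderr, "%s\n", tjGetErrorStr());
          return 1;
      }
      unsigned char *rgb = malloc((size_t)width * height * 3);

      const int iterations = 600;  /* e.g. 10 seconds of a 60 fps stream */
      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (int i = 0; i < iterations; i++) {
          /* decode the same frame repeatedly to isolate decode cost */
          if (tjDecompress2(tj, jpeg_buf, jpeg_size, rgb, width, 0, height,
                            TJPF_RGB, TJFLAG_FASTDCT) < 0) {
              fprintf(stderr, "%s\n", tjGetErrorStr());
              return 1;
          }
      }
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      double mps  = (double)width * height * iterations / secs / 1e6;
      printf("%dx%d: %.1f frames/s, %.1f MP/s\n",
             width, height, iterations / secs, mps);

      free(rgb); free(jpeg_buf); tjDestroy(tj);
      return 0;
  }

Run it against a representative frame from the stream and compare against the same machine's software H.264 decode rate (e.g. an ffmpeg decode of the same content) to reproduce the comparison.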


You are comparing CPUs to GPUs right now.


He said software H.264 decoding, so CPU to CPU?


It may be different when comparing GPU to GPU because of differences in parallelization.


Is that the new apples to oranges?


Any desktop would be able to do so.

I wrote a multi-stream MJPEG decoder some years ago and it could run three MJPEG streams at 1920x1080@30 on a 2008 MacBook Pro (Core 2 Duo, 2.5 GHz).


That's only ~187 MP/s. A single 4K 2160p60 stream needs almost 500 MP/s, which might be at the edge of what's possible on a CPU today, but would make far more sense to do on a GPU.


It shouldn't be a stretch for a quad-core desktop processor today. Doubling the core count and increasing the clock speed by 40% compared to a mobile Core 2 isn't hard (rough numbers below). DDR4 instead of DDR2 means memory bandwidth is probably not an issue, and AVX can probably provide further headroom on the compute side.

And, of course, it's much easier to build a desktop with far more than four CPU cores these days.
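
As a rough back-of-envelope check, scaling the Core 2 data point upthread (the baseline and scaling factors here are assumptions, not measurements):

  /* Rough estimate: scale the ~187 MP/s measured on a 2-core, 2.5 GHz Core 2
   * up to an assumed modern quad-core; the factors are guesses. */
  #include <stdio.h>

  int main(void) {
      double baseline_mps = 187.0;              /* 3x 1080p30 on 2 cores     */
      double per_core     = baseline_mps / 2.0; /* ~93 MP/s per Core 2 core  */
      double cores        = 4.0;                /* quad-core desktop         */
      double clock_scale  = 1.4;                /* ~40% higher clock         */
      double estimate     = per_core * cores * clock_scale;
      printf("~%.0f MP/s, before any SIMD/IPC gains (target ~500 MP/s)\n",
             estimate);                         /* ~524 MP/s */
      return 0;
  }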



