
I kind of wish the author touched more on the one-frame-at-a-time aspect. Why aren't they copying more data into the device decoder? It seems somewhat silly on a non-realtime OS to grab one frame, request a 15ms wait, and then grab another frame, when your deadline is 16.6ms. That essentially guarantees a stuttering playback at some point, no?

Is there not a better way to do this? There must be. Why not copy 10 frames at a time? Is the device decoder buffer that small? Is there no native Android support for pointing a device decode buffer to a data ingress source and having the OS fill it when it empties, without having to "poll" every 15ms? So many questions.




What makes you think the decoder actually has enough RAM to receive more than a single frame?

Formats like H.264 are designed around hard constraints so that OEMs can build HW decoders with only maybe 1-3 frames' worth of internal memory (this includes the reference frames required for forward/backward decoding of B-frames). All to keep both cost and latency down. Having your decoding block add 5-10 frames of latency will cause many problems down the line.

It's really not a given that your decoding block will be able to take more than one frame at a time. 16ms really is plenty for video decode handling in most use cases.


> It seems somewhat silly on a non-realtime OS to grab one frame, request a 15ms wait, and then grab another frame, when your deadline is 16.6ms

A couple of reasons this isn't as silly as it seems:

1) ~All buffers in Android are pipelined, usually with a queue depth of 2 or 3 depending on overall application performance. This means that missing a deadline is recoverable as long as it doesn't happen multiple times in a row. I'd also note that since Netflix probably only cares about synchronization and not latency during video playback, they could have nearly any buffer depth they wanted, but I don't think that's a knob Android exposes to applications.
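To make the recovery behaviour concrete, here's a toy simulation of a pipelined queue with depth 2 (this is illustrative only, not Android's actual scheduling logic): a single frame that overruns the 16.6ms budget is absorbed by the buffered headroom, and only several consecutive overruns drain the queue and cause a visible stutter.

```python
FRAME_INTERVAL_MS = 16.6       # ~60 Hz vsync period
QUEUE_DEPTH = 2                # frames buffered ahead of the display

def simulate(decode_times_ms):
    """Return how many vsyncs are missed given per-frame decode times."""
    headroom = QUEUE_DEPTH * FRAME_INTERVAL_MS  # time banked in the queue
    misses = 0
    for t in decode_times_ms:
        headroom -= t                    # decoding spends headroom...
        headroom += FRAME_INTERVAL_MS    # ...each vsync pays one slot back
        if headroom < 0:
            misses += 1                  # queue ran dry: frame repeated
            headroom = 0
        headroom = min(headroom, QUEUE_DEPTH * FRAME_INTERVAL_MS)
    return misses

# One slow frame (25ms > 16.6ms) is absorbed with no dropped frames:
assert simulate([15, 25, 15, 15]) == 0
# Four consecutive overruns drain the depth-2 queue and cause a miss:
assert simulate([25, 25, 25, 25]) == 1
```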

2) The deadline is probably not the end of the current frame but rather the end of the next frame (i.e. ~18ms away) or further. The application can specify this with the presentation time EGL extension[1] that's required to be present on all Android devices.

[1]: https://www.khronos.org/registry/EGL/extensions/ANDROID/EGL_...
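A sketch of the timestamp arithmetic behind point 2: an app picks a vsync-aligned presentation time a couple of frame periods out and hands it to the extension (the helper name and `frames_ahead` parameter here are illustrative, not the Android API; the real call is `eglPresentationTimeANDROID`).

```python
FRAME_NS = 16_666_667  # one ~60 Hz vsync period in nanoseconds

def next_presentation_time(now_ns, last_vsync_ns, frames_ahead=2):
    """Timestamp of the vsync `frames_ahead` periods past `now_ns`,
    aligned to the vsync grid anchored at `last_vsync_ns`."""
    elapsed = now_ns - last_vsync_ns
    vsyncs_since = elapsed // FRAME_NS
    next_vsync = last_vsync_ns + (vsyncs_since + 1) * FRAME_NS
    return next_vsync + (frames_ahead - 1) * FRAME_NS

# With frames_ahead=2 the deadline is always more than one full
# frame interval away, matching the "~18ms or further" figure above:
deadline = next_presentation_time(100_000_000, 0)
assert deadline - 100_000_000 > FRAME_NS
```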


My guess is it's simple and prevents A/V desync?


He touches on that here. They have a catch-up mechanism that was thwarted by the same bug: https://news.ycombinator.com/item?id=25428127


My guess is DRM. Less data in the buffer means if someone is trying to rip a stream they won't be able to speed up the process by dumping the buffer.


How much does that speed (of copying the buffer) really matter though?





