
Will RDNA3 cards have an AV1 encoder? RDNA2 ones at least got the decoder. With encoders in place, video calls can start using AV1 too.



We only just got AV1 decoding on the latest gen of GPUs, so it might be too optimistic to hope for encoding in the next one.

In 2019, Twitch predicted roll-out of AV1 support by 2024, but then the pandemic happened: https://www.streamingmedia.com/Articles/ReadArticle.aspx?Art...:

> For head content, it's okay to streaming multiple formats because the viewership is huge, so streaming multiple formats, although increase our cost, it still actually save our traffic cost, so it's still worth while. But for the tail content, it's very different. We can only afford streaming one single format, so our strategy is currently still doing H.264 using hardware, high-density hardware solution, but we're hoping towards 2024, 2025, the AV1 ecosystem is ready, we want to switch to AV1 100%.

>

> Jan Ozer: Did you say 2024 and 2025?

>

> Yueshi Shen: 2024, this is our projection right now. But on the other hand, so as I said, our AV1 release will be, for the head content will be a lot sooner, we are hoping 2022-2023 we are going to release AV1 for the head content. But for the head content, we will continue to stream dual-format, AV1, H.264. But for the tail content, we are hoping towards five years from now to AV1, whole eco, every five-year-old device supports AV1. Then, we will be switching to AV1 100%.

The initial roll-out will probably just involve transcoding at sub-source quality settings to save bandwidth, with no encoding done by the streamer.

Right now, platforms have only talked about AV1 for transcoding purposes, so there's not the same urgency to add encoding support on consumer-level GPUs. Even though it'd be amazing to see AV1 streams and YouTube videos, it's mostly a server-side effort to cut costs and make things easier on people with limited data plans and cellular connections.


AFAIK, no commercially available hardware supports AV1 encode yet.

I guess it is quite expensive to implement an AV1 encoder.

Edit: I meant no consumer hardware supports it.



That is for data centers though.

Oh, I made a typo. There is still no consumer-grade product that supports AV1 encode.


DG2 is codenamed "Alchemist", and it's a mid-range card line that targets gamers, too.

You're still right that there's no standard, consumer-focused card, but that's "yet" – the product is just launching later this year.


Nvidia Orin (the SoC successor to Xavier), which is coming out this quarter, will have AV1 encoding (up to two streams of 4K60).

One can assume next-gen Nvidia GPUs (rumored for the end of this year) will have it as well.


I wonder what the timeline is from “encoder appears in consumer chip” to “people use it for video calls”.



I think it's been negative for every major codec thus far, except maybe VP9 (did any videoconferencing apps use VP9?).

H.263, H.264, VP8, HEVC, and AV1 certainly were all used by various videoconferencing apps before hardware blocks reached consumer devices.


Linphone, if you mess with it.


Well, RDNA3 GPUs aren't available yet either. But at some point encoders should become viable I suppose.


There is no additional benefit to using AV1 for video calls, since VP9 is good enough for that for the next 5 years.


Isn't it supposed to have a lower bitrate for the same quality? If there weren't any benefits, they wouldn't have developed a whole new codec.


You want low-latency encoding. It doesn't matter if you can halve the bitrate of the video if it takes a 3s delay to encode it. Something which, as far as I'm concerned, is done better with VVC.


And that's exactly why OP asked for a GPU encoder: those are low-latency, and if the resulting stream requires less bandwidth, there are fewer packets on the network. That means fewer retransmits in absolute terms (the network's relative error rate is the same regardless of how much data is transferred) and faster transmission on slower networks, i.e., in general also lower latency as a by-product. Latency, available bandwidth, and packet count are often coupled.


Low-latency encoders are not exactly known for producing bit-efficient streams.

Their purpose is to dump the frame into a compliant stream in the least possible time, so a compliant decoder can decode it back. They don't concern themselves with using fewer bits; if they do, that's nice, but it's not a deal-breaker.

So if the hardware encoders can produce VP9/HEVC/AV1 at roughly the same bit budget, it doesn't make sense to use the more complicated one. It does make sense to use the one with the lowest licensing fees, though.
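As a rough illustration of that trade-off, here's a minimal sketch (assuming ffmpeg with libx264 is installed, and using a hypothetical placeholder clip input.mp4): it encodes the same source at the same CRF with and without the zero-latency tuning, which disables lookahead and B-frames, so the low-latency run typically comes out noticeably larger for the same quality.

    import os
    import subprocess

    SOURCE = "input.mp4"  # placeholder clip; any short test video works

    def encode(output, extra_args):
        # Encode SOURCE with libx264 at a fixed CRF plus the given extra flags,
        # then return the resulting file size in bytes.
        subprocess.run(
            ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-crf", "28",
             "-an", *extra_args, output],
            check=True,
        )
        return os.path.getsize(output)

    # Default encode: lookahead and B-frames enabled, better bit efficiency.
    default_size = encode("default.mp4", [])

    # Low-latency encode: -tune zerolatency disables lookahead and B-frames,
    # which is what a real-time call needs but costs bits at the same quality.
    lowlat_size = encode("zerolatency.mp4", ["-tune", "zerolatency"])

    print(f"default:     {default_size / 1e6:.1f} MB")
    print(f"zerolatency: {lowlat_size / 1e6:.1f} MB")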


Some codecs might be more suitable for encoding video streams that are known ahead of time, exploiting the similarity of frames both before and after the current one to maximize compression (multi-pass encoding); they might also require a larger buffer. With real-time video calls you can't really do that. There is no codec-fits-all solution, unfortunately.
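For what it's worth, the way multi-pass encoding is invoked shows why it's off the table for live calls: the first pass has to read the entire clip before the second pass produces any output. A minimal sketch, again assuming ffmpeg with libx264 and a hypothetical placeholder input.mp4:

    import subprocess

    SOURCE = "input.mp4"  # placeholder; two-pass needs the finished file up front

    # Pass 1 analyses the whole clip and only writes rate-control statistics,
    # so there is no usable output yet - already a non-starter for a live call.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-b:v", "1M",
         "-pass", "1", "-an", "-f", "null", "-"],
        check=True,
    )

    # Pass 2 re-encodes using those statistics to spend bits where the
    # content needs them most.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-b:v", "1M",
         "-pass", "2", "twopass.mp4"],
        check=True,
    )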


I don't think bandwidth is the main challenge with video calls. Packet loss and latency cause far more quality issues.


Packet loss and increased latency are the main symptoms of using more bandwidth than the network can reasonably provide.


In live video, latency from encoding is a limiting factor for what types of compression you can use. It's not like bandwidth is the sole limiting factor here.


I can still see compression artifacts, so it's clearly not using enough bandwidth. It could probably compress better if it had a hardware encoder available, rather than needing to do real-time encoding in software (since it doesn't have a hardware VP9 encoder either).


I don't think you can just dismiss bandwidth concerns, especially when you consider video conferencing.


On the desktop you can probably spare a core or two for encoding just fine.


A core or two is not enough for high quality real-time encoding.


Says who? Software encoding works just fine for current video call applications, and also for most Twitch streams, without using more CPU than that.

If you mean AV1 specifically, do you think software encoders won't get anywhere near x264, which can do it in less than one core?
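For a rough sense of that claim, here's a minimal sketch (assuming ffmpeg with libx264 and a hypothetical placeholder input.mp4 at roughly 30 fps) that pins x264 to a single thread with real-time-friendly settings and reports the achieved frame rate:

    import subprocess
    import time

    SOURCE = "input.mp4"  # placeholder clip, assumed to be roughly 30 fps
    FRAMES = 300          # encode a fixed number of frames for the measurement

    start = time.monotonic()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-frames:v", str(FRAMES),
         "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
         "-threads", "1", "-b:v", "2500k", "-an", "singlecore.mp4"],
        check=True,
    )
    elapsed = time.monotonic() - start

    # Rough throughput, including decode overhead; it needs to stay at or above
    # the source frame rate for single-threaded real-time encoding to keep up.
    print(f"{FRAMES / elapsed:.0f} fps on one x264 thread")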


> If you mean AV1 specifically, do you think software encoders won't get anywhere near x264, which can do it in less than one core?

From what I understand, AV1’s encoding complexity is more like H.265 than H.264. It’d be more appropriate to look at the performance of x265 to see what might be possible.
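One rough way to feel that complexity gap is to time both encoders on the same clip at comparable settings; a minimal sketch, assuming an ffmpeg build with both libx264 and libx265 and a hypothetical placeholder input.mp4:

    import subprocess
    import time

    SOURCE = "input.mp4"  # placeholder test clip

    def timed_encode(codec, output):
        # Encode SOURCE with the given codec at its "medium" preset and
        # return the wall-clock time the whole run took.
        start = time.monotonic()
        subprocess.run(
            ["ffmpeg", "-y", "-i", SOURCE, "-c:v", codec, "-preset", "medium",
             "-crf", "28", "-an", output],
            check=True,
        )
        return time.monotonic() - start

    print(f"x264: {timed_encode('libx264', 'out_h264.mp4'):.1f}s")
    print(f"x265: {timed_encode('libx265', 'out_h265.mp4'):.1f}s")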


>Software encoding works just fine for current video call applications

Current video call applications look like crap

>also most twitch streams

Don't they use hardware encoding when possible? And higher bitrate than video calls.

>do you think software encoders won't get anywhere near x264, which can do it in less than one core?

Not at full quality


True, but it can still be pretty CPU intensive. Plus on laptops it's a bigger deal.



