X265 3.0 released (bitbucket.org/multicoreware)
138 points by htfy96 on Feb 5, 2019 | 119 comments



Having spent time reading the standards recently (and implementing some toy decoders) I've noticed a pattern in video codec designs, starting from the very first one:

    H.261 - simple, not much more than MJPEG with P-frames.
    MPEG-1 - basically '261 with B-frames, pretty simple
    MPEG-2/H.262 - MPEG-1 with more complex interlacing stuff
    H.263 - no more interlacing, better low-bitrate performance
    MPEG-4 - absurd complexity, most of which nobody ended up using anyway (3D scenes, face animation(!?), etc.)
    H.264 - back to regular video, with better I prediction
    H.265 - complex again?
Of course they do get more complex over time, but it seems like a cycle that alternates between incremental-yet-significant changes and huge redesigns that don't seem quite worth it.


Please don't use code formatting when it's not necessary; it's really hard to read on mobile.


Film grain synthesis of AV1 is apparently worth it: https://ieeexplore.ieee.org/document/8416572


Yes, they are all built on top of previous innovation and work, so it is a long evolution of video codecs. And there is H.266 coming in 2020.


It is an evolution, but technological evolution doesn't necessarily follow an exponential curve. It took many years to get to H.264, and it looks like many more for H.266. The next version may take even longer. Innovation is slowing, and that has pretty profound effects on a variety of industrial sectors.


H.264's intermediate frames are non-trivial. Some features do get dropped, but for the most part later standards are more complex.


Must say I'm massively surprised at how low adoption of 265/HEVC is in, ahem, the eyepatch-wearing part of the internet.


It's only recently become possible to actually use x265 for transparent encodes; previous versions of x265 removed film grain/digital noise to such a degree that the quality was worse than x264 at equivalent bitrates. Combine that with the massively increased costs in both encoding/decoding time (and worse x265 encoding tools/knowledge), and x265 simply wasn't worthwhile except for crappy re-encodes at super low bitrates for people with crappy internet.

Recently you've begun to see a lot of x265 releases though, usually with HDR, the only significant feature x264 can't provide.

For sub-4K SDR content there's really no incentive for pirates to switch to x265; it's just a nuisance. With torrenting, people don't pay for the extra bitrate x264 requires, so unlike for hosting services like Netflix, the ~20% bitrate savings of x265 are not important at all, especially compared to the other "costs" of compatibility issues and 10x longer encode times.


> It's only recently become possible to actually use x265 for transparent encodes; previous versions of x265 removed film grain/digital noise to such a degree that the quality was worse than x264 at equivalent bitrates.

I've noticed this recently - Slow RF16 encodes were _visually_ worse (as in, during casual watching I was asking if the source was really this bad, and when I went back to compare, it was not[1]). Going to Slower is not tenable (a movie encode already takes ~8 hours; going to Slower makes it take nearly 2 days, and I'm not even sure it would help), and going to RF14 would mean I'm as well off keeping the original file (in some cases, RF16 is already almost as big as the original).

[1] Random framegrab, not checking the labels until after - in every case, it was obvious which was which.
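
(For anyone wanting to reproduce this kind of test: RF is HandBrake's constant-quality knob, so a minimal HandBrakeCLI sketch of the encode described above might look like the following. The file names and the audio/subtitle flags are illustrative assumptions, not my exact command.)

    # Hypothetical sketch: x265 at constant quality RF 16, "slow" preset.
    # Audio and subtitles are passed through untouched.
    HandBrakeCLI -i source.mkv -o encode.mkv \
        -e x265 -q 16 --encoder-preset slow \
        --all-audio --aencoder copy --all-subtitles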


VR took to HEVC pretty much from the beginning for mobile platforms. While it did take longer to encode, the resulting file was much more size-friendly for mobile use, and even for the delivery bandwidth/download times to get it. I worked for a large post house and developed a backend pipeline for the artists to be able to submit their output to the farm to create optimized h.265 files for their target devices.


> previous versions of x265 removed film grain/digital noice

I noticed it as well; glad it's getting fixed.

Do I need to toggle/change any particular parameter, or do the defaults on newer versions fix it?


Eh frankly, x265 is not worth it unless you're an expert encoder or don't care about transparency. If you're not making multiple test encodes and fiddling with the encoding options, you should stay on x264.

If you're interested, here are the settings for four recent transparent expert x265 encodes: https://paste.ubuntu.com/p/PRZFyrgxzc/

You could compare them to find interesting options to review.


>I noticed it as well; glad it's getting fixed.

It's not that it's fixed; it's just that putting out an encoder that is good at PSNR and SSIM is like 10% of the work. Actually getting that film grain right takes forever. That is, if you care about that sort of thing; otherwise you can encode a digitally cleaned video with x265 and get a ~30% reduction in file size compared to x264.


> ~20% bitrate savings of x265

Ironically, most x265 movies I’ve seen appear to be HDR blu-ray rips with no attempt to reduce size at all, and they’re usually north of 50GB. You can get 80-90% bitrate savings on these without significant loss of quality.


What's actually in the Blu-Ray files that makes them so large?


Nothing interesting. It's just that it's inconvenient to mass-pirate at those sizes.


Lots of pixels and lossless multichannel audio.


It's actually closer to 50%, unless you are using hardware-assisted encodes, which aren't great currently... though I haven't tried on an RTX card yet, which is supposed to be better. So it does come down to time, though.


Anecdotal evidence, but doing H.264 to HEVC encodes on my GTX 1080 results in significantly smaller files than the Blu-ray rips.

I purchased the X-Files Blu-ray box set. Ripping it all to my hard disk resulted in roughly 1.7 terabytes of footage.

Encoding them with a likely higher-than-needed bitrate to preserve quality shrank them to a much more palatable 237GB (with subtitles): https://i.imgur.com/eJFuxaV.png

Bar removing the film grain (which I prefer, but not everyone does), I think the quality looks absolutely spectacular: https://i.imgur.com/xlWJEol.jpg
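
In case it's useful to anyone, a GPU encode along these lines can be driven from ffmpeg's hevc_nvenc encoder. The sketch below is illustrative only; the bitrate and file names are placeholders, not the settings used for the rips above:

    # Sketch: NVENC HEVC encode on an NVIDIA GPU via ffmpeg.
    # 8M is a placeholder bitrate; audio and subtitles are copied as-is.
    ffmpeg -i episode.mkv -c:v hevc_nvenc -preset slow \
        -b:v 8M -c:a copy -c:s copy episode_hevc.mkv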


The image you linked to exhibits horrible artifacts - but then again, you saved the screencap as a JPEG then uploaded it to a service that likely further butchers the image.


It looks like VLC saved it as a 1.5 MB PNG, then Imgur chewed it up and spat out a JPEG. Irritating.

Here's a shot of Star Trek: The Next Generation I took for a discussion here on HN a few weeks ago. It seems to have survived Imgur-ification much better:

https://i.imgur.com/VklppOK.png


Afaik Imgur doesn't process PNGs at all, while I'm not sure about JPGs (they might render previews, but I'd expect direct links to lead to the original file).

Besides the weird new social stuff, they're still by far the best image host I know.


The Nvidia GTX compression for HEVC is really bad, and only intended for use with live streaming. If you compare similar file sizes, or even half the size, for x265 (CPU) encoding, the quality is a lot better. As you note, the size is a bit larger than could probably be achieved... I have also encoded a bit using my GTX hardware, but the quality:size tradeoff was a bit much. Now when I do it, I just rip as much as my system can hold, then batch everything up and let it run for days/weeks.

Currently using an i7-4790k (nearly 5 years old), which doesn't have the current Intel hardware encoding options either. Considering upgrading to a Zen 2 Ryzen or Threadripper when they come out later this year.


It's nowhere near 50% for transparent encodes; I just checked. GPU encoding is currently incapable of creating transparent encodes.

Most x265 2160p->1080p encodes actually use higher bitrates than the x264 1080p encodes they replace, due to the increased noise detail in the UHD Blu-ray source.


I'm not using GPU encoding; I've tried it, and it generally sucks, winding up at about 2/3-3/4 the size of x264 for similar quality. My experience with h.265 via x265 is usually about half the size for similar quality. Again, somewhat subjective, as I really don't mind the blurry fallback of h.265 so much, and don't notice it nearly as much.


The standard "aims" for a 50% reduction; in reality that is only the best-case scenario, with x265 encoding 4K video at a low bitrate (~2Mbps).

I have yet to see a 50% reduction at any bitrate with 1080P videos.


When you talk about film grain, are you referring to older movies, or also to recent ones? I usually don't notice it in recent ones.


The Marvel movies are notorious for oodles of film grain. I think it looks horrible and adds nothing to the movie.


x265 never removed film grain; you need to enable grain retention via the --tune grain option (or -tune when using it via ffmpeg).

There's a new --tune animation option in 3.0 if you're encoding anime/cartoons.
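
Via ffmpeg, that would look something like this (the CRF value and file names here are just examples):

    # Example: x265 via ffmpeg with grain retention enabled.
    # -tune grain trades some compression efficiency for preserving grain.
    ffmpeg -i movie.mkv -c:v libx265 -preset slow -crf 18 \
        -tune grain -c:a copy movie_x265.mkv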


Good points. Thanks for explaining


The scene values quality. It's why their file sizes seem large to some. x264 at CRF 18-21 still can't be matched by x265 at any bitrate.

I'm a videophile and have been using Avisynth (Vapoursynth mostly these days) since the DivX vs Xvid wars and every year I give x265 another chance only to be disappointed. There's also the hardware compatibility issue but regardless the quality would still have to be there.

Take a look at the most popular outstanding bugs and you'll notice many of them are regarding quality [1]. The doom9 forums are also a good source; many have posted comparisons proving the point.

[1] https://bitbucket.org/multicoreware/x265/issues?status=new&s...


Really, it's blurry even if you throw a bunch of bitrate at it? Seems odd.

The one area with significant uptake that I've seen so far is anime, which makes sense if regular content gets blurry. The quality there is actually really great, and it can produce files that are extremely small at good quality (e.g. 25 minutes at 1080p with multi-language AAC tracks is often around 250-300 MB, with Blu-ray quality).

(Oh, also there is a poster who used to do a lot of sci-fi releases in x265; it looks OK, although not Blu-ray quality, and you can get a 45-minute episode in 1080p quality in about 700 MB.)

Personally, I've tried encoding Shadowplay recordings to x265 a couple of times (via ffmpeg, both through Handbrake and by directly invoking ffmpeg on my server) and I get solid green video. I think it's the variable frame rate: it works OK if I pull it down to a constant framerate first, but that can produce juddering if I get framedrops while playing.
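
If anyone hits the same green-video problem, forcing a constant frame rate in the same ffmpeg pass as the encode is one workaround; the 60 fps target below is an assumption based on typical Shadowplay captures:

    # Sketch: force constant frame rate, then encode with x265.
    # -vsync cfr duplicates/drops frames to hit the fixed -r rate.
    ffmpeg -i shadowplay.mp4 -vsync cfr -r 60 \
        -c:v libx265 -crf 20 -preset medium -c:a copy fixed.mkv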


Anime is comparatively easier to encode due to the absence of grain, and large blocks of colour do very well with the improved h265 efficiency. Not to mention a lot of the psychovisual model imported from x264 had years of anime tuning in it.


> The scene values quality. It's why their file sizes seem large to some.

So many groups are still releasing shows & movies in XviD that I don't think that's necessarily true; you'd think by now everyone would be on x264.


I was referring to The Scene [1]. They stopped releasing in Xvid around ~2012. What you're talking about are known as "peer groups". These groups/individuals do as they wish (to put it kindly).

[1] https://en.wikipedia.org/wiki/Warez_scene


I would have thought it was more about preferring faster pretimes, as x264 encoding is much faster than x265? At least in the case of TV shows; maybe not so much movies, where there might be a more exclusive source.


>There's also the hardware compatibility issue

Seems to be getting better (slowly). I just bought an Asus Tinker Board for 50 bucks that supports H.265.


It comes down to the type of degradation that happens and device support.

h.265 gets blurry, h.264 gets blocky. Higher-quality encodes, especially of film, will degrade worse in h.265. Hardware support is also much worse overall for h.265.

My Nvidia Shield TV is the only small/arm device I've used that can reliably playback 4K h.265 video, specifically network content from my NAS (Kodi).

h.265 is much more common as an option for TV feeds; the quality is usually lower than Blu-ray anyway, and the smaller size leads to better near-term gratification: you can often get a 1080p h.265 that's smaller than the 720p h.264 and looks significantly better.

I tend to prefer it for my own rips as I get a much smaller file size, which is starting to get scarce on my NAS. I prefer the blur to the blockiness myself, and my vision isn't the greatest anyway. YMMV.


Also, hardware-assisted compression has really crappy quality for the file size you get. Using a software encoder like x265 is pretty much required, and even then it takes a LONG time, relatively speaking. Most are unwilling to do so for the gains in storage.


Hardware H265 encoding has always struck me as a completely bizarre feature. So... you're concerned about file size, but not concerned enough to use software encoding?

And GPU encoders are really bad... you're probably better off running x264 at a faster preset than running NVENC H265.


My dad sometimes downloads x264 in 720p by accident, and he says the 1GB file is too big, so before he watches it he opens his old copy of the official DivX converter software and encodes it to 480p with some really low-bitrate 'fast encode' setting that turns his 1GB mp4 into a 300MB avi. The blocks are so huge and noticeable it makes me think he's watching .RM files from the 90s, lol. But hey, at least he saves room on his NAS, right?


I am using x265 hardware encoding... I'll rip it all, queue it up and just let it run for days as needed.

Agreed, GPU encoders are pretty bad... All I said was that I can't comment on RTX, which is supposed to be much better than GTX was.


I think gamers/streamers get the most out of hardware h265 encoding, since they aren't as likely to just add more hard drives. And you probably can't spare the CPU power for software encoding, even at fast presets.


Most streamers use a dedicated capture card to do encoding/streaming.


Probably not. The established streamers might.


Yeah, my PC with a 6700k and GTX 1080 really struggles with x265 4K playback, which is an incredibly laggy, screen-tearing mess. On my $50 Android box, which has a hardware decoding chip for it, the same video files run perfectly, butter smooth.


>The eighth generation of PureVideo HD, introduced with the GeForce GTX 1080, GTX 1070, GTX 1060, GTX 1050 Ti, GTX 1050 and GT 1030, all Pascal (microarchitecture) GPUs, adds full hardware decode of the HEVC Main12 profile to the GPU's video engine.

Are x265 files different then, or is it just your software?


Likely his software. I'm also on a GTX 1080 and it works perfectly (though I also have an 8700k).


software. I have a 1070, and hardware decoding (using mpv) works excellently.


Could you share your h.265 Handbrake settings for 1080p?


Many devices, especially older ones, only support h264 in hardware, which makes a huge impact on battery life. h265 would need software decoding, and thus a huge drop in battery life, on those devices.


This right here is the main reason. Pirate groups continued to do xvid encodes for many years after h264 was unquestionably superior for the exact same reason.

The big picture here is that we've hit the point of diminishing returns with video codecs. Up until h264, with lots of work you could improve codecs to get significant gains in compression with very little increase in complexity. That's pretty much done though. If we had to keep using h264 for videos for the rest of eternity it wouldn't be all that bad. Sure, eventually we'll want something a bit better most of the time, but when it's a choice between a 16 GB h264 encode and a 8 GB AV1 encode that takes 10x the CPU time to encode and decode, it's not unreasonable to think we might choose the former for many applications.


It’s not 10x, currently more like 1000x the time to encode and 10x to decode.


... or just plain impossible if you don't have a recent beefy desktop-class cpu.


Still very poor hardware decode support. What good is saving a few GB of HD space if your phone runs out of battery 3x as fast because it's having to decode in software?


That whole scene perplexes me. They're still obsessed with their 10bit h.264 videos which just don't even play on a Roku or smart TV.

I've been using x265 for my personal rips for about two years now.


I believe you are talking about the 'anime' encoders; as for 'scene' releases, they are all in 8bit h264 AFAIK.

Also, when it comes to anime, a quick search on Nyaa suggests 10bit is used on less than a tenth of releases, with h265 being in that same range as well.

I dare say 8bit h264 is still the undisputed king of piracy.


Simple guess/answer: at scale, upload time/bandwidth (2x for x264) is massively cheaper than encoding time (100x for x265).

Also: if you’re doing 2160p 10bit HDR, there’s no chance those Rokus are going to play them anyway.

Disclaimer: not involved with the scene at all, so this is mostly educated speculation.


Decent x265 encoding is not that much slower than x264. Going for roughly the same perceived quality, my encode times went up about ~50-70% when I switched to x265. With how much better it handles grain in older films, it's well worth the extra time.

But I imagine those in the scene are more interested in being the first with a passable 1080p Blu-Ray rip than they are in cramming as many high quality movies on their iPad as possible before a long flight.


They explained their decision, I think in late 2017 or early 2018. Reducing bitrate or lowering encode time is not their priority. Even at the same bitrate, x264 still beats x265 in terms of video quality. There are some problematic scenes x265 just doesn't handle well.

If they could reduce bitrate while getting the same quality, they would. If they could get even better quality at the same bitrate, they would do that too. The problem is that at their quality requirement, x264 is still king, and as of late 2018 that still seems to be the case. (Tuning an encoder is a goddamn insane job.)


> Also: if you’re doing 2160p 10bit HDR, there’s no chance those Rokus are going to play them anyway.

I have some 10bit 2160p Atmos remux movies that are huge 50GB MKV files. I stream them over my gigabit LAN to a no-name $80 Android box I got off some obscure Chinese marketplace, and it plays them back perfectly with full HDR, and my amp gets the Atmos signal too.

If your device has hardware based x264/x265 chips it can decode them with very little processing power.


10bit h.265 has wide hardware support. The problem is 10bit h.264, for which hardware support is borderline nonexistent.


You do encoding once, you pay for space/bandwidth forever. Same logic that Youtube uses.


For the scene, bandwidth costs are largely distributed, so they don't matter.

Being first with the release does. And then you don't want to spend 10x-50x more time on your encode.

The incentives here are nothing like YouTube.


>10x-50x more time on your encode.

x265 is nowhere near that much slower unless you are using a very slow preset. Even the slow preset is only about 2-3x slower than x264, in my experience.


They make 4k hdr Rokus


Unlicensed content providers (this is the preferred politically correct term) have been doing HEVC UHD (4K) from day one.

Most TV shows and movies are available in HEVC at 720P and 1080P now but all it offers is smaller file sizes (not better quality) so most people don't care about it.


My laptop can't decode 1080p x265/hevc but works fine with 2.7k x264


You generally don't want to (lossy) re-encode released media (be that audio CDs, network streams or Blu-rays). One thing is using a lossy codec to make the final "print"; it's another to try and manipulate that version further with lossy transformations.

That said, it appears 4K HDR will see some x265 uptake.


Same. It's a great space savings at (to me) a not very perceptible loss of quality. PSARips is the only scene group I know of doing regular x265 releases.


I know very little about these formats, but is H.265 patent-encumbered? If so, maybe people don't like supporting it?


It is, just like H.264 and XviD (MPEG-4 ASP) before it. Pirates do not care about patents, only the technical aspects.


Hopefully they'll adopt AV1 instead.


I wonder if there is an article that compares x265 and https://github.com/intel/SVT-HEVC (in terms of speed, feature set, required hardware, software license, etc.)?



Can't wait for fast AV1 encoders, and especially hardware support.


From completely personal opinion and that of close friends:

x265 movies look way cleaner while having way smaller file sizes.

Also, it's the best tradeoff between quality/filesize by a factor of 2 or more. I'd personally notice artifacts in x264 movies unless they were over 7GB in size. With x265 I'll grab a 3GB version of a movie in a heartbeat (well, a few heartbeats on gigabit fiber :) and not worry about picture quality at all.

At 15 gigs or more for a 1080p movie, x264 is pretty much perfect, unless you really want to see the film grain.


I'm re-encoding all my drone footage from H.264 (hardware encoders on the drone) to x264. It's about 1/4 the size with the same visual quality, so it's a win-win.
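
A software re-encode like that is a one-liner with ffmpeg; the CRF value and file names below are illustrative assumptions, tune to taste:

    # Sketch: re-encode hardware H.264 drone footage with software x264.
    # A slower preset plus CRF gives far better compression than the
    # drone's fixed-function encoder at similar visual quality.
    ffmpeg -i DJI_0001.MP4 -c:v libx264 -preset slower -crf 20 \
        -c:a copy DJI_0001_x264.MP4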


x265 HEVC Encoder


Great to see Dolby Vision in the features there - I've been very impressed with it vs HDR10.


Why not admit AV1 is all around better and work on it instead


What value do you think your comment adds to the conversation? H.265 is a widely-used standard which millions of devices support and since it's older than AV1 anyone who cares about, for example, iOS users has the choice of using either H.265 or H.264 but not AV1. Even if you're using Chrome or Firefox on a desktop, you often don't want AV1 unless you have a burly CPU, aren't running on battery, and don't mind hearing your CPU fans (a quick search will show this is true even for WebM because millions of people still use devices which have hardware H.264 but require CPU-based decoding for WebM). If that's not true for you, a high-quality open-source H.265 encoder is a great tool.

Here are the current support stats — note the distinct lack of a common format newer than MPEG-4 even before you consider hardware support:

https://caniuse.com/#feat=mpeg4

https://caniuse.com/#feat=hevc

https://caniuse.com/#feat=webm

https://caniuse.com/#feat=av1


A lot of your argument is based on H.264, which is not part of the question. H.265 is not extremely common and also hard to CPU decode.


I’m aware of that but did you notice where I mentioned iOS and linked to https://caniuse.com/#feat=hevc at the bottom? That was why I mentioned the lack of a common option beyond MPEG-4 — iOS has H.265 support in hardware so if you want a newer codec that’s your only option for a fairly large group of people, especially in the United States, and it’s where battery life matters a great deal.

https://www.scientiamobile.com/growing-support-of-hevc-or-h-... has it at 80% of iOS and 60% of Android, which is a LOT better than 0% for AV1, and might even allow only doing 2 formats if enough of the devices with WebM also support HEVC.


I think you're better off going with VP9 as your next-gen codec for now rather than HEVC despite the lack of iOS support.

That's what Twitch is doing, for example. They say VP9 is a particularly good fit for them because they're getting a 25% bitrate reduction over H.264 in live streaming, they don't want the licensing headaches of HEVC, and because most of their viewers are using Firefox or Chrome on the desktop.

Apple joined the Alliance for Open Media so they'll be adding AV1 support eventually. It'd be good if they also added VP9 support, especially because some of their laptops already have hardware decoding for VP9 in the CPU. Most Android phones have VP9 support so iOS joining in would be great.


If you can ship a custom decoder, don’t care about battery life, and have enough CPU headroom, yes, VP9 is an option. For everyone else, until AV1 hardware ships it’d be better to stick with H.264 than use VP9 if you can’t figure out a way to make H.265 work in your business.


> don’t care about battery life

VP9 has broader decoder support than H.265. Even when you don't have a hardware decoder, the software decoding for VP9 is not bad. I play VP9 video in VLC on my iPhone 7 and I have survived to tell the tale. It's not all doom and gloom.


Yes, an iPhone CPU is usually fast enough not to drop frames but if you care about battery life it’s not competitive. Again, I’m not saying that VP9 is bad but rather that anyone serious needs to support both formats until AV1 has MPEG-4 levels of pervasiveness, especially if you’re publishing on the web.


No, if you're publishing on the web using VP9 is the better choice. VP9 is supported in more browsers and there's no point having the licensing hassle of H.265. And as Netflix found, VP9 outperforms H.265 by 12% with the right encoder:

https://medium.com/netflix-techblog/performance-comparison-o...

So the sensible strategy is H.264 for devices that don't support VP9 and VP9 everywhere else.


>Apple joined the Alliance for Open Media so they'll be adding AV1 support eventually.

They joined an organisation, yet their logo isn't even used.

Compare that to something they fully support and want to move forward with: VVC [1]

[1] https://www.mc-if.org


Apple joined AOMedia at the highest membership level. If Apple didn't care about AOM then it would have been simpler not to join at all.


For personal media libraries, patents don’t matter while quality, size, and hardware support do. So HEVC > VP9.


It isn't a question of patents, it's a question of licensing. VP9 and AV1 patents are licensed under royalty-free terms and free for all use-cases. Royalty-free formats are better for the health of the internet and the computing industry generally. I'll always use royalty-free formats where I can. It's just simpler.


Better? There isn't AV1 hardware decoding, and encoding takes much, much longer than h.264/h.265. We are only now starting to be able to target devices with h.265 reliably.


It is surely better in the long term. AV1 encoders still need a lot of work to be usable in real time, though, which is critical for something like WebRTC.

For example, rav1e still needs multithreading support: https://github.com/xiph/rav1e/issues/132

And once encoders are ironed out, hardware support is also needed.
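
For anyone who wants to try the current state of things, a basic rav1e invocation looks roughly like this (flag names per the project README at the time of writing; the speed level is an arbitrary choice):

    # Sketch: encode a raw .y4m clip with rav1e at speed level 5.
    # Higher --speed values trade compression efficiency for encode time.
    rav1e input.y4m --output output.ivf --speed 5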


Can you post a link to some data so we can compare both, please?



I read the results at texpion (the first 4 results from that DDG query) a few times, and I must say that I'm confused. I was expecting to see some particularly good results in favor of AV1, but the "Conclusions" section on page 1 states that:

* both x264 and x265 take less time to encode and produce smaller files (lossless compression)

* AV1 takes 160x-190x more time than x264 (lossless compression)

* AV1 can't take advantage of multi-threading the same way x264 and x265 can. Neither for encoding nor for decoding.

* AV1 requires 3.5x-6x more CPU power to decode than x264

I just fail to see how AV1 is "all around better". Can you share another link? Maybe this time more specific so I can read the data you used to get to that conclusion.


> AV1 can't take advantage of multi-threading the same way x264 and x265 can. Neither for encoding nor for decoding.

Dav1d is an AV1 decoder which takes advantage of multi-threading:

https://code.videolan.org/videolan/dav1d

https://medium.com/@ewoutterhoeven/dav1d-0-1-0-release-the-f...

BitMovin and Intel both have AV1 encoders which can take advantage of multiple cores. BitMovin demonstrated live AV1 video almost two years ago:

https://bitmovin.com/bitmovin-supports-av1-encoding-vod-live...

https://bitmovin.com/constantly-evolving-video-landscape-dis...

https://github.com/OpenVisualCloud/SVT-AV1
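
As a concrete illustration, dav1d's CLI exposes its threading directly. The thread counts below are arbitrary, and the exact flag names depend on the dav1d version:

    # Sketch: multi-threaded AV1 decode to raw .y4m with dav1d.
    dav1d --framethreads 4 --tilethreads 2 -i input.ivf -o output.y4m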


Neither the article nor my comment says that AV1 can't do multithreading. What the article at texpion says is that it can't make CPU cores sustain high loads the way x264/x265 do.

That said, I have no idea whatsoever about video codecs; I'm just saying what I understood from the link I was provided with.


dav1d can make use of the CPU at sustained high loads if you want it to. It can do over 30 frames a second on even a single core of an Apple A12X, and over 100 frames on an A12X using multiple cores.


AV1 isn't mature enough to make it worth sinking a bunch of time into optimizing an encoder for yet. There will eventually be multi-threaded AV1 encoders but the format needs to stabilize first. There is no guarantee that a file you encode today will play on anything in the future. It is in no way ready for primetime, maybe in another 3-5 years it will be at the enthusiast/pirate stage and commercial usage in 7-10 years.

Also, do bear in mind that the primary use-case here is for Youtube to cut down their bandwidth consumption, and other similar users who don't mind doing a one-time encode that either takes a while, or needs to be run with process-level parallelism (different files at the same time). Multi-threaded encoders are not really a primary design goal here.

The format is practically worthless until there is a hardware decoder for it; nobody will software-decode AV1 on their smartphones or laptops. Nobody is going to design a hardware encoder until the standard stabilizes. Google can do all the Youtube encodes they want, but they are going to have to keep developing and serving H264 for a long, long time.


> nobody will software-decode AV1 on their smartphones or laptops

Dav1d (https://code.videolan.org/videolan/dav1d) runs fine on my laptop for 1080p AV1 decoding. Your mileage may vary.

> Nobody is going to design a hardware encoder until the standard stabilizes.

NGCodec will release a hardware encoder this year. They use FPGAs for their encoders. Twitch uses NGCodec for their live VP9 streaming:

https://ngcodec.com/news/2019/1/7/ngcodec-announces-av1-supp...

https://blog.twitch.tv/how-does-vp9-deliver-value-for-twitch...


It might run fine, but it won’t compare to the hardware decoder when running on battery.


I have an insufficiency of pearls to clutch.


I thought they nailed down the bitstream already?

"On 25 June 2018, a validated version 1.0.0 of the specification was released. On 8 January 2019 a validated version 1.0.0 with Errata 1 of the specification was released."


The bitstream is an important milestone but it normally takes years of optimization for a codec to really hit peak quality. There are many decisions to be made during the encoding process which have an impact on file size, quality for specific content types, required decoder resources, etc. The first generation implementations often produce files which are notably larger because they're doing the simplest, most conservative encoding process and subsequent tools may have far more sophisticated optimizations.

If it's more familiar, think of it as similar to saying that the x86 or ARM instruction set has been finalized: the first generation will work but over time every component will be optimized to improve certain areas and there will be tailored versions for specific applications (e.g. video codecs used in chat have hard latency requirements while Netflix can afford a LOT of optimization time for a file which will be streamed a billion times).


I agree with all of that. My point is that AV1 is at the point of maturity where we need to sink maximum effort into improving the encoders, and what you're saying seems to support that.


In the end AV1 will likely have about the same level of adoption as VP9... pretty much Google/YouTube only. Allowing Google to monopolize video standards is not something the hardware world is going to accept. HEVC already has near-universal hardware support. Why bother with supporting AV1?



Google, Netflix, Amazon... so basically the overwhelming majority of video on the Internet. What's left is broadcast video, where HEVC support can be relegated to set-top boxes.

So yes, hardware world is going to accept AV1.


VP9 is the same generation as HEVC and performs about the same so the only reasons to favor one over the other would be financial and client support. For many providers this means 3 copies: VP9 for Android, HEVC for iOS/Mac, and H.264 for older devices.

AV1 is newer and should offer improvements over both previous generation codecs. Hardware support will be a significant factor but it’s looking as if in a few years you’ll have the option to use one better format on most modern devices.


The lack of hardware support in my home theater ecosystem makes AV1 a non-starter for me.


By the same criteria, HEVC is a non-starter for me. Unfortunately for me, the world around me didn't stop developing just to fulfill my wishes.

Expect the same with AV1.


h.264 is still completely viable, it's just a matter of storage. I have no burning desire to go back and re-rip my older blu-rays in HEVC.


> The lack of hardware support in my home theater ecosystem makes AV1 a non-starter for me.

What is a home theater ecosystem? If you're using an HTPC with Kodi or Windows, you'll either soon have good realtime AV1 decoding or already have it.

It takes time, but it is coming along faster than other rollouts over the last couple of decades. Microsoft had a usable AV1 decoder in the Windows Store basically a few weeks after the bitstream was frozen; and as dav1d is hooked into more applications, the decode performance on commodity hardware (which can be cooled passively) is becoming good enough to be practical.


I use my Nvidia Shield as well as the built in "smart" functionality on my TV, and a Roku Premiere+ in the basement. I also use an iPad Air to watch TV shows on the treadmill. Since a large chunk of the TV and movies I watch are ripped from disks I bought, I need to stick to formats these devices support. Apple put out a surprisingly competent software HEVC decoder for my iPad Air, and everything else here supports it natively.


Am I the only one who came to TFA looking for a screaming ThinkPad with a 7-row keyboard? <snaps fingers>

(downvotes—rly? That really was my first thought when I saw the title ...)



