rb2k_ | 2 years ago

In the past (with H.265/H.264 at least), hardware encoding always ended up with visibly worse quality (and often even bigger file sizes) compared to a software encoder like x264/x265.

Do you happen to know if that's still the case?

(I guess for use-cases such as live streaming it doesn't matter that much, but for video that ends up in some archive, it's probably less acceptable)
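
One way to check on current hardware: encode the same clip both ways at a matched bitrate and score the outputs with VMAF. A minimal sketch with ffmpeg, assuming a build with libvmaf and an NVIDIA GPU (swap hevc_nvenc for hevc_qsv or hevc_videotoolbox on other hardware); filenames and bitrate are placeholders:

  # software x265 at a fixed bitrate
  ffmpeg -i input.mp4 -c:v libx265 -preset medium -b:v 4M sw.mp4
  # hardware HEVC at the same bitrate (NVIDIA shown)
  ffmpeg -i input.mp4 -c:v hevc_nvenc -preset p5 -b:v 4M hw.mp4
  # VMAF: distorted input first, reference second
  ffmpeg -i sw.mp4 -i input.mp4 -lavfi libvmaf -f null -
  ffmpeg -i hw.mp4 -i input.mp4 -lavfi libvmaf -f null -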

blihp | 2 years ago

That's usually the case: hardware encoders tend to trade quality for lower transistor counts and faster frame processing, while software encoders have the luxury of spending more compute per frame to go for higher quality.

FreezyLemon | 2 years ago

Yes, a YouTuber named EposVox released a video on AV1 hardware encoding when the first Intel dGPUs with support for it were released: https://www.youtube.com/watch?v=ctbTTRoqZsM

Later on in the video, there are some graphs comparing Intel's AV1 encoder to SVT-AV1 at different speed presets. Even one of the faster presets (9) comfortably stays above the hardware encoder's quality according to VMAF, and if you don't need real-time speeds you can lower the preset to get further ahead of the hardware encoder. (BTW: that video is >1 year old now, and SVT-AV1 has had some significant updates in the meantime, so the software side is probably looking even better today.)
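
For anyone who wants to reproduce that kind of comparison, the preset trade-off is easy to try through ffmpeg's libsvtav1 wrapper. A sketch, assuming a reasonably recent ffmpeg that passes -crf through to SVT-AV1; the preset and CRF values are just examples:

  # fast preset, close to real-time on a decent CPU
  ffmpeg -i input.mp4 -c:v libsvtav1 -preset 9 -crf 35 p9.mkv
  # slower preset, better quality per bit
  ffmpeg -i input.mp4 -c:v libsvtav1 -preset 4 -crf 35 p4.mkv
  # score an encode against the source (needs ffmpeg built with libvmaf)
  ffmpeg -i p9.mkv -i input.mp4 -lavfi libvmaf -f null -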

bick_nyers | 2 years ago

It's around 5% (maybe 10%?) larger file sizes for the same visual quality at the moment. For archival I think that's fine, as storage is cheap, but it can still be a problem when you pay for outbound bandwidth to users.
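
Back-of-envelope, with made-up but plausible numbers: at 1 PB/month of egress billed at $0.05/GB, a 5% size increase costs an extra ~$2,500/month:

  # hypothetical rates: 1,000,000 GB/month egress, $0.05/GB, 5% overhead
  echo '1000000 * 0.05 * 0.05' | bc   # => 2500.0000 (dollars/month)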

adgjlsfhk1 | 2 years ago

Hardware encoding gives up a little quality and file size, but hardware encoding of AV1 will generally beat software encoding with x264 on all axes.
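
A sketch of that comparison, assuming a GPU with an AV1 encoder (Intel Arc via QSV shown; av1_nvenc is the RTX 40-series equivalent); bitrate and filenames are placeholders:

  # hardware AV1 (Intel QSV)
  ffmpeg -i input.mp4 -c:v av1_qsv -b:v 4M hw_av1.mkv
  # software H.264 via x264 at the same bitrate
  ffmpeg -i input.mp4 -c:v libx264 -preset medium -b:v 4M sw_h264.mp4
  # at matched bitrate, the AV1 encode should score noticeably higher on VMAF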