Interesting charts, but this is all completely meaningless without image quality comparisons. I can easily use 50% less bandwidth than Netflix's H264 streams too, even with H264, by just cranking up the compression & dropping the bitrate.
Presumably nothing jumped out at the author as being worse, but come on, how can you have a whole section on why AV1's regression on Bojack is actually a good thing because the quality is way higher, and then not show any quality comparisons?
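For what it's worth, the missing comparisons could at least be scored objectively with VMAF, Netflix's own perceptual-quality metric. A minimal sketch, assuming an ffmpeg build with libvmaf enabled; the file names are hypothetical:

```python
# Sketch: score one encode against another with VMAF. Assumes an ffmpeg
# build compiled with libvmaf; the file names are hypothetical.
import subprocess

def vmaf_command(distorted, reference):
    """Build an ffmpeg invocation that scores `distorted` against `reference`."""
    return [
        "ffmpeg",
        "-i", distorted,      # encode under test (e.g. the AV1 stream)
        "-i", reference,      # reference (e.g. the H.264 stream, or the source)
        "-lavfi", "libvmaf",  # compute VMAF across the two inputs
        "-f", "null", "-",    # discard decoded output; the score is printed to the log
    ]

# subprocess.run(vmaf_command("av1.mp4", "h264.mp4"))  # needs ffmpeg with libvmaf
```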
Taking a screenshot of Netflix on my device results in a black square. I don't know if this applies at lower Widevine levels, but if it doesn't, the quality will be much lower, as Netflix doesn't serve 720p+ video unless there is a protected DRM path.
Fascinating, thank you for this analysis! Currently pining for the release of an updated Apple TV, which will have an SoC capable of hardware AV1 decode.
It'd be great to hear from someone at Netflix about the unexpected Bojack Horseman results. I'd bet that Netflix just isn't yet taking advantage of AV1 features designed especially for this kind of animation and synthetic content.
> the unexpected Bojack Horseman results. I'd bet that Netflix just isn't yet taking advantage of AV1 features designed especially for this kind of animation and synthetic content.
While the percentages look scary, it's only a slight difference (60 kbps!) and still around 1 Mbps average, but with a significant quality boost (very crisp lines and near-perfect quality). I bet Netflix could encode at nearly half that bitrate and stay similar to HEVC in quality, but I'm pleased they seem to have made a good tradeoff here.
It's actually quite amazing the quality that AV1 delivers at such low bitrates across the board. I've said it before, but AV1 is almost magical, which I think is behind the lack of enthusiasm for VVC/H.266; is anyone even using that? I've yet to actually see it in the wild.
> Device Support: Hardware decoding for AV1 isn’t on every device yet.
By now it should be in most devices that aren't outdated by even average standards. And it's worth mentioning that for devices that don't have hardware decoding, dav1d does an excellent job of decoding it on the CPU.
The problem is more with hardware encoding. That's present only in recent hardware generations (the last couple), and even then, AMD for example has an aspect-ratio limitation bug in their AV1 hardware encoder (which requires adding black bars to work around) that's only fixed in RDNA 4, which isn't available in their APUs. So it won't be fixed in APUs until UDNA reaches them (they didn't fix it in RDNA 3.5 chips).
Was once out in a remote area on an 800 kbps DSL connection. YouTube couldn't stream, Prime Video couldn't stream. Netflix worked fine. Years later, I remain impressed at their uniqueness.
Okay, so AV1 has lower bitrate. I can encode any video format at arbitrary bitrates, but that metric is not useful on its own. An article about how AV1 requires less bits for the same or improved perceptual quality would have been far more interesting.
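An equal-quality comparison like the one this comment asks for is easy to sketch: interpolate each codec's rate-quality curve at a target VMAF and compare the required bitrates. The (bitrate, VMAF) points below are entirely hypothetical, purely to illustrate the calculation:

```python
# Sketch: compare two codecs at *matched* perceptual quality instead of
# raw bitrate. All (bitrate_kbps, vmaf) points here are hypothetical.

def bitrate_at_vmaf(points, target_vmaf):
    """Linearly interpolate the bitrate needed to hit target_vmaf.

    points: list of (bitrate_kbps, vmaf) pairs, sorted by bitrate ascending.
    """
    for (b0, v0), (b1, v1) in zip(points, points[1:]):
        if v0 <= target_vmaf <= v1:
            t = (target_vmaf - v0) / (v1 - v0)
            return b0 + t * (b1 - b0)
    raise ValueError("target VMAF outside measured range")

# Hypothetical rate-quality points for one title:
h264 = [(1500, 85.0), (3000, 92.0), (6000, 96.0)]
av1  = [(600, 85.0), (1200, 92.0), (2400, 96.0)]

target = 92.0
saving = 1 - bitrate_at_vmaf(av1, target) / bitrate_at_vmaf(h264, target)
print(f"AV1 saves {saving:.0%} bitrate at VMAF {target}")
```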
Perhaps the most important question, which I have yet to see anyone point out: those bitrate numbers are appalling! 2-4 Mbps average bitrate? For a service you are paying for? I knew Netflix had gotten bad, but this is worse than I thought. Even some high-end YouTube content does better than that. It should be 8 Mbps minimum. And at this bitrate the difference between H.264 and AV1 won't be so obvious.
> Those bitrate numbers are appalling! 2-4 Mbps average bitrate?
While those may sound low, what I'm thinking is that Netflix didn't see any benefit in perceptual quality (or VMAF scores) from sending more bits down the pipe and increasing their bandwidth bill.
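For scale, here's a back-of-the-envelope sketch (figures hypothetical) of what those average bitrates mean in data volume per viewing hour, which is roughly what drives the bandwidth bill:

```python
# Back-of-the-envelope: data transferred for one hour of video at a given
# average bitrate. All numbers are hypothetical, just to show the scale.

def gb_per_hour(bitrate_mbps):
    """GB transferred in one hour at the given average bitrate (Mbit/s)."""
    return bitrate_mbps * 3600 / 8 / 1000  # Mbit/s * s -> Mbit -> MB -> GB

for mbps in (2, 4, 8):
    print(f"{mbps} Mbps ~ {gb_per_hour(mbps):.2f} GB/hour")
```

At fleet scale, the jump from 2 to 8 Mbps quadruples the bytes served, which is presumably why Netflix targets perceptual quality rather than a fixed bitrate floor.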
Very good read, love some of the humor in this article. It helped me get through to the end!!
Also, if anyone was wondering where AV1 stands in comparison to VP8 and VP9... I just looked it up after a few years of not paying attention, and it seems Google donated VP8 and VP9 to the Alliance for Open Media (AOMedia) in 2015, and they created AV1 and released it in 2018.
Yeah, AV1 is primarily based on what Google was working on for their own successor to VP9, what would have been VP10, with technology contributions from Mozilla/Xiph's Daala and Cisco's Thor codecs.
Not doubting your experience, but I can't see a single ad using Safari on iPhone.
I was really impressed with my setup, but even after disabling content blockers (Firefox Focus) and turning off Mullvad's free DNS proxy service, still nothing!
Perhaps the author turned the ads off since you visited?
For mobile, I don't know who outside of Netflix is delivering AV1. If they are, I expect them to be leveraging the hardware AV1 decoders for battery life instead of employing a software-only solution like dav1d. That said, I think Netflix was using dav1d where it had a benefit (e.g. low-quality cellular networks):
https://netflixtechblog.com/netflix-now-streaming-av1-on-and...
One of the quite expensive paid plans, as the free one has to have "Created with Datawrapper" attribution at the bottom. I would guess they've vibe-coded their way to a premium version without paying, as the alternative is definitely outside individual people's budgets (>$500/month).
Inspecting the page, I can see some classes like "dw-chart", so I looked it up and got to this: https://www.datawrapper.de/charts. It looks a bit different on the page, but I think that's it.
I feel like this was copy-edited by ChatGPT and it really grates on me. I couldn't help but lose focus after I started seeing telltale signs of AI.
While the subject matter is interesting, I feel like obviously synthetic content falls into the “that which was not worth writing, is not worth reading either” trap.
If the author's tone is extremely ChatGPT-esque, I apologize in advance.
Intolerably ChatGPT-esque. Which is a shame, it seems like a nifty little DIY experiment.
I think what stands out to me is this cartoonishly punchy, faux-dramatic framing.
That, and specialist terms that seem to be thrown in there in an empty way, just to signal subject-matter expertise that’s not even expected of a DIYer’s experiment report:
> It’s a multi-decade, billion-dollar street fight over bytes and pixels, waged in the esoteric battlegrounds of DCT blocks and entropy coding
This didn't trip my AI detector; I instinctively skimmed to look at the numbers and conclusions. Your comment made me go back up to the top and read the opening paragraphs and I see what you are saying. It is always painful to realize you are reading AI product. I think it is less of a problem with this blog post because it is just presenting a handful of tables of numbers and a few graphs, but it seems I am already unconsciously training myself to ignore florid AI writing.
Wow, this entire thread is some direct and very valuable feedback. Thank you to everyone who weighed in. I hear you all loud and clear!
To be transparent, I was experimenting with a more "punchy," narrative style to weave in some wit and humor. I didn't want the writing to feel dry and was aiming for a flow that was more entertaining. In retrospect, I clearly overshot the mark and ended up with something that feels inauthentic and distracts from the main point.
The experiment and the data are what I was most excited to share, and the writing shouldn't obscure that. Based on this feedback, I'll revise the article to be more direct, cut the fluff, and let the numbers do the talking.
Seriously, I appreciate the reality check! This is a great lesson in "know your audience." :)
I would 100000% rather read the author's own writing even if English is their 10th language
Rather than this inflated slop that looks like someone trying to reach a word count in a paper, where one sentence becomes 15 useless ones.
Edit: This is not so much commentary on AI as it is that the core of your post is a few tables. Just post the tables and one or two sentences of conclusion, and that is all! It is so tedious to read through dozens of paragraphs of autogenerated, unnecessary nonsense that contribute nothing of value to the data.
Well, use the ChatGPT-based compression system for this article about compression.
By that I mean they might have used ChatGPT to expand this article from simple bullet points, and now you can use ChatGPT to summarize this article into succinct bullet points for quick digestion.
I agree that this text in its current style is very hard to read. It feels like the text was ballooned up to 3 or 4 times its original length with pointless "side content". Lots of distracting noise, basically. AI or not, this is not very good.
… and so I'll continue to stick with AVC, thanks! :-)
Came to say pretty much the same thing. This slop is unreadable for me at this point.
I keep getting a paragraph or two into something, read one of the terrible "It's not just word - it's massive hyperbole!" sentences, see that there are several more in subsequent paragraphs, and can't continue.
However bad the author's original writing that generated this output was, it can't be as awful as this.
landl0rd|5 months ago
> uses slopbot 9000 to explode his point into ten times the "prose"
> mfw