top | item 23895022


tfe | 5 years ago

I’d like to point out that this still has deficiencies compared to Vimeo or YouTube, who transcode the source file into multiple bitrates and adaptively serve an appropriate quality based on the viewer’s available bandwidth and screen size.

This manifests as buffering on slower connections, where YouTube or Vimeo would just downgrade the user to a lower bitrate transparently (or nearly transparently).

An end user today will expect that behavior and have very little tolerance for buffering if their connection is unable to smoothly play the one bitrate the creator published (no matter how fast the CDN is).

Edit: I’m aware that it’s possible to do adaptive bitrate streaming outside of using Vimeo or YouTube (as several commenters have explained below) however this isn’t what TFA describes and I think it’s important to note this deficiency in the author’s described approach to serving their own video.



guu|5 years ago

If you want this functionality you can encode the video into chunks at multiple quality levels using video2hls[0] and host them anywhere you like[1].

[0]: https://github.com/vincentbernat/video2hls

[1]: https://ryanparman.com/posts/2018/serving-bandwidth-friendly...

poxrud|5 years ago

You can also use ffmpeg to do this, but figuring out the correct command line options is not easy.
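For reference, a minimal single-rendition sketch of what those ffmpeg options look like (the encoder settings and filenames are illustrative, not the article's; a synthetic test clip stands in for a real source file):

```shell
# Synthesize a short test clip so the example is self-contained
# (in practice you would start from your own source video)
ffmpeg -y -f lavfi -i testsrc=duration=2:size=320x240:rate=15 \
  -pix_fmt yuv420p input.mp4

# Package it as VOD HLS: ~4-second MPEG-TS segments plus an index playlist
ffmpeg -y -i input.mp4 -c:v libx264 \
  -hls_time 4 -hls_playlist_type vod \
  -hls_segment_filename 'seg_%03d.ts' out.m3u8
```

Repeating the second command once per target bitrate (and writing a master playlist that lists the variants) is the part that takes the real fiddling.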

simlevesque|5 years ago

HLS is only half of the answer; it will only work on about half of all devices. You need HLS + MPEG-DASH.
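ffmpeg can also emit a DASH manifest much like it emits HLS; a minimal sketch (filenames and settings assumed, with a synthetic clip standing in for a real source):

```shell
# Synthesize a short test clip so the example is self-contained
ffmpeg -y -f lavfi -i testsrc=duration=2:size=320x240:rate=15 \
  -pix_fmt yuv420p input.mp4

# Package as MPEG-DASH: fragmented-MP4 segments plus an MPD manifest
ffmpeg -y -i input.mp4 -c:v libx264 -f dash out.mpd
```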

martin-adams|5 years ago

What you're referring to is HLS (HTTP Live Streaming), which is HTTP-based adaptive bitrate streaming.

BunnyCDN, which the author is recommending, does support the HLS protocol. What isn't stated is whether there are any specific steps needed to achieve that.

You're right though, the author didn't factor this in as one of their requirements.

mikeryan|5 years ago

Just want to point out that all CDNs support HLS for on-demand assets; it's just static HTTP files. With HLS you encode your file into multiple different bitrate renditions, which are then split into small 5-10 second chunks and wrapped in a text playlist for consumption. The protocol is still just static assets over HTTP, though.
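As an illustration, the top-level ("master") playlist is just a few lines of text pointing at per-bitrate variant playlists; the names, resolutions, and bandwidth figures here are invented for the example:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8
```

The player fetches this file, picks the variant that fits its measured bandwidth, and then downloads that variant's chunks as plain HTTP GETs, which is why any static CDN can serve it.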

command_tab|5 years ago

Also of note: HLS supports byte range addressing, so you can create the various stream files as single .ts files rather than a collection of segments per stream. A client can use the Range HTTP header to select the window of bytes it wants for the stream/bandwidth slot it wants. This mode is supported from something like iOS 5 and up, and ffmpeg has flags to produce such streams.
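In that mode the media playlist addresses byte windows inside a single file instead of separate segment files; roughly like this (offsets and durations invented for illustration):

```
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
#EXT-X-BYTERANGE:1048576@0
stream.ts
#EXTINF:6.0,
#EXT-X-BYTERANGE:1048576@1048576
stream.ts
#EXT-X-ENDLIST
```

ffmpeg's `-hls_flags single_file` option produces this layout; byte ranges require `#EXT-X-VERSION:4` or later.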

lucideer|5 years ago

Everyone is replying to you mentioning HLS, which (as your edit mentions) isn't described in the article.

Which is... a bit shocking. From the title of the article, I assumed the content would be related to hosting video. HLS was the first thing that came to mind and I assumed it'd be mentioned up top. I clicked to see if it listed any HTTP-related considerations I wasn't aware of in addition to HLS.

Turns out it's just an advertisement for a (very non-video-specific) CDN service, with some lines tacked on at the end telling you to use Handbrake (oddly, the CLI rather than ffmpeg?).

unethical_ban|5 years ago

It is a tutorial for stringing together a bunch of tools to transcode a video to multiple formats, upload it to S3, and pull that to a cheap CDN (that does get complimented heavily).

And it spawned a great discussion about self-hosting.

strunz|5 years ago

Handbrake CLI is great and much more user-friendly than ffmpeg.

poxrud|5 years ago

You can use HLS with the described CDN, since it's just http.

vbezhenar|5 years ago

I wonder if that's really what users expect. I hate auto-downgrading; I always set quality to the highest option and I'm happy to do other work while the video is downloading and buffering.

simias|5 years ago

If I'm watching a movie I probably want the quality to remain reasonably high; if I'm watching a Youtube video of people talking, or something where visuals don't really matter, I'll take 480p over buffering.

Youtube is pretty decent for that, you can either let it figure out what format to use, or force the resolution. That's a good compromise IMO.

ryandrake|5 years ago

I've pretty much switched over to youtube-dl for video viewing, as I find it to be superior to the web interface in most ways. I get the highest-quality file every time, instead of what their AI thinks I want. I can easily request audio-only or video-only. Download once, watch over and over without using more bandwidth each view. No suggested videos. No recommendations. No comments.

nicoburns|5 years ago

It can be. Especially users on mobile connections which can have quite low bandwidth caps.

kvz|5 years ago

Adaptive delivery could easily be added by converting the files with a video encoding service. Transloadit’s community plan is free and should have enough traffic included for these use cases. Full disclosure: I’m a Transloadit founder.

0xbkt|5 years ago

I hope you can support the reverse some day, i.e. HLS/DASH to single-file containers like MP4. It seems that you can't pull segments from remote URLs in an M3U file for the time being.

Already__Taken|5 years ago

I'd also like to point out that if you're serving any video with text in it (i.e. programming), this is a worthless feature to worry about.

I've turned off too many 720p-only talks because it's unreadable.

cxr|5 years ago

> I’d like to point out that this still has deficiencies compared to Vimeo or YouTube, who transcode the source file into multiple bitrates

This may be generally true, but for any screencast (including the author's use cases), automatically downgrading to a lower quality video based on connection speed is a bug, not a feature.

gingerlime|5 years ago

Would cloudflare stream solve this problem though?

tfe|5 years ago

Looks like it would!

emilfihlman|5 years ago

>This manifests as buffering on slower connections, where YouTube or Vimeo would just downgrade the user to a lower bitrate transparently (or nearly transparently).

As a user, I _never_ want to watch shit quality content and _MUCH_ prefer buffering to it.

Multicomp|5 years ago

I admit I too wish I could buffer video.

Back in the day, you could open a tab, hit pause, go back to whatever you are reading, and by the time you finish that article and maybe grab a cup of coffee, the video is fully loaded and you can watch at your leisure.

These days, all the websites are so smart that they realise you are not on the tab and load nothing, so even though the page has been open for 20 minutes, you still have the lovely privilege of sitting there staring at the throbber every so often, thanks to your work insisting on full VPN tunnelling.

vrotaru|5 years ago

Looks like we can assume a few things about the implied audience.

Highly (or highishly) technical people ready to pay for video courses. One can infer from that that they have a good enough connection.

It's not like everybody has to be Facebook and have their video content available for all devices and bitrates.

smart_jackal|5 years ago

This can be easily solved by presenting an option set like "slow/fast/HD" to the user, then stream accordingly. Open source has always been about lots of choices anyway!

fomine3|5 years ago

Just my anecdote, but I'm annoyed when a non-live video service uses adaptive bitrate. I prefer a waiting spinner, or degrading the quality manually, over automatically downgrading to awful quality.

account42|5 years ago

> An end user today will expect that behavior

The average end user, maybe. Not all end users; for most videos I would rather wait than get a lower quality.

trianglem|5 years ago

Maybe I don’t understand this, but unless something is encrypted, why does the downsampling have to happen at the source?

Alternatively, is there a way to encrypt something, such that you can apply a transform to the encrypted data to downsample it without knowing the contents of the file?