Amazing to read about the continued re-invention of this part of Netflix's internal systems.
I worked with this team (and its predecessors) during my time at Netflix. They achieved several "holy grails" of video encoding: a perceptual quality metric (VMAF), optimal bitrate selection per 2 second video chunk, and then optimal video chunking to be scene based rather than a fixed 2 seconds. Doing any of that in a research lab would be a challenge, but pulling it off at Netflix's scale is epic.
You might need some background on how adaptive video streaming works to fully grok this article.
But this is also just a story about a massive refactoring of a large, critical system. How many companies have you worked for that aggressively pursued refactoring/re-engineering their central systems? At most other places, I've seen risk aversion, fear, and mismanagement conspire to kill innovation. Not so at Netflix.
Pornhub serves more diverse data across the world with a much simpler pipeline. They are the ones who should be emulated. Netflix needs a bit of housecleaning to get rid of this faction in the company.
I'm curious about grain encoding - did you work on that at all? My friend heard they were doing a grain extraction layer that was re-added client side. Feel free to contact devin@techcrunch if you have any interesting insight.
I'm not impressed by the quality Netflix gives me on my phone, all while paying for a 4K plan. Streaming a locally hosted 4K file just looks way better. Apple TV+ is approaching that level of quality, so it's definitely doable.
>They achieved several "holy grails" of video encoding: a perceptual quality metric (VMAF), optimal bitrate selection per 2 second video chunk, and then optimal video chunking to be scene based rather than a fixed 2 seconds.
Both are extremely hard problems, but I wouldn't say they're at "holy grail" level. The latter is currently being offered by other encoding services. And while VMAF is much better than all the other metrics we were previously using, it still has many shortcomings. I was actually hoping that with all the AI / LLM models we'd have something better than VMAF by now.
'Long release cycles: The joint deployment meant that there was increased fear of unintended production outages as debugging and rollback can be difficult for a deployment of this size. This drove the approach of the “release train”. Every two weeks, a “snapshot” of all modules was taken, and promoted to be a “release candidate”. This release candidate then went through exhaustive testing which attempted to cover as large a surface area as possible. This testing stage took about two weeks. Thus, depending on when the code change was merged, it could take anywhere between two and four weeks to reach production.'
I guess I'm just old, but I prefer the delay with a couple of weeks of testing versus pushing to prod and having the customer test the code.
Netflix is a company that works with media people. You either understand what that implies or you don't.
And remember, this is the backend for video encoding. Issues in this aren't necessarily user visible.
The big benefit to their velocity is responsiveness. Apparently the backend team understands their customer and the timelines that the customer wants, and adjusted appropriately.
Just dealing with ads would have been problematic, because those tend to be straight 1080 or 4k with stereo. Nothing fancy, but I'll bet they didn't fit inside the chunk size they were expecting since ads are usually 30 seconds or less. And they don't need the dynamic encoding etc that normal titles do.
I wonder how much benefit dynamic encoding brings in space reduction?
What's interesting to me about this is that some companies seem to struggle to get even one reliable deployment process in place. In this case they were able to actively select the right process for the right job, even if it isn't the one they're normally geared toward using.
It's not necessarily anything Earth shattering, but it may be an issue at some smaller places with fewer resources.
Beyond the debate about microservices being good or bad, it's clear that Netflix developers are very passionate about what they do. To me this seems to play a big role in the success of a software product.
If you want to encode video, use ffmpeg. Netflix serves static movies, so encoding is going to be relatively rare and can probably be done on whatever computing resources are already available. Quality-wise, the ffmpeg/x264/x265 people probably are doing a good job already.
If you want to serve video, serve it with HLS or similar from static files stored on a CDN with a bunch of bitrate profiles. Here the problem is more creating or finding the CDN than anything to do with video.
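For background on the parent's point: a multi-bitrate HLS deployment really is just static playlists and segments. A minimal master playlist with three rendition profiles might look like this (paths and bitrates are made up for illustration):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
1080p/index.m3u8
```

The player picks a variant based on measured throughput and switches between them at segment boundaries; the server side is nothing but file hosting.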
Can't quite figure out what the purpose of all the stuff in the article could be (maybe to justify the jobs of the people doing the work?)
Disclaimer: I don't work for Netflix, but I work in this area.
Ultimately it will still likely be ffmpeg, or their internal fork of it.
But they are talking about per-title/per-shot optimisation here, not applying a blanket quality profile to every single video.
Netflix also produces its own series, which lets them optimise their encoding further.
They also have a Video Validation Service and a Video Quality Service that do automatic quality checks of the encoded video, as the post indicated.
Is it complex? Yes. But is it overly complex? Maybe not.
>Can't quite figure out what the purpose of all the stuff in the article could be
Netflix encoding is actually much more complex. Per-scene and per-title encoding, optimal bitrate selection, etc. are a lot more than what ffmpeg offers if you do it manually.
Arguably you are right, though: Netflix could ignore all of that and just brute-force the problem at much higher cost in encoding, storage, and bandwidth. But I guess at their scale it makes sense to do all these optimisations.
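To make the trade-off concrete, here's a toy Python sketch (all shot names and numbers invented, not Netflix data) of per-shot bitrate selection versus a blanket profile: each shot picks the cheapest candidate encode that clears a quality target, while a fixed profile must carry the worst-case shot's bitrate everywhere.

```python
# Toy illustration: per-shot bitrate selection vs. a fixed blanket profile.
# Each shot has candidate encodes as (kbps, quality) pairs; per-shot
# optimization picks the cheapest encode that clears the quality target.

QUALITY_TARGET = 90  # e.g. a VMAF-like score

shots = {
    "static_dialogue": [(600, 88), (900, 93), (1800, 96)],
    "slow_pan":        [(900, 89), (1400, 92), (2400, 95)],
    "action_scene":    [(1800, 86), (3200, 91), (5000, 94)],
}

def per_shot_ladder(shots, target):
    """Pick, per shot, the lowest-bitrate candidate meeting the target."""
    return {
        name: min((c for c in candidates if c[1] >= target),
                  key=lambda c: c[0])
        for name, candidates in shots.items()
    }

choice = per_shot_ladder(shots, QUALITY_TARGET)
adaptive_kbps = sum(kbps for kbps, _ in choice.values()) / len(choice)

# A blanket profile must be provisioned for the worst-case shot: the action
# scene needs 3200 kbps to hit the target, so every shot pays that rate.
blanket_kbps = 3200

print(f"per-shot avg: {adaptive_kbps:.0f} kbps, blanket: {blanket_kbps} kbps")
```

Even in this three-shot toy, the per-shot ladder averages well under the blanket rate while every shot still meets the quality target; that gap is the "brute force" cost the parent describes.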
You know, I read articles like this all the time, but the user experience of all these apps gets worse. The time to first video frame on Netflix is not great, to say the least. And the rich metadata seems to be used only internally...
My pet peeve with Netflix is that it occasionally forgets where I last paused my video. I don't often watch Netflix, so when I do, it's annoying that I can't resume from where I left off
Wait, who am I supposed to believe here?!? Prime Video tore down their micro services in favor of a monolith just last year! Which trillion dollar globocorp is my tiny, insignificant company supposed to emulate?
These are conversations just like talking about code editors and so on. No one is right or wrong. But if I had to choose: microservices are harder to control and secure. And vi? Not for me ;)
Story time: I worked at Facebook and had to work with someone who came from Netflix. He was one of those people who, when he went to a new company, simply tried to reinvent everything he came from with no care or consideration given to what's already there.
FB very much does not use microservices. The closest is in infra but the www layer is very much a massive monolith, probably too massive but that's another story. They've done some excellent engineering to make the developer experience pretty good, like you can commit to www and have it just push to prod within a few hours automatically (unless someone breaks trunk, which happens).
Anyway, this person tried to reinvent everything as microservices, and it pretty much just confirmed every preconceived notion (and hatred) of microservices that I already had.
You create a whole bunch of issues with orchestration, versioning and deployment that you otherwise don't have. That's fine if you gain a huge benefit but often you just don't get any benefit at all. You simply get way more headaches in trying to debug why things aren't working.
One of the key assumptions built into FB code that was broken is RYW (read your writes). FB uses an in-memory write-through graph database. On any given www request, any writes you make will be consistent when you read them within that request. Every part of FB assumes this is true.
This isn't true as soon as you cross an RPC boundary... much like you will with any microservices. This caused no end of problems, and the person just wouldn't hear it when it was identified as an issue before anything was done. So the net effect was 2 years spent on a migration that was ultimately cancelled.
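The read-your-writes hazard is easy to demonstrate in miniature. This toy Python sketch (standing in for the real graph store, not FB's actual infrastructure) shows the failure mode: an in-process read sees the write immediately, while a read that lands on a lagging replica, as it can once you cross an RPC boundary, does not.

```python
# Toy sketch of the read-your-writes (RYW) hazard: a write to the primary
# store is visible to same-process reads immediately, but not to reads that
# hit a replica before replication has caught up.

class Store:
    def __init__(self):
        self.primary = {}
        self.replica = {}          # replicated copy, updated lazily

    def write(self, key, value):
        self.primary[key] = value  # replica intentionally NOT updated yet

    def replicate(self):
        self.replica = dict(self.primary)

db = Store()

# Monolith-style request: read the same store you just wrote.
db.write("profile:42", "dark_mode=on")
same_process_read = db.primary["profile:42"]       # sees the write

# Microservice-style request: the read lands on the lagging replica
# before replicate() has run, so the write is not yet visible.
cross_service_read = db.replica.get("profile:42")  # None: stale read

print(same_process_read, cross_service_read)
```

Any code written against the first behavior silently breaks under the second, which is why carving RYW-dependent code into services is so painful.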
Don't be that guy. When you go into a code base, realize that things are the way they are for a reason. It might not be a good reason. But there'll be a reason. Breaking things for the sake of reinventing the world how you think it should've been done were you starting from zero is just going to be a giant waste of everybody's time.
As for Netflix video processing, they're basically encoding several thousand videos and deploying those segments to a CDN. This is nothing compared to, say, the video encoding needed for FB (let alone YouTube). Also, Netflix video processing is offline. This... isn't a hard problem. Netflix does do some cool stuff like AI scene detection to optimize encoding. But microservices feels like complete overkill.
Would be interested to also know how they handle per-title audio. With stereo -> 5.1 -> 7.1 and the side and wide layout variants, how do they think about this during the inspection and encoding process? Being completely naive to Netflix's source media, and assuming it comes in a variety of formats and media, it seems like there are decisions to make there. Though audio obviously has a much lower bandwidth burden, one would think there could still be QoE gains (and bandwidth savings) from doing the kinds of per-scene things AV1 does, but in something like Opus.
Okay, now that that's done, can someone at Netflix please figure out how to use their multiple data centres worth of distributed clusters to serve more than 4 or 5 subtitle languages please?
If anyone at Netflix would like some assistance, I've previously consulted in the areas of large-scale compression optimisation, and I'm sure we can get those 100KB text files down to under 20KB!
I'll help build distributed Kubernetes buzzword-compliant architectures, if that helps anyone get internal promotions as a part of this pan-cultural effort of inclusivity.
Does anyone have any reading material on the reliability of systems that use microservices? I've had a bit of basic probability rattling around in the back of my head that makes me suspect microservices are in general less reliable. I'd be interested in seeing a real-world analysis.
My thinking goes like this, with some simplifying assumptions. Let's say you have a monolith with 99% uptime that you rearchitect into 5 microservices, each with 99% uptime, and if any one of those services goes down your whole system is down. Let's also assume for the sake of simplicity that these microservices are completely independent, although they are almost assuredly not.
From basic probability, 99% uptime means there is some chunk of time t for which P(monolith goes down) = 1%. But
P(microservice system goes down) = P(service A down or service B down or ...) ≤ P(service A down) + P(service B down) + ... = 5% (the union bound)
In reality P(microservice system goes down) < 5% because they aren't independent and the chunk of time in which service A can go down will overlap that of service B. But still, that means the upper bound of the whole system going down is higher than for a monolith.
But microservices are pretty popular, and I'm sure someone has thought along these lines before. One potential rebuttal is that each microservice is in fact more reliable than the monolith, although from what I've seen in my career I am skeptical that's truly the case.
Where's the hole in my reasoning? (Or maybe I'm right. That would be fine too.)
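The arithmetic can be checked directly. Under the comment's own independence assumption, the exact downtime of the five-service serial chain sits just below the 5% union bound:

```python
# Availability of a 5-service serial chain, each service 99% available,
# assuming independent failures (the comment's simplifying assumption).

per_service = 0.99
n = 5

# Exact availability of the chain: all five services must be up.
chain_up = per_service ** n
chain_down = 1 - chain_up

# The comment's 5% figure is the union bound: an upper limit on the
# probability that at least one service is down.
union_bound = n * (1 - per_service)

print(f"exact downtime: {chain_down:.4f}, union bound: {union_bound:.2f}")
# exact ≈ 0.0490, bound = 0.05
```

So the reasoning is essentially right for a chain where every service is on the critical path: downtime roughly quintuples versus the 1% monolith. The replies below dispute the premise (that one service down means the whole system is down), not the math.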
In general, one of the goals of microservices should be that if one of the five services goes down, the other four should be able to operate in some capacity still.
In practice, this can make the math quite a bit messier, but I don't think it necessarily has been worse overall from my perspective.
So instead of having your system be up or down 99% of the time in a monolith, you'll have it fully up 95% of the time (using your numbers), but of that 5% of downtime, 20% of the time one of your products will be running slowly, or 10% of the time some new feature you launched won't work for specific customers in some specific region, etc.
At my company it makes things like SLA/SLO guarantees for "our services" pretty complicated in that it's hard to define what uptime truly means, but overall I think the five microservice approach, when done well, should have less than 1% of complete downtime, at the cost of more partial downtime
1) Retries. When one replica of a microservice is down, the calling service can retry, get service from an up replica, and the outage is routed around
2) Queues. Microservices lend themselves to queue and worker patterns where downtime on individual services has less effect on overall service availability
4) Outages have narrower impact. One microservice losing access to its database breaks the functionality that relies on that microservice; other functionality runs fine.
5) Changes have smaller blast radius. Most outages are caused by changes; changes in monoliths that cause outages are more likely to take the whole system offline (eg stack overflows and infinite loops crash processes). Changes that cause outages in microservices can’t knock other services offline.
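Point 1 can be quantified with the same basic probability the grandparent used. Assuming independent replica failures and perfect failover (an idealization — real replicas share failure modes), the effective availability of a single microservice improves geometrically with replica count:

```python
# Effect of retrying against independent replicas on a service's
# effective availability (idealized: independent failures, instant failover).

def with_replicas(per_replica_up: float, replicas: int) -> float:
    """P(at least one replica is up) = 1 - P(all replicas down)."""
    return 1 - (1 - per_replica_up) ** replicas

for r in (1, 2, 3):
    print(f"{r} replica(s): {with_replicas(0.99, r):.6f}")
```

Two replicas take a 99% service to four nines, three to six nines, which is why retries and failover can more than offset the serial-chain penalty computed above.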
I did something similar. If there are 5 services that each can fail independently (unlikely, but let's assume) and each has 99% uptime, then P(all up at the same time) = 0.99^5, which gives P(any one down at any moment) = 1 − 0.99^5 ≈ 5%. So this increases the failure rate to five times the original 1% downtime. And with 100s of microservices in the overall architecture, with some indirect connections between many of them, I think this number could go much higher.
Further, at least where I work, it is clear that the failure rate is higher than 5%. But with the cottage industry of observability tools, cloud-native solutions, blah blah, telling basic maths to people in responsible positions is a surefire way to get fired. I am already marked as someone opposed to progress, so I can basically take my statistics and shove them. There's a million times more data about the reliability of microservices, and it can't all be wrong.
I don’t know if our architecture truly qualifies as microservices but in my experience one of the advantages is that the system is able to limp along in a degraded state much more effectively when one service goes down whereas it’s a lot easier to bring the whole system to its knees with a single change in a monolith.
This suggests an addition to your model which is that not all outages are equally costly.
- micro services don’t always block the pipeline; often the failing one can catch up later
- scaling can happen for each micro service
- removing faulty components from the main path means the key services are less likely to crash
- you haven’t explained why feature X is more likely to crash in a micro service than a monolith, eg, you’re assuming components A and B have 0.5% crash in a monolith but 1% when run independently
Your model ignores that most of your crashes come from the same code paths in the two models; only a small contribution to the crashes comes from hosting.
> Processing Ad creatives posed some new challenges: media formats of Ads are quite different from movie and TV mezzanines that the team was familiar with, and there was a new set of media processing requirements related to the business needs of Ads.
Nice to see them rearchitecting their service around enshittification.
Make my Netflix better? How about cheaper? Did it deliver better content? Is this the work product of 2,000 engineers focused on delivering me the worst content in the best way possible? What exactly am I getting for my 12, 20 ... wait, what the hell is Netflix charging now for their garbage content...
1,000? 2,000 engineers at Netflix, and this is the article we get? This is their flex?
I am underwhelmed.
And then I always think, Netflix only has ~3,600 movies... My friend's Plex server has 4x that (in just movies). I'm also often underwhelmed by Netflix's engineering posts
The way I see it, Netflix is mostly a sprawling implementation of a mediocre Java stack, typical of a large IT department in an F500 company. But their relentless tech marketing and extremely high pay for work that is a wrapper around enterprise IT have created an aura of sophistication and cutting edge in many people's minds.
I doubt the backend engineers have any say over the content side of the company, so blaming them for that isn't really reasonable. And while it may not make it cheaper for you or improve the user-facing interface (again, another team), it probably made it easier for them to maintain, debug, and administer, which is something all sysadmins and engineers should respect.
Don't be underwhelmed. It definitely makes your Netflix better: whatever you watch can be encoded better, which enhances quality and lowers the chance of a rebuffer interrupting your experience. And the improved encoding efficiency frees up money that can be spent on content production.
But you can also just enjoy the story of developer achievement!
so it will make things better, pinky-promise!
I'm happy to pay them for the occasional good content, which I'll then torrent (because fuck smart TVs), but... their app/client/website and their system seem to just work. I'm sure there are many things to optimize, etc., but they could probably reduce their development (and ops) budget by 70-80% if they would stop fucking with the system.
though, of course, that'd require a drastically different mindset, different people, etc.
I mean, it depends? Obviously Netflix has extremely different priorities than 99.99% of the software in the world. Scale of operations is much different as well.
It's available in most countries in the world, on a lot of different devices, requiring a ton of different video processing pipelines, content delivery networks, infrastructure, etc. Even very "straightforward" things like downloading for offline viewing can be a significant effort to implement. Now think of audio sync, post-processing, subtitle delivery, localization, partnerships, etc., and you can see how you'd need a ton of engineering effort to achieve it. The scale alone makes implementation a much more nuanced affair.
You and I can dislike whatever content they're delivering, but it's very obvious that there are millions and millions of people who still enjoy it.
b33j0r|2 years ago
More stories lately are about "why we went back to monoliths and building with Borland C++."
Not long ago it was more likely "how microservices solved everything at our company, and why only morons disagree."
So are we moving towards or away from microservices? Both. We're maturing to use the right tools for the system.
sirsinsalot|2 years ago
Surely not! I want my dogmatic clickbait and LinkedIn-style grandstanding thank you very much!
How else will I stay on the hedonic treadmill of staying up-to-date with a new framework or architecture every 3 months?!
ddorian43|2 years ago
Using per-title encoding allows you to have fewer renditions for a given video, giving better cache hit rates.
Another example is pre-fetching data to the CDN nodes (when a new episode or season of a popular show comes out).
It's just extra optimizations that come up and are worth it at scale.
vergessenmir|2 years ago
Disney is already finding out that a streaming platform isn't cheap to run and is hard to do efficiently.
eterpstra|2 years ago
https://thenewstack.io/return-of-the-monolith-amazon-dumps-m...
phailhaus|2 years ago
> While it is still early days, we have already seen the benefits of the new platform, specifically the ease of feature delivery.
matthewmacleod|2 years ago
I’m not being wide.
issafram|2 years ago
/s if anyone took this seriously