qmarchi|1 year ago
Disclaimer: former Technical Solutions Engineer for GCP (i.e., customer support), and former engineer on YouTube caching.
To get it out of the way: I do not think it should have taken a journalist getting involved for this situation to be resolved.
However, I'd like to prompt Hacker News: how would you handle support requests for a product with >2.7B users, almost none of whom directly generate revenue, across hundreds of languages, in every conceivable location in the world?
It's an extremely hard problem, and I don't think anyone has gotten it right. I'll be playing devil's advocate in the comments. Keep me busy for my flights.
ryandrake|1 year ago
In this case, an entire channel was shut down, with no opportunity for appeal, on account of a single instance of someone zoom-bombing a live stream with pornography. Presumably the decision was entirely automated and there was no human in the loop.
It doesn't matter how many users you have. This "solution" is swatting a fly with a nuclear weapon. Why not just take down the offending video until the user takes corrective action? YouTube can clearly identify the offending video among the non-offending ones, so that's not a technical problem, and it can be done entirely with automation, so it wouldn't need humans. Further, they can obviously tell that the user has no history or track record of this kind of activity. Why do these tech companies always go straight for the no-recourse ban hammer?
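A minimal sketch of what such graduated, automated enforcement could look like; every name and threshold here is an assumption for illustration, not anything YouTube actually exposes:

    # Hypothetical graduated-enforcement policy: pick the narrowest remedy.
    from dataclasses import dataclass

    @dataclass
    class Channel:
        prior_strikes: int       # confirmed past violations on record
        account_age_days: int

    @dataclass
    class Violation:
        video_id: str
        severity: str            # e.g. "sexual_content", "spam"

    def enforcement_action(channel: Channel, v: Violation) -> str:
        # First offense on an established account: pull only the video
        # and wait for the owner to take corrective action.
        if channel.prior_strikes == 0 and channel.account_age_days > 90:
            return f"take_down_video:{v.video_id}"
        # Repeat offenses escalate step by step instead of jumping
        # straight to a channel-wide, no-recourse ban.
        if channel.prior_strikes < 3:
            return "suspend_uploads_pending_review"
        return "terminate_channel"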
oidar|1 year ago
> However, I'd like to prompt Hacker News: how would you handle support requests for a product with >2.7B users, almost none of whom directly generate revenue, across hundreds of languages, in every conceivable location in the world?
2.7B users is a lot, but how many of those are established content creators (say, more than 10k subscribers) who get banned on any given day? How many people would it take to review those cases?
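For a rough sense of scale, a back-of-envelope estimate; every input below is an assumption, not a published YouTube figure:

    # All inputs assumed for illustration; tweak freely.
    bans_per_day = 5_000               # established channels banned daily
    appeal_rate = 0.5                  # fraction that actually appeal
    minutes_per_review = 15            # human time per appealed case
    reviewer_minutes_per_day = 6 * 60  # ~6 productive hours per reviewer

    cases = bans_per_day * appeal_rate
    reviewers = cases * minutes_per_review / reviewer_minutes_per_day
    print(round(reviewers))            # ~104 full-time reviewers

Even if these assumed inputs are off by an order of magnitude, the headcount stays small next to a company of Google's size.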
vundercind|1 year ago
> However, I'd like to prompt Hacker News: how would you handle support requests for a product with >2.7B users, almost none of whom directly generate revenue, across hundreds of languages, in every conceivable location in the world?
One way: dollars.
Another way: don't provide services you can't support.
The only way it's actually going to happen is regulation, and when it does, we'll shockingly find it was always possible and companies just didn't feel like doing it. (The regulation will just result in a mix of the first two options.)
tossandthrow|1 year ago
This would cut into margins, but maybe it is simply not possible to run hyperscale companies with only a couple of engineers.
And maybe we should not accept that profit-seeking people want to do so anyway.
some_random|1 year ago
I agree completely that it's a really hard problem, but this isn't an unusual case in a non-English language. Accounts are going to be hacked all the time, and all the evidence should be easily available to verify the claim that an attacker, not the original account holder, uploaded the porn. There are plenty of other platforms with monstrous support requirements, and while none do it perfectly, or maybe even well, the popular perception of Google is that they have a policy from on high to not even try.
This is across all Google products, by the way: Re-Logic (developers of Terraria) were locked out of their Gmail and canceled their Stadia port over it [1]. I was trying to find a specific story related to GCP, but all I keep running into is people complaining about support [2][3][4]. These are especially bad because they affect customers who contribute to a very high-margin part of the business, and in some cases the customers have a line item for the support. Obviously bad experiences get talked about orders of magnitude more than good ones, but Google really is well known for this.
[1] https://www.techspot.com/news/88563-re-logic-cancels-terrari...
[2] https://www.reddit.com/r/googlecloud/comments/m3hi63/whats_g...
[3] https://www.reddit.com/r/googlecloud/comments/1ey0rx8/gcp_su...
[4] https://www.reddit.com/r/googlecloud/comments/owt679/how_doe...
fakedang|1 year ago
The article mentions that the channel was taken down because a hacker in the live Zoom meeting (which was being streamed into YouTube) played porn. YouTube could simply have blocked that single video while retaining the rest of the channel.
If multiple instances of users hacking Zoom meetings came to light, Google could simply block Zoom from streaming into YouTube until they fixed their shit.
gooosle|1 year ago
Is there any reason that governments should allow that business model to exist? Why can't the government require a reasonable level of support? And if that restricts how big a service can get, then that is just fine.
levkk|1 year ago
It's a social problem, not a technical one. Facilitate a hierarchy of trust among users who more or less volunteer: they moderate at the lower trust levels, and they engage with YouTube staff on unjustly banned users' behalf at the higher trust levels.
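A minimal sketch of what such a trust hierarchy could look like; the levels, thresholds, and names are all invented for illustration:

    from enum import IntEnum

    class Trust(IntEnum):
        NEW = 0        # can flag content for review
        MODERATOR = 1  # can triage and confirm flags
        ADVOCATE = 2   # can escalate an unjust ban to YouTube staff

    def can_escalate_ban(level: Trust) -> bool:
        return level >= Trust.ADVOCATE

    def promote(level: Trust, accurate_reviews: int) -> Trust:
        # Higher trust is earned through a track record of accurate
        # moderation decisions, not granted by default.
        if level == Trust.NEW and accurate_reviews >= 100:
            return Trust.MODERATOR
        if level == Trust.MODERATOR and accurate_reviews >= 1000:
            return Trust.ADVOCATE
        return level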
criddell|1 year ago
The risk is of bad incentives when providing support becomes profitable...
CamperBob2|1 year ago
Better yet, pay them. It is, after all, work.
Brian_K_White|1 year ago
Do you have any other impossible conundrums I can clear up before coffee?
clord|1 year ago
The automation should be setting flags on videos, and users should have preferences for opting in or out of those flags, with reasonable defaults. If there is a jurisdictional requirement in a user's location, YouTube sets the preference to disabled according to the law and shows a link to the regional law so users understand why.
Hence abuse is a local thing too: one can be getting flagged in one region but not in another. 'Abuse' amounts to getting certain flags auto-applied in some locations, or whatever. It should not affect the account itself, though.
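A small sketch of how per-region, per-user flag resolution could work; the flag names and the law table are assumptions for illustration:

    video_flags = {"nudity"}                 # applied by automation
    law_disabled = {"DE": {"nudity"}}        # flags forced off by local law
    user_prefs = {"nudity": True}            # this viewer opted in

    def visible(region: str) -> bool:
        for flag in video_flags:
            if flag in law_disabled.get(region, set()):
                return False   # blocked by law, in this region only
            if not user_prefs.get(flag, False):
                return False   # hidden by the viewer's own preference
        return True

    print(visible("DE"))  # False: local law overrides the preference
    print(visible("US"))  # True: same video, different region

Note that the channel itself never enters into the decision: the worst outcome for a video is being hidden somewhere, not a channel-wide ban.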
edm0nd|1 year ago
There are so many stories of legit YouTube creators being nuked by Google and not being able to get any help unless they are huge channels or the media gets involved. It's really pathetic and sad.
Do better, YouTube/Google.
Cthulhu_|1 year ago
I suspect that on both sides of the ocean there are parties clamouring for breaking up Google. YouTube as its own company would make sense, but that assumes it is financially and technologically healthy enough on its own.