qwertylicious | 6 months ago
#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.
#2. Facebook had inadequate mechanisms for evaluating their partners; they could have caught this problem but failed to do so, and therefore Facebook was negligent.
#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and therefore Facebook was malicious.
Personally, given Facebook's past extremely egregious behaviour, I think it's most likely a combination of #2 and #3: inadequate mechanisms to evaluate data partners, plus conveniently ignored signals that the data was ill-gotten. In other words, Facebook was negligent if not malicious. In either case Facebook should be held liable.
pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.
If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed, as it cannot be operated safely; Facebook should still be held liable for building and operating a harmful technology that it could not adequately govern.
Does that clarify my position?
decisionsmatter|6 months ago
You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster its signal detection when it receives data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally, without any attempt to prevent it, is a flawed argument: there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which the data Flo sent to them did). This case is very squarely #1 in your example and maybe a bit of #2.
shkkmo|6 months ago
Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.
> without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents
This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.
If those systems exist, they clearly failed to work. Moreover, the court documents indicate that Facebook didn't build out systems to check whether incoming data was health data until afterwards.
Capricorn2481|6 months ago
What everyone else is saying is that what they did is illegal, and they did it automatically, which is worse. The system you're describing was, in fact, built to do exactly that. They are advertising to people based on the honor system: whoever submits the data pinky-promises it was consensual. That's absurd.
ryandrake|6 months ago
To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
pixl97|6 months ago
In some crimes, the actus reus is what matters. For example, if you're handling stolen goods (in the US), the state can seize those goods and any gains from them, even if you had no idea they were stolen.
Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or any other documented process that could later be used in discovery: "If you never admit your product could be used for wrongdoing, then it can't be!"
changoplatanero|6 months ago
qwertylicious|6 months ago