top | item 44901400

qwertylicious | 6 months ago

Yeah, sorry, no, I have to disagree.

We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"

LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.

Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.

If Facebook did indeed build a data pipeline and targeted advertising system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.

pc86|6 months ago

What does the system look like where a human being individually verifies every piece of data fed into an advertising system? Even taking the human out of the loop, how do you verify the "legality" of one piece of data vs. another coming from the same publisher?

None of your examples have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.

qwertylicious|6 months ago

That's not my problem to solve?

If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.

You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.

Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea; financial crimes still happen even with them in place. But they represent a best-effort, if imperfect, attempt to acknowledge and mitigate those risks. Based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given that a jury trial found them guilty of misbehaviour.

> None of your examples have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.

Not at all. I'm placing this specific example in the broader context of the tech industry a) failing to consider the consequences of its actions, and b) escaping accountability.

That context matters.

const_cast|6 months ago

> What does the system look like where a human being individually verifies every piece of data fed into an advertising system?

Probably what it looked like 20 years ago.

Also, relatedly, if there's no moral or ethical way to conduct your business model, that doesn't mean you're off the hook.

The correct outcome is your business model burns to the ground. That's why I don't run a hitman business, even though it would be lucrative.

If mass scale automated targeted advertising cannot be done ethically, then it cannot be done at all. It shouldn't exist.