guessmyname | 19 days ago
Microsoft, Google, Facebook, and other large tech companies have had image recognition models capable of detecting this kind of content at scale for years, long before large language models became popular. There’s really no excuse for hosting or indexing these images as publicly accessible assets when they clearly have the technical ability to identify and exclude explicit content automatically.
Instead of putting the burden on victims to report these images one by one, companies should be proactively preventing this material from appearing in search results at all. If the technology exists, and it clearly does, then the default approach should be prevention, not reactive cleanup.
Dylan16807 | 19 days ago
If you're saying they shouldn't index any explicit images, you're talking about something very different from the article.
drdaeman | 19 days ago
But I fail to make sense of it either way. Either the nuance about lack of consent is missing, or Google is being blamed for not doing from the very first version what they have only just done now.