They're both underdefining what "intimate images" means and using the term "images" instead of "photos". That means they want this to apply to anything that can be represented visually, even if it has nothing to do with anything that happened in reality, which suggests they don't care about actual harms. The way they're using the word 'harm' seems to be more in line with the word 'offend'. So now in the UK, if an offensive image (like a painting) is posted on a website (or over any other internet protocol), it will be "treated with the same severity as child sexual abuse and terrorism content". That's wild. And dangerous. This policy will do far more damage than any painting or other non-photo image ever would.
noobermin|12 days ago
[1] https://www.theguardian.com/society/2026/feb/18/tech-firms-m...
Manuel_D|12 days ago
Fictional content is also covered by this law. How do we determine what fictional content counts as an intimate image of a real person? What if the creator of an AI image adds a birthmark that the real life subject doesn't have, is that sufficient differentiation to no longer count as an intimate image of a real person? What if they change the subject's eye color, too?
actionfromafar|12 days ago
Though I wonder whether existing frameworks around slander and libel could be adapted to address the brave new world of AI-augmented abuse.
[deleted]