top | item 47073414

superkuh|12 days ago

They're both underdefining what "intimate images" means and using the term "images" instead of "photos". So they want this to apply to everything that can be represented visually, even if it has nothing to do with anything that happened in reality. Which means they don't care about actual harms. The way they're using the word 'harm' seems to be more in line with the word 'offend'. So now in the UK, if there is an offensive image (like a painting) posted on a web site (or other internet protocol), it is going to be "treated with the same severity as child sexual abuse and terrorism content". That's wild. And dangerous. This policy will do far more damage than any painting or other non-photo image would.

noobermin|12 days ago

I had to google a bit, but this Guardian article [1] goes into a lot more detail than the Register piece here. I thought this sounded too onerous and ill-defined when I first read the Register piece, especially with censorship on the rise in Europe recently, but the Guardian piece made me side more with this particular policy. It doesn't sound as broad as the Register piece puts it: it sounds like it's specifically about revenge porn and non-consensually generating deepfake porn, not any "intimate image", which I agree would be far too broad. And while, of all governments, I'd suspect the current UK government is among the most likely to one day expand these powers to cover speech they don't like, or general pornography, this specific policy doesn't sound broad yet, according to the Guardian article. The Register piece is using "intimate image" as a euphemism, I think, whereas the intent of the policy is more defined and specific.

[1] https://www.theguardian.com/society/2026/feb/18/tech-firms-m...

iMark|12 days ago

I agree that laws such as this need to be defined very carefully, but I think "images" is the appropriate term to use, rather than "photos". LLMs make it near-trivially easy to render a photo in countless styles, after all, such as paintings or sketches.

SamoyedFurFluff|12 days ago

If I produced inappropriate images identifiable as a specific child victim, who obviously cannot consent to having inappropriate images generated of their likeness, then I believe "images" and "photos" are a distinction without a difference.

vr46|12 days ago

What bit of "intimate images shared without a victim's consent" is lacking context in the article?

Manuel_D|12 days ago

What qualifies as an "intimate image"? A photo of someone in a swimsuit at the beach?

Fictional content is also covered by this law. How do we determine what fictional content counts as an intimate image of a real person? What if the creator of an AI image adds a birthmark that the real life subject doesn't have, is that sufficient differentiation to no longer count as an intimate image of a real person? What if they change the subject's eye color, too?

actionfromafar|12 days ago

I hate to appear to defend this, but generative AI has sort of collapsed the distinction between a photo and an image. I could generate an image from a photo which told the same story, then delete the photo, and now everything is peachy fine? So that could have been a motivation for "images".

Though I wonder whether existing frameworks around slander and libel could be made to address the brave new world of AI-augmented abuse.

pjc50|12 days ago

Libel is both incredibly expensive (£100k average per case, not eligible for legal aid) and unreliable (can be used to silence true information).

femiagbabiaka|12 days ago

Blame xAI. It has to be worded in this way to capture the behavior they allowed to persist.
