top | item 47168275

nickandbro | 5 days ago

These image gen models are getting so advanced and lifelike that the general public is increasingly being duped into believing AI images are real (e.g., Facebook food images or fake OF models). Don't get me wrong, I will enjoy the benefits of using this model to express myself better than ever before, but I can't help feeling there's something very insidious about these models too.

WarmWash|5 days ago

It's more likely than not that every single person who uses the internet has viewed an AI image and taken it as real by now.

The obvious ones stand out, but there are so many that are indiscernible without spending a lot of time digging into them. Even then, there are some where you can at best guess that it's maybe AI-generated.

WD-42|5 days ago

People will continue to retreat into walled, trusted networks where they can have more confidence in the content they see. I can’t even be sure I’m responding to a real person right now.

tokai|5 days ago

Maybe not an actual argument for anything, but even before these image models, everyone who used the internet had seen a doctored image they believed to be real. There was a reason 'I can tell by the pixels' was a meme.

versk|5 days ago

We're at the point now where basically any photo that isn't shared by someone I trust or a reputable news organisation is essentially unverifiable.

The positive aspect of this advance is that I've basically stopped using social media, because of the creeping sense that everything is slop.

yen223|4 days ago

At least some of the comments here are likely AI-generated

yieldcrv|5 days ago

People only notice when they are prompted to look for AI or to scrutinize it.

A lot of these accounts mix old clips with new AI clips,

or tag onto something emotional, like a fake Epstein file image with your favorite politician, and pointing out it's AI makes people think you're deflecting because you support the politician.

Meanwhile the engagement farmer is completely exempt from scrutiny.

It's fascinating how fast and unexpectedly the direction shifts.

kevincox|5 days ago

I actually think this is a good thing. Manipulating images incredibly convincingly was already possible, but the cost was high (many hours of highly skilled work), so many people assumed that most images they were seeing were "authentic" without much consideration. By making fake images ubiquitous, we are forcing people to quickly learn that they can't believe what they see on the internet, and that tracking down sources and deciding whom to trust is critically important. People have always said that you can't believe what you see on the internet, but unfortunately many have managed to ignore this advice without major issue. This wave will force them to take that advice to heart by default.

slfnflctd|5 days ago

I remember telling my parents at a young age that I couldn't be sure Ronald Reagan was real, because I'd only ever seen him on TV and never in real life, and I knew things on TV could be fake.

That was the beginning of my journey into understanding what proper verification/vetting of a source is. It's been going on for a long time and there are always new things to learn. This should be taught to every child, starting early on.

arkmm|5 days ago

I used to also have this optimistic take, but over time I think the reality is that most people will instead just distrust unknown online sources and fall into the mental shortcuts of confirmation bias and social proof. Net effect will be even more polarization and groupthink.

ByThyGrace|4 days ago

> By making these fake images ubiquitous we are forcing people to quickly learn

That's quite a high opinion of the self-improvement ability of your Average Joe. This kind of behavior only comes with awareness, learned beforehand, and alertness of mind, and you need the population at large to be capable of it. How, if not by, say, teaching this in schools and waiting for the next generation to reach adulthood, would you expect this to happen?

manuelabeledo|5 days ago

> By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important.

Has this thought process ever worked in real life? I know plenty of seniors who still believe everything that comes out of Facebook, be it AI or not, and before that it was TV, radio, newspapers, etc.

Most people choose to believe, which is why they have a hard time confronting facts.

0x457|5 days ago

The graphic content I consume on the internet is usually for entertainment purposes. I didn't care where it came from before, and I don't care today either. Low-quality content exists in both categories, and it's a bit easier to spot when AI-generated, so that's actually a bonus.

lm28469|5 days ago

I feel like there are one or two generations of people who are tech-savvy and not 100% gullible when it comes to online things. Older and younger generations are both completely lost, imho; in a blind test you couldn't tell a monkey from a human scrolling TikTok & co.

anigbrowl|4 days ago

And if they don't?

Your post seems a little naive to me. A lot of people are just not interested in putting in the work or confronting their own confirmation bias, and there's an oversupply of bad actors who will deliberately generate fake imagery for either deception or exhaustion. Many people are just not on a quest for truth and are more interested in the activation potential of images or allegations than in their factual reliability.

toraway|4 days ago

In reality: millions of boomers are scrolling FB this very minute reacting to the most obviously fake rage/surprise/love bait AI slop you've ever seen.

whynotmaybe|5 days ago

>fake OF models

Soon many real OF models will be out of a job, once everyone can produce content to their personal taste from a few prompts.

sodacanner|5 days ago

People already have access to every form of niche pornography they could dare to imagine (for absolutely free!), I really doubt that 'personal taste' is the part that makes OF models their money. They'll be fine.

dfxm12|5 days ago

I don't think so. Talking to people in this space, I've found there are a few broad camps. There are probably more:

-They simply aren't into real women/men (so you couldn't even pay a model to do what they're looking for).

-They want to play out fantasies that would be hard to coordinate even if you could pay models (I guess this is more on the video side of things, but a string of photos can be put together into a comic)

-They want to generate imagery that would be illegal

Based on this, I would guess fetish artists (as in illustrators) are more at risk than OF models. However, AI isn't free. Depending on what you're looking for, commissions might be cheaper still for quite a while...

mjr00|5 days ago

Even ignoring the model censorship that makes high-quality sexual imagery/videos impossible, this is a crazy take. You think OF models are making money because it's the only way to see a nude man/woman with particular characteristics on the internet?

You're completely misunderstanding what the product being sold is.

sekai|5 days ago

> Soon many real OF models will be out of job when everyone will be able to produce content to their personal taste from a few prompts.

net positive to society

baal80spam|5 days ago

And this can't come soon enough.

pousada|5 days ago

You can't, really, because these powerful models are censored. You can create lewd pictures with open models, but they aren't nearly as good or easy to use.

coldtea|5 days ago

And they might have to, *gasp*, get an honest job!

Havoc|5 days ago

Don’t think the demand for real OF is going anywhere

derwiki|5 days ago

How do you know they’re real right now?

neogodless|5 days ago

> Facebook food images or fake OF models

What in the world is a fake OF model?

Does "OF" stand for "of food"?

bena|5 days ago

It stands for "OnlyFans," a website originally meant for creators to engage directly with their audiences, but which quickly became a site where women sold explicit pictures of themselves to subscribers.

vunderba|5 days ago

Jaded, but if I knew there was a possibility of a bunch of incriminating footage of me (images, video, etc.) out there in the pre-AI days, I would do my absolute best to flood the internet with as many related deepfakes (including of myself) as possible.

techpression|5 days ago

Oh, we've seen nothing yet of the chaos that generative AI will unleash on the world. Looking at Meta platforms, it's already a multi-million-dollar industry built on selling something or someone that doesn't exist. And that's just the benign stuff.

dfxm12|5 days ago

This has been true for a while with digital art, Photoshop, etc. Over time, people's BS detectors get tuned. I mean, scrolling by quickly in a feed, yeah, you might miss whether an image is "real" or not, but if you see a series of photos side by side of the same subject (like an OF model), you'll figure it out.

Also, using AI will not allow you to better express yourself. To use an analogy, it will not put your self-expression into any better focus, but just apply one of the stock IG filters to it.

itintheory|5 days ago

> a series of photos side by side of the same subject

Cameras are now "enhancing" photos with AI automatically. The contents of a 'real' photo are increasingly generated. The line is blurring and it's only going to get worse.

pancakeguy|5 days ago

Surely this is a problem that we will never be able to solve.

fortyseven|5 days ago

It's shitty, but I think it's almost as bad that people are calling everything AI. And I can't even blame them, despite how infuriating it is. It's just as insidious that even mundane things literally ARE AI now. At least twice now (that I'm aware of), I've seen some cute, harmless, otherwise non-outrageous animal video that was hiding a Sora watermark. So the crazy shit is AI. The mundane shit is AI. No wonder everyone is calling everything AI now. :P

switchbak|5 days ago

It creates a low-level paranoia: now I find myself double-checking that the YouTube video I'm watching isn't some AI slop. Creators use Getty b-roll and, increasingly, AI-generated footage so heavily that it's not a stretch for the voice and script to be auto-generated too.

I suppose if the AI were able to tell me a true and compelling story, I might not even mind so much. I just don't want to be spoon-fed drivel for 15 minutes only to find it was all made-up BS.