
Show HN: Fake or real? Try our AI image detector

22 points | aymandfire | 1 year ago | trial.nuanced.dev

Hey HN! We're Ayman and Dylan, co-founders of Nuanced (https://www.nuanced.dev/). We want to share a tool we’re working on to detect fake and real images: https://trial.nuanced.dev/demo/.

The UI is bare-bones but you’ll get the idea. Drag in or upload an image and our tool will display the probability with which it thinks the image is AI-generated. If you want, you can click “No, it’s AI” to confirm that the image was AI-generated, or “No, it’s real” to confirm that it wasn’t.

Why we’re working on this: as the adoption and quality of AI-generated images rise, they increasingly blur the line between real and artificial, and the risk of fraud and misinformation rises with them. Not being able to trust what you see online undermines whatever "realness" or authenticity online material has. Dating apps, news sites, and trust and safety teams all have a growing need to distinguish AI-generated images from authentic ones.

Our models are trained on images from various generators, such as DALL·E 3, Midjourney, and SDXL, with continuous integration of data from the latest AI image generators. The technology can detect deepfakes and verify user profile images, documents, IDs, or media images. It can also detect fake or counterfeit products, services, or experiences being marketed on e-commerce platforms.

We hope it’s fun and would be very interested in any cases it gets wrong, as well as whatever else you’d like to ask or say!

48 comments


egypturnash|1 year ago

I uploaded a drawing I had sitting around my desktop and it was 75% confident that it was AI. It's one I did myself in Adobe Illustrator so I guess I can say I'm 100% confident that it was AI, but a different expansion of that abbreviation.

btown|1 year ago

The real AI was the Adobe software that Figma obsoleted along the way :)

evbogue|1 year ago

AI == An Illustrator!?

You got me thinking down the path that sometimes when people say AI they mean something very different than what I thought AI meant.

andoando|1 year ago

I was thinking about this the other day. The issue is if you have an imperfect test to detect fakes, it gives even more credibility to fakes that pass the test.

If there are no tests, however, then we're left to question the validity of everything.
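The effect described above can be made concrete with Bayes' rule: given an imperfect detector, how much should we trust an image the detector waves through? All of the rates below (share of fakes, sensitivity, specificity) are assumed numbers for illustration, not measured properties of any real detector.

```python
# Toy Bayes calculation: trust in an image that passes an imperfect
# AI detector. All rates are assumptions chosen for illustration.

def p_real_given_pass(p_fake, sensitivity, specificity):
    """P(image is real | detector says 'real').

    sensitivity: P(detector flags AI | image is AI)
    specificity: P(detector says real | image is real)
    """
    p_real = 1.0 - p_fake
    # Images labeled 'real' = true reals passing + fakes slipping through.
    pass_real = specificity * p_real
    pass_fake = (1.0 - sensitivity) * p_fake
    return pass_real / (pass_real + pass_fake)

# Assumed: 20% of uploads are AI; detector catches 90% of fakes and
# correctly passes 95% of real photos.
print(p_real_given_pass(p_fake=0.2, sensitivity=0.9, specificity=0.95))
```

Under these made-up numbers a passing image is real about 97% of the time, which sounds reassuring, but the remaining fakes that slip through now carry an unearned stamp of authenticity, which is exactly the credibility problem being described.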

michaelbuckbee|1 year ago

What makes it even more of an issue is that it's comparatively easy to generate 1,000 images of a scene and push them all through until you get one that happens to line up in such a way as to pass detection (as compared to having to physically paint 1,000 scenes).
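The economics of that attack are easy to sketch: even if each individual image only has a small chance of fooling the detector, the chance that at least one of N cheap attempts slips through grows quickly. The per-image evasion rate below is an assumption for illustration.

```python
# Chance that at least one of n generated images evades a detector,
# assuming each image independently passes with probability p.
# p = 0.05 is an assumed evasion rate, not a measured one.

def p_any_pass(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (1, 10, 100, 1000):
    print(n, round(p_any_pass(0.05, n), 4))
```

With a 5% per-image evasion rate, an attacker who generates a few hundred candidates is nearly guaranteed to find one that passes, while a painter producing the same volume of physical scenes is not.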

8A51C|1 year ago

Continuing this thought: a photo may be genuine while the actual scene is staged. It will pass all tests.

rcfox|1 year ago

Does any image that's been touched by AI count as fake? For instance, if you took a real photo and asked AI to widen it by 1 pixel, you could argue that this is a new "fake" image generated by AI, but it's 99.9% real. What about something that's been AI-upscaled, like with DLSS?

john_noumenonic|1 year ago

It shouldn't classify the example you described as AI-generated, but we are looking at expanding functionality for similar use cases. As for AI upscaling, the current model isn't looking for it specifically, since many AI-generated images may have been upscaled at some point without us being able to denote that when labeling the data.

htrp|1 year ago

Cool hack to get some human-feedback data.

Unfortunately your system doesn't seem to be able to upload an image.

https://trial.nuanced.dev/demo/upload_progress has an event stream that polls every 2 seconds or so but doesn't seem to return any success criterion.

jprete|1 year ago

Data collection for adversarial training was also my first thought. The same training data used for classifying images as AI-generated can also be used to train a generator to produce images that fool more people.
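The adversarial dynamic above can be shown with a toy example: if an attacker can query a detector's score, gradient steps on the input push it toward "real". The linear sigmoid "detector" here is invented for the demo; real detectors are deep networks, but the mechanics of the attack are the same.

```python
import numpy as np

# Toy gradient attack on an invented linear 'detector'. Both the
# weights and the 'image' are random stand-ins for illustration.
rng = np.random.default_rng(0)

w = rng.normal(size=16)   # stand-in detector weights
x = rng.normal(size=16)   # stand-in features of an "AI image"

def ai_score(x):
    # sigmoid(w . x): probability the detector assigns to "AI-generated"
    return 1.0 / (1.0 + np.exp(-w @ x))

before = ai_score(x)
for _ in range(100):
    # d(score)/dx = score * (1 - score) * w; step against it so the
    # image looks more "real" to the detector.
    s = ai_score(x)
    x -= 0.1 * s * (1.0 - s) * w

after = ai_score(x)
print(f"detector score before={before:.3f} after={after:.3f}")
```

Each step strictly lowers the detector's "AI" score, which is why a publicly queryable detector effectively hands attackers a loss function to optimize against.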

john_noumenonic|1 year ago

Are you able to use the image upload box to select a file or drag-and-drop images?

bee_rider|1 year ago

I think iPhones do some pretty complex image processing, and other brands of phones go even farther. I believe there have been instances of phones adding in completely new details (although maybe I hallucinated that detail).

What’s the expected result and also what should we put as a “true” answer if we take a picture with our phones and upload it?

john_noumenonic|1 year ago

In the case of pictures taken with your phone, the "true" answer would be "real," as it's not a synthetically generated image, just some post-processing/clean-up. The overall classification is affected by how much of the image has been altered; if it's only a small part, it shouldn't affect the overall outcome too much.

ceejayoz|1 year ago

Samsung devices, for example, will add detail to what it thinks is the Moon.

https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...

> The test of Samsung’s phones conducted by Reddit user u/ibreakphotos was ingenious in its simplicity. They created an intentionally blurry photo of the Moon, displayed it on a computer screen, and then photographed this image using a Samsung S23 Ultra. As you can see below, the first image on the screen showed no detail at all, but the resulting picture showed a crisp and clear “photograph” of the Moon. The S23 Ultra added details that simply weren’t present before. There was no upscaling of blurry pixels and no retrieval of seemingly lost data. There was just a new Moon — a fake one.

simlevesque|1 year ago

I tried with this eclipse photo: https://www.cnet.com/a/img/resize/535a36e2cb72f06e9b3dc04254...

It said 92% AI. Do you have any stats about how often it gets it right ?

john_noumenonic|1 year ago

We're generally seeing an accuracy of 90% on test sets, with its distribution primarily being images generated by diffusion models and the "real" images that are more ground level imagery. We'll have to take astrophotography into account!
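A single accuracy number can hide a lot when the test distribution is skewed, which is the caveat being made here. A quick sketch of why per-class metrics matter; the confusion counts below are invented, not Nuanced's actual results:

```python
# Why one accuracy figure can mislead: invented confusion counts for a
# detector evaluated on a skewed test set (900 real images, 100 AI).
tp = 50    # AI images correctly flagged
fn = 50    # AI images missed
tn = 850   # real images correctly passed
fp = 50    # real images wrongly flagged

accuracy = (tp + tn) / (tp + tn + fp + fn)
recall_ai = tp / (tp + fn)      # share of AI images caught
precision_ai = tp / (tp + fp)   # trust in an "AI" verdict

print(f"accuracy={accuracy:.0%} recall(AI)={recall_ai:.0%} "
      f"precision(AI)={precision_ai:.0%}")
```

In this made-up case the detector reports 90% accuracy while catching only half the fakes, so quoting accuracy alongside the test-set composition (as done above) is the honest way to present it.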

yogorenapan|1 year ago

It’s actually pretty accurate for the images I had on hand. It only failed once you started giving it artwork rather than photos.

Low quality/distorted images also come out as AI

john_noumenonic|1 year ago

That's great to know actually, especially the low-quality/distorted images bit.

GaggiX|1 year ago

I like how confident the model is when you provide an anime image: it always thinks it's AI, even though I only provided images created by humans. I don't think I've ever seen a worse AI detector in my life. I hope this is not a real demo of the product.

rfrey|1 year ago

> don't think I've ever seen a worse AI detector in my life. I hope that this is not a real demo of the product.

"I tried it on anime images and it didn't work well on that class" would have been sufficient.

koito17|1 year ago

Heh, I tried the exact same thing. I uploaded sketches from my favorite artists. Specifically, sketches produced between 2012 and 2016. All of them were identified as AI with greater than 50% probability.

Of course, if one uploads recent sketches, one could be cynical and claim the artist traced over an AI-generated image. But I have never seen this done in practice.

aleksandrm|1 year ago

I fed it a bunch of real images and it failed on all of them.

xnx|1 year ago

s/try/train/

irobeth|1 year ago

seconding this, it failed to correctly identify an AI image which had the DALL-E watermark clearly visible

what plans are there to guard against people intentionally poisoning your training data by miscategorizing the images they upload for classification?

dvh|1 year ago

Very poor accuracy. I only gave it AI photos and they randomly came back around 60% real or AI. It's practically useless.

jewelry|1 year ago

Failed the Stable Diffusion test. Obviously a good idea, but no, the tech doesn't deliver.

moofight|1 year ago

Given a random set of realistic-looking real and AI images, we have found that humans usually score in the 65-80% accuracy range. You can give it a try here: https://sightengine.com/ai-or-not

bena|1 year ago

I was pretty dead on with photos of people. Especially if they're in color.

And it's not just a hands thing. There's often an element of surreal excess or a kind of uncanny-valley/plasticky thing going on. If I had to point something out, it would be skin. AI seems to be bad at generating skin; it has a slightly cartoony look to it. If I were to venture a guess, it's because of the number of photos out there filtered to shit.

I was worst at macro(?) landscape photography, if that's the right term: whatever it's called when you essentially take a picture from far away but zoom in and focus so the foreground and background are both sharp. On those I was close to 50/50.