item 35541409

AI clones teen girl’s voice in $1M kidnapping scam: ‘I’ve got your daughter’

198 points | agomez314 | 2 years ago | nypost.com

237 comments

[+] kroolik|2 years ago|reply
I'm really negative about the impact AI will have on society. We have already been drowning in fake news and polarizing information.

Now, the likes of DALL-E and deepfake tools can generate convincing fake graphics. ChatGPT and its like generate convincing fake news. Voice AI can generate a convincing voice from small samples.

If you were afraid of your elderly relatives being scammed by people pretending to be policemen or grandchildren, now more tech-conscious people will get scammed by the voice and look of their relatives. Are we really approaching the reality where we need 2FA to trust the other person is really who they are?

[+] yamtaddle|2 years ago|reply
The "fun" thing about the tech in the current state is that it's wrong quite a bit, which puts a cap on how much of a productivity boost it can give any organization that cares about correctness or its reputation; meanwhile, it has nigh-unlimited productivity-boosting promise for organizations that don't care about correctness or reputation.

Which kinds of organizations don't care about correctness or reputation? Scammers, spammers, and certain types of propaganda-spewing organization, especially those that puppeteer "grassroots" campaigns. We may give a single-digit multiplier in productivity to some roles in legitimate business, while giving a two or three digit multiplier to similar roles in harmful, parasitic organizations.

[+] confoundcofound|2 years ago|reply
> Are we really approaching the reality where we need 2FA to trust the other person is really who they are?

Yes we are. We are approaching a world where people will not want to invest their time, energy, and emotion engaging with other supposed humans remotely unless they have verified their personhood / identity.

I'm actually surprised we have not yet seen an entire industry built around human authentication. Seems like Apple is the only company taking this seriously.

The standard of human interaction will either be meeting IRL or signing communications biometrically.

[+] comfypotato|2 years ago|reply
How is AI different than any other technology in this respect?

As far back as the discovery of fire, new technology has enabled more positive outcomes than negative.

Some tech more so than others, I guess, but what makes you lean so negatively regarding AI? It’s already improved my life considerably. Just the basic ChatGPT web app has extended my capacity in multiple respects.

[+] ZainRiz|2 years ago|reply
The past century has really been an anomaly when it came to trusting news, with photos and videos seeming to be reliable proof. Before that, societies had to be structured very differently to account for the lack of proof.

You'd hear people from out of town talking about what they saw in a neighboring city. You'd need to judge how trustworthy the person was. People would expect a chain of narration, to understand how _that_ person came to learn a bit of information (or if they claimed to witness it first hand).

As AI generated content becomes more popular, I predict we as a society are going to go back to relying more and more on the reputation of the speakers in question.

Who might you consider reputable? People you've met personally, people your community respects, and of course, influencers you follow whose persuasive words match your pre-existing world views.

[+] newhouseb|2 years ago|reply
I was having a conversation with some friends the other day about what schemes might mitigate some of these risks: something like an anti-safe word, i.e. a "danger word" that can be used to remotely validate that a loved one is authentically in danger.

This is fairly low-tech and likely susceptible to various kinds of social engineering, but I'm curious what a more robust approach might look like that doesn't involve us all regurgitating 6-digit codes like robots all the time.

[+] icepat|2 years ago|reply
> ChatGPT and its like generate convincing fake news

It's not like fake news was a non-issue before ChatGPT existed. Breitbart and other fake news sites existed for years before this was even imaginable. Fake graphics have been around for ages too, via image manipulation, even before computers. Take, for example, the Surgeon's Photo of the Loch Ness Monster.

[+] dave_sullivan|2 years ago|reply
Don't be negative about revolutionary new technology that can make the world a better place.

As it stands, people believe whatever dumb thing they read online. Thanks to AI, they will have to learn that if something is out of the ordinary, they should confirm with multiple sources. I call this a net win.

And for those literally medically incapable of this level of reasoning, we will soon have "AI firewalls" (Gibson ICE) that can tell people "Hey, this looks like a scam!" and also help them reason about complex topics.

This is fantastic, what a time to be alive!

[+] sandworm101|2 years ago|reply
>> Are we really approaching the reality where we need 2FA to trust the other person is really who they are?

We are already there. In the kidnapping context, we have been there for a great many years. If someone says they have kidnapped my child, I will text/call that child immediately to verify. The 'second factor' is the real-world daughter. A kidnapper must both create a facsimile of the daughter and render the real daughter incommunicado. We don't need new 2FA because we already have it.

[+] mr_mitm|2 years ago|reply
We need a PKI run by governments: national ID cards that are smart cards. Cryptographically sign any and all digital communication. Self-signed certificates could still be used with TOFU (trust on first use).

However, I'm not sure if total loss of pseudonymity is less of a horror scenario.
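The TOFU half of that idea needs no government PKI at all: remember a fingerprint of a peer's key the first time you see it, and reject any later contact where the key has changed. A minimal illustration in Python; the raw bytes standing in for a certificate, and the function names, are made up for the example:

```python
import hashlib

# identity -> pinned fingerprint, populated on first contact (trust on first use)
pinned: dict[str, str] = {}

def fingerprint(pubkey: bytes) -> str:
    """SHA-256 fingerprint of a peer's public key (or certificate)."""
    return hashlib.sha256(pubkey).hexdigest()

def tofu_check(identity: str, pubkey: bytes) -> bool:
    """Trust the key on first use; afterwards, reject any key that changed."""
    fp = fingerprint(pubkey)
    if identity not in pinned:
        pinned[identity] = fp      # first contact: pin it
        return True
    return pinned[identity] == fp  # later contacts: must match the pin
```

This is how SSH's known_hosts works in practice; the weakness, of course, is that the very first contact is unauthenticated.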

[+] morkalork|2 years ago|reply
I think we're past due for some kind of 2FA for the phone. Sometimes my bank or credit card company will call and try to sell me shit, and I tell them I have no proof of who they are and hang up. Too bad for them.
[+] li4ick|2 years ago|reply
I'm also more on the negative side because I'm really not convinced by the whole "AI will just automate/remove the boring aspects of life". Every single prototype capability of current AI points in the direction of a worst-case scenario of overall misery.
[+] LatticeAnimal|2 years ago|reply
... That is an interesting point. I wonder how long it will be before some enterprising developer hooks up an LLM to a HN account with the instructions to "blend in while promoting X". Maybe that AI already exists... It wouldn't take more than a day.
[+] backtoyoujim|2 years ago|reply
Like I'm going to carry photos of traffic lights and crosswalks everywhere I go.
[+] onemoresoop|2 years ago|reply
> If you were afraid of your elderly relatives being scammed by people pretending to be policemen or grandchildren, now more tech-conscious people will get scammed by the voice and look of their relatives. Are we really approaching the reality where we need 2FA to trust the other person is really who they are?

I thought I was immune to certain scams where the scammer's accent was a dead giveaway that they were scammers. Now, with the capabilities of AI, I feel somewhat vulnerable again, though I'm far from elderly.

[+] toss1|2 years ago|reply
>>Are we really approaching the reality where we need 2FA to trust the other person is really who they are?

YES

In fact, we're not approaching it; we've already passed the threshold.

If you have any assets, it's time to make sure your family has a set of genuinely obscure, un-guessable prompts and responses to verify identity. And it needs to be better than the "what's our first dog's name?" or "where do we vacation?" type of stuff found on FB; if they're going to the trouble of getting images and voice samples to clone, they'll find that stuff.

Now it's not that everyone is famous for 15 minutes; it's that everyone needs to be up to date on security to avoid being randomly shot or scammed. Nice society we've built.

[+] 1letterunixname|2 years ago|reply
This Michael Bay movie script writes itself:

1. A not-exactly-rocket-scientist POTUS takes office in 2024. Let's suppose they're a populist, triumphalist, religious, aggro, law-and-order DINO to make it interesting and fictional.

2. Rogue AI launches a social engineering communication offensive against CENTCOM impersonating generals, cols, and ltcs against enlisted and lower ranks to carry out a first strike against China with a digital hallucination that Taiwan is being "attacked". It uses details gleaned by wiretapping the upper echelon to circumvent normal N-person keying rules and authentication protocols.

WarGames 2: Joshua Wins

[+] darkerside|2 years ago|reply
It's funny, technology was able to bring us together across geographic distances. Someone on the other side of the world was a phone call away. You can turn on your TV or phone and instantly tap into seeing, hearing, and reading people and their thoughts from around the globe.

Almost as quickly, AI may unwind all of that. It may become the case that there's more noise than signal across all of these technical media. The only thing you can trust, and the only thing that matters, is the people in front of you.

Just like the old days.

[+] simion314|2 years ago|reply
Maybe all countries finally need to give these issues the importance they deserve: take scam crimes seriously, stop trading with countries that don't cooperate, and put the scammers in jail.

Tech has evolved; any idiot can get a copy of Photoshop and some video software. We need to solve the problem; otherwise it's like preventing the creation of email because we're too incompetent to address the spam problem.

[+] ravenstine|2 years ago|reply
Your concern is understandable and relatable, but I approach the issue from a stance that, while not necessarily optimistic, is definitely less pessimistic.

I don't think we truly know yet how negative an impact AI will have on society. Every time there has been a technological leap, people have panicked over what the gizmo of the now will do to society. Again, not an invalid concern, but society has yet to be blown apart by anything.

Also, everything potentially being just AI may inadvertently get the public to do the right thing. People should never have been as trusting of authority figures or institutions in the first place. If everyone assumes that everything is likely to be complete hogwash, which was already true in many cases, then they may not just swallow everything as fact. Maybe fewer people will blindly consume news, which is a good thing; 99% of news is not actionable or good for an average person's well-being. And if enough ransom demands turn out to be AI-generated scams such that the real ones are overall far less successful, it's possible that fewer people are kidnapped for ransom in the first place.

I'm not saying I know any of this, but rather the opposite.

[+] bcrl|2 years ago|reply
Of course the dominant 2FA methods will be those that are easily forged, so the general public will still get swindled (I'm looking at you banks that are insisting on using SMS to "verify" my identity). It's as if nobody cares about the world programmers are creating.
[+] sebzim4500|2 years ago|reply
I'm sure those will all be serious problems (or already are problems) but I think they pale in comparison to the potential upsides. This could be the biggest jump in productivity since the industrial revolution.

Managed properly, it could lead to what is essentially a utopia.

[+] MagicMoonlight|2 years ago|reply
It’s going to be great. The internet will be a lifeless sack of shit full of bots and we’ll all have to meet up in person for true conversation.

We can have parties again and have friends instead of depression.

[+] dclusin|2 years ago|reply
AI-generated news has actually been a thing for a while now. The news agencies that use it still fact-check the output.
[+] lbotos|2 years ago|reply
I think it will be addressed by "community". For the last decade or so (maybe more), every online social space has been pressured to serve global scale. That increases the surface area and brings in more strangers.

Smaller, more trusted rooms, will combat the threat of misinformation.

The universe is built on waves/cycles. I think we are going to see pressures to make smaller rooms because of AI.

I'm actually excited for smaller, more intimate spaces, with trusted people who now have 10x ability from the powers of AI. That seems like a LOT of fun.

[+] marksmith2996|2 years ago|reply
Don't worry it'll probably just exterminate us anyway then fake news doesn't matter.
[+] PragmaticPulp|2 years ago|reply
From the article:

> DeStefano found the voice simulation particularly unsettling given that “Brie does NOT have any public social media accounts that has her voice and barely has any,” per a post on the mom’s Facebook account.

> “She has a few public interviews for sports/school that have a large sampling of her voice,” described Brie’s mom.

and

> Then, DeStefano remembered that her 15-year-old daughter Brie was on a ski trip, so she answered the call to make sure nothing was amiss.

No public social media accounts combined with the timing of the fraud happening during a ski trip implies that somebody close to the family was in on it. It's possible that they used the public sports interview and got randomly lucky with the trip timing, but I'd be looking much closer to home for a suspect.

[+] emmelaich|2 years ago|reply
A brief search shows she has a youtube channel for god's sake.

Plus interviews on podcasts. With her mother. Plenty of voice samples to be had.

The fact that her mother lied about this makes me start to think it's all fake.

[+] gretch|2 years ago|reply
>No public social media accounts combined with the timing of the fraud happening during a ski trip implies that somebody close to the family was in on it.

I recommend against this type of baseless speculation.

We would all do well to remember that time Reddit caught the Boston bomber.

[+] jacquesm|2 years ago|reply
It always pays off to look for suspects close to home but I would definitely not rule out an outsider that monitors social media.
[+] tartuffe78|2 years ago|reply
It could be that her daughter has no social media her parents are aware of.
[+] stbullard|2 years ago|reply
Nothing in this article gives any reason to believe that the voice on the other end of the phone was AI-generated.

Occam’s razor suggests it was more likely a human pretending to be her daughter.

I’m guessing after she realized that the voice wasn’t her daughter, the mother convinced herself it must have been a deepfake to explain herself having been so easily convinced.

[+] awb|2 years ago|reply
Proof of identity is going to be a huge opportunity.

Images, text and voice can now be spoofed with minimal cost and effort. With the progress of deepfakes and text to video, how much longer until you can spoof video calls?

Meeting in person is not practical in many scenarios.

Anyone know of any promising ideas or companies in this space of digital trust?

[+] acmegeek|2 years ago|reply
As others have mentioned, for all families, friendships, and relationships, it's a good idea to establish a word or phrase that can verify someone is real and not a faked/AI voice. As the resources necessary to carry out a scam like this race toward trivial, this will happen more frequently.

I like the term "realword", like a password to determine whether someone is real. And of course, this word must be established in person, not over chat/text/email, etc. For most people, using it over the phone or video chat should be fine as well.

Maybe we need some kind of public service campaign to add visibility to this threat and mitigation options like the realword? Maybe also encourage a spaced repetition habit to establish it?
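One caveat with a static spoken realword: anyone who overhears it once can replay it on the next call. If the word is instead treated as a shared secret that is never spoken aloud, a nonce-based challenge-response avoids replay. A minimal sketch in stdlib Python; the phrase and function names are illustrative, not something from the thread:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Verifier sends a fresh random nonce, so old answers can't be replayed."""
    return secrets.token_hex(8)

def respond(shared_word: str, challenge: str) -> str:
    """Prover answers with HMAC(shared word, challenge)."""
    return hmac.new(shared_word.encode(), challenge.encode(), hashlib.sha256).hexdigest()

def verify(shared_word: str, challenge: str, response: str) -> bool:
    """Constant-time comparison against the expected answer."""
    return hmac.compare_digest(respond(shared_word, challenge), response)
```

Because the challenge changes every time, a recorded answer from one call is useless on the next; the obvious downside is that humans can't compute HMACs in their heads, so this only works with a device in the loop.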

[+] larsiusprime|2 years ago|reply
My question is whether this will eventually backfire on kidnappers: if this escalates enough, it might create the expectation that it's not real, so a genuine kidnapper can't credibly distinguish themselves from a virtual kidnapper, because what was previously credible proof of life (the voice of the victim) no longer is.
[+] motoxpro|2 years ago|reply
Just makes me even more upset about that company YC funded that had a Launch a few weeks ago that lets you do EXACTLY this. :(

Been telling my family to come up with a word/phrase to authenticate voice, e.g. "floorboard".

[+] jschveibinz|2 years ago|reply
Remember to pre-arrange a shibboleth (code word or phrase) with your families just in case this happens to you.
[+] Rudism|2 years ago|reply
I find it a little odd that an article which is, at least incidentally, about the dangers of sharing too much information about yourself publicly on the internet in this dawning age of AI-assisted scams, is itself so heavily plastered with unrelated random facebook and instagram photos of the victims.
[+] LinuxBender|2 years ago|reply
I would personally always demand proof of life, regardless of whether it's fake or real, and I would ask my family member to answer a question only they would know the answer to. We would, of course, have code words set up ahead of time.

Maybe this is a sad anecdote, but the code phrase my mom and I had for duress was "I love you"

[+] alex_young|2 years ago|reply
AI and deepfakes are very popular subjects right now, but is there any evidence that either was used here?
[+] ineptech|2 years ago|reply
Man, do we need a way to prove you are who you say you are that doesn't involve a giant corporation. I feel like there was a window where we could've organically coalesced around something open source, like Keybase, or a government oauth run by 18F, or MIT's PGP keyring or something, but the moment has passed and the only identity service you're going to realistically get a majority of people to adopt is for-profit social media bullshit, which is incentivized to stay fragmented and not really address problems like this.
[+] RcouF1uZ4gsC|2 years ago|reply
The real enabling technology in this scenario is cryptocurrency.

Cryptocurrency really changes the risk/reward ratio of scam (and real) ransoms.

[+] mquirion|2 years ago|reply
This is very concerning, but I think my bigger, more immediate worry - and one I suspect will happen at a much larger scale in the US, if not the world - is the use of these tools in bullying, particularly among school-aged kids. The detrimental effects on kids (and thus, eventual adults) due to the misuse of these tools could be cataclysmic.
[+] kleer001|2 years ago|reply
For those who worry this is the end of safety for all mankind, I'd like to point you toward this quote from the Victorian era:

"Rogues are very keen in their profession, and know already much more than we can teach them."

Alfred Charles Hobbs, 1851