top | item 42262936

whaaaaat | 1 year ago

> Imagine creating a podcast where Mark Zuckerberg interviews Elon Musk – using their actual voices?

I'm imagining it. It sucks to imagine.

I'm imagining it being used to scam people. I'm imagining it being used to leech off performers who have worked very hard to build a recognizable voice (and it is a lot of work to speak like a performer). I'm imagining how this will be used in revenge porn. I'm imagining how this will be used to circumvent voice-controlled access systems.

This is bad. You should feel bad.

And I know you are thinking, "Wait, but I worked really hard on this!" Sorry, I appreciate that it might be technically impressive, but you've basically come out with "we've invented a device that mixes bleach and ammonia automatically in your bedroom! It's so efficient at mixing those two that we can fill a space with chlorine gas in under 10 seconds! Imagine a world where every bedroom could become a toxic site with only the push of a button."

That this is posted here, proudly, is quite frankly astoundingly embarrassing for you.

Ukv | 1 year ago

I'd claim the way most people imagine it being used for scamming (cold calls impersonating someone the victim knows) doesn't really work out in practice, because scam operations dial numbers at huge scale, expecting most people not to pick up a "scam likely" call (or to be away, or the number to be dead, etc.). Having to find a voice clip for each target before calling would tank the quantity of calls they're able to make.

For spear-phishing (impersonate the CEO, tell the assistant to transfer money) it's more feasible, but I hope it forces acceptance that "somebody sounds like X over the phone" is not, and has never been, a good verification method - people have been falling for scams like those fake ransom calls[0] for decades.

Not that there aren't potential harms, but I think they're outweighed by positive applications. Those uncomfortable with their natural voice, such as transgender people, can communicate closer to how they wish to be perceived - or someone whose voice has been impaired (whether by just a temporary cold or a permanent disorder/illness/accident) can use it with previous recordings. Privacy benefits from being able to communicate online or record videos without revealing your real voice, which I think is why many (myself included) currently resort to text-only. There's huge potential in the translation and vocal-isolation aspects aiding communication - feels to me as though we're heading towards creating our own babelfish. There's also a bunch of creative applications - doing character voices for a D&D session or audiobook, memes/satire, and likely new forms of interactive media (customised movies, audio dramas where the listener plays a role, videogame NPCs that react with more than just prerecorded lines, etc.).

[0]: https://www.fbi.gov/news/stories/virtual-kidnapping

yyuugg | 1 year ago

I think most people in America are more wary of foreign-sounding voices. If the person on the other end sounds like a good ol' boy, they get more trust.

Scammers don't have to sound like a specific person to be helped by software like this.

farzd | 1 year ago

You do realise this is not the first AI release to clone voices?

yyuugg | 1 year ago

I don't think the parent said it was. "I'm the Nth person to do a shitty thing!" doesn't absolve them of doing a shitty thing. Just because there are other thieves doesn't make theft OK.

cess11 | 1 year ago

Sure, and PoisonIvy wasn't the first RAT. So what? Does it become more ethical to assist fraudsters once more people are doing it?