From an indie game dev standpoint: I can probably say a sentence or two in a given way using my standard headset microphone, and something like this would allow for clean voice lines fairly easily, as long as they don't need to convey too much emotion. But for a $0 game, that would still be beneficial. Imagine all the 2D Zelda/FF-like games that don't get played today because people would rather listen to dialogue than read it.
Of course, there's also the preservation of the voice of a loved one. I would probably pay to hear my father's voice again, but there's probably only one or two VHS tapes with his voice on them.
James Earl Jones, presumably hedging against his eventual demise, has allowed his voice to be used for things like the Star Wars franchise [0].
Small, independent film makers can now use a skeleton crew to voice parts.
I can't imagine it would be anything other than a niche service, but there's hearing the voice of, and potentially interacting with, a chatbot/LLM that speaks in the voice of a passed loved one.
This is off the top of my head. I would also guess that this technology is a stepping stone for other weird, interesting and profoundly helpful uses.
If you've ever done voice prompt recordings for a phone system, voice cloning would be super helpful for one-off tweaks, especially if you have to record a bunch. Instead of rerecording 20 messages, which can sometimes take hours, you can use a clone of your own voice to make the necessary modifications. My friend does a lot of recordings as part of his job, and when I showed him the Adobe voice editing preview he got really excited. It has the potential to make tweaks easier, less time consuming, and to reduce voice strain.
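That workflow can be sketched as a simple diff over the prompt script, so only the tweaked messages are re-synthesized. This is a minimal sketch: `changed_prompts` is a made-up helper and the commented-out `synthesize` call stands in for whatever voice-cloning TTS service you actually use — it is not any particular vendor's API.

```python
def changed_prompts(old_script: dict[str, str], new_script: dict[str, str]) -> list[str]:
    """Return IDs of prompts whose text is new or changed -- only these
    need to be re-synthesized with the cloned voice."""
    return [
        prompt_id
        for prompt_id, text in new_script.items()
        if old_script.get(prompt_id) != text
    ]


old = {"greeting": "Thanks for calling Acme.", "hours": "We are open 9 to 5."}
new = {"greeting": "Thanks for calling Acme.", "hours": "We are open 9 to 6."}

for prompt_id in changed_prompts(old, new):
    # Hypothetical TTS call -- swap in your voice-cloning service here:
    # synthesize(text=new[prompt_id], voice="my-clone", out=f"{prompt_id}.wav")
    print(f"re-record via clone: {prompt_id}")  # only "hours" changed
```

Instead of a multi-hour studio session for one wording change, only the single modified prompt gets regenerated.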
Unifying the voice across tutorial videos, so that differences in voice don't distract the learner.
Auto non-toxic rephrasing of online chat in video games: let people hear their voice, but paraphrase what they said in a manner that doesn't turn the platform into a cesspit.
Cloning your own voice so that you can turn a script into audio without 50 takes, and without having to remove a million ums and errs.
> Auto non-toxic rephrasing of online chat in video games, let people hear their voice but paraphrase what they said in a manner that doesn't turn the platform into a cesspit.
Person A used to be able to speak, but lost their voice in an accident or for some other reason. Luckily, there is surviving audio/video with their voice on it, so text-to-speech with their own voice could be created for them to use.
My pastor has an injured vocal cord that makes him sound gritty at times. A technology like this, applied to old recordings of his speaking, might make him sound like he used to. I don’t know if he’d use something like that, since we mostly rely on the Spirit of Christ to open hearts to the truth.
Outside of public speakers, there are probably other people who’ve lost their voice or have trouble vocalizing who might want to sound like their old selves. This could help them.
Disclaimer: I think these techs will more often do damage than good. I’m just brainstorming an answer to your question.
... and put 99% of voice actors out of business. We'll eventually end up with every TV show, movie, and video game being voiced by Ryan Gosling and Beyonce, because market research.
The real answer is yes, I could probably come up with some contrived examples, like I lost my voice in a freak LLM accident and now want to clone my old voice. But this doesn't (you don't?) really need a net-benefit reason to figure it out and publish it. Because why? I assume, because "this shouldn't exist!", which is just a more palatable way to phrase "won't someone think of the children".
Society doesn't benefit from ignorance, so given it can exist, what's the problem with it existing? Why does it need a practical reason? Because people will do bad things with it? Duh, but I'd rather everyone know than just the bad guys.
My question wasn't meant to imply that a given technology should or shouldn't exist.
I was curious to see if anyone could name, off the top of their head, some practical use cases that they feel net out the potential harms of cloning and misusing someone else's voice.
There are some nice and certainly practical examples here, but I don't feel any of them net out the harms.
Perhaps there's a use case that we can't even comprehend yet that would though!
By this logic there shouldn’t be regulation on anything, because the bad guys will have it anyway.
While you can’t make it go away, you can disincentivize propagation and use, which can be the difference between thousands of cases of scams/extortions and millions. Until there’s a stronger argument for voice cloning models (talking to a dead loved one is creepy and not a positive argument), we shouldn’t encourage tools with overwhelmingly nefarious utility.
To at least give us something as a consolation for all the havoc all sorts of deep fakes will wreak on societies. It's like asking what a knife can be used for other than murder. It's a valid question.
Imagine being able to handle translations live: hearing the person's voice translated, as if they were speaking to you in your native language, in their own voice. That's a big one.
[0] https://www.theverge.com/2022/9/24/23370097/darth-vader-jame...
grayhatter|2 years ago
That feels very Orwellian.
goodluckchuck|2 years ago
Certainly entertainment: movies, TV. It opens a new opportunity for video games with generative characters.
stale2002|2 years ago
Alexa, Siri, and similar are all commonplace.
Another huge use case would be anything to do with voice acting, either in video games, cartoons, or the like.
This would completely democratize voice acting, and would empower anyone to do it for cheap.