item 42239609

Show HN: I made a tool for voice cloning

13 points | sekdek | 1 year ago | anyvoice.app

11 comments

[+] sekdek | 1 year ago
Got tired of writing 100-char-long variable names in Java during my 9-5, so I decided to relax with JS and built a website where you can clone voices and make them say any text.

Under the hood, it uses the open-source XTTS-v2 model.
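For readers curious what "uses XTTS-v2" typically means in practice, here is a minimal sketch using the open-source Coqui `TTS` package, which ships XTTS-v2 as a zero-shot voice-cloning model. This is not the site's actual backend code; the function name and defaults are illustrative assumptions. It requires `pip install TTS` plus a short reference clip of the target voice, and downloads the model weights on first use.

```python
# Hedged sketch (not the site's actual code): a zero-shot voice clone with
# the open-source XTTS-v2 model via the Coqui `TTS` package.

def clone_and_speak(text: str, speaker_wav: str, out_path: str = "out.wav") -> str:
    """Synthesize `text` in the voice from `speaker_wav` (a few seconds of
    reference audio) and write the result to `out_path`."""
    from TTS.api import TTS  # heavy dependency, imported lazily

    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
    tts.tts_to_file(
        text=text,
        speaker_wav=speaker_wav,  # reference clip of the voice to clone
        language="en",
        file_path=out_path,
    )
    return out_path
```

The notable part is how little is needed: a few seconds of reference audio and one call, which is what makes the consent concerns raised below so pointed.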

What do you think?

[+] FreezerburnV | 1 year ago
As someone with aspirations of voice acting, and who generally believes in consent for usage of voices, stuff like this raises my hackles. I know the cat is out of the bag and there’s legal stuff happening right now around this, but I don’t like seeing more of them pop up. I know you’re doing this for fun and to relax, so I don’t want to be a jerk and assume bad faith or anything, but wanted to be honest about my opinions around this tech.
[+] trod1234 | 1 year ago
I happen to fall into the same group as the previous poster, as a voice actor myself.

I know you did this as a challenge for fun, but in fairness you should keep such challenges to yourself and not publicize them when the technologies involved are a Pandora's box or ethically dubious.

I find it really hard to think of a single plausible use for this type of technology that is beneficial.

Nearly all uses directly or indirectly cause harm, except maybe voice restoration (someone cloning their own voice after vocal-cord damage), and even that would only help a limited number of people.

Used in lieu of voice actors, the demand for jobs in that sector goes to zero, eventually collapsing the economic cycle/market (non-market socialism -> failure).

Same thing for any type of acting, really, where it may violate performers' publicity or other rights.

Then there's the extreme where people use this technology to ransom victims in faux kidnappings.

If you created a public-facing website for this, I hope you ran it by a lawyer and have an ironclad liability waiver (though I'm not sure any waiver in this area of law would be sufficiently defensible). Inevitably, the question of negligence comes up after harm has been done.

You don't want to be named as a co-conspirator or otherwise in criminal proceedings when your service/application is used by criminals to commit crimes.

The juice isn't worth the squeeze imo.

[+] saaaaaam | 1 year ago
The Trump example on the homepage sounds nothing like Trump.
[+] trod1234 | 1 year ago
The underlying content is largely irrelevant.

What is relevant is what is advertised, and for what purpose. In many cases this alone may be sufficient to show intent/malice/negligence, especially when it breaks the law. I don't imagine OP has written consent from these people authorizing the use of their voice for this purpose.

This would inevitably come first before even examining content.

If the voice example isn't similar enough, that may instead be grounds to indicate fraud. Either way you cut it, the law is broken, and there is a lot of applicable law that touches on deepfake technologies.