item 24258415

HackedBunny | 5 years ago

My quick test of DeepSpeech, as posted on Facebook back in April:

--

I just ran a speech-to-text converter on a very clear clip of former Doctor Who actor Tom Baker talking in an interview.

The DeepSpeech converter uses the very latest AI deep-learning advancements to 'listen' to the audio and output the spoken words as text.

After 3 long minutes of running it on a 30-second clip, it printed out its interpretation:

"hooloomooloo how booboorowie i have a honeymoon"


twoslide|5 years ago

It's supposed to work on sentence-long audio (4-5 seconds); they suggest chunking your audio first: https://discourse.mozilla.org/t/longer-audio-files-with-deep...
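That chunking step can be sketched with the stdlib `wave` module. This is a naive fixed-length splitter (the 5-second window and output filenames are assumptions, not anything from the linked thread); a real pipeline would cut on silence with voice-activity detection so words aren't split mid-utterance:

```python
import wave

def chunk_wav(path, prefix, seconds=5):
    """Split a WAV file into consecutive chunks of at most `seconds` each.

    Chunks keep the source's sample rate, width, and channel count.
    Returns the list of chunk file paths written.
    """
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = seconds * src.getframerate()
        paths = []
        i = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = f"{prefix}_{i:03d}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)  # header frame count is fixed up on close
                dst.writeframes(frames)
            paths.append(out_path)
            i += 1
    return paths
```

Note the caveat: a fixed window can cut right through a word, which itself degrades recognition at chunk boundaries.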

magicalhippo|5 years ago

Also, there are essentially two parts to this: a neural net does speech-to-characters, and a language model then converts the character stream to words.

I found that the language model they supplied was trained on data that did not contain the words I needed, and I got significantly better results after building my own language model with the kenlm[1] tools.

[1]: https://kheafield.com/code/kenlm/
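The value of the second stage can be illustrated with a toy count-based bigram scorer. This is not kenlm (kenlm builds proper smoothed n-gram models from large corpora via its `lmplz` tool); it is just a minimal stand-in showing why an LM trained on in-domain text prefers plausible word sequences over acoustic gibberish:

```python
import math
from collections import Counter

def train_bigram_lm(corpus_sentences):
    """Toy add-one-smoothed bigram LM (a conceptual stand-in for kenlm)."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus_sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    vocab = len(unigrams)

    def logprob(sentence):
        words = ["<s>"] + sentence.split() + ["</s>"]
        # Add-one smoothing so unseen bigrams get small nonzero probability.
        return sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
            for a, b in zip(words, words[1:])
        )

    return logprob

def best_candidate(logprob, candidates):
    """Rescoring: pick the candidate transcript the LM finds most plausible."""
    return max(candidates, key=logprob)
```

The practical takeaway from the comment above stands: if your target vocabulary isn't in the LM's training data, the second stage will steer decoding toward the wrong words no matter how good the acoustic model is.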

donw|5 years ago

I didn’t know Tom Baker was Welsh.

fxtentacle|5 years ago

When I tried it out on English and German phone recordings, it performed competitively: I would rank it better than Amazon but worse than Google.

Did you perhaps forget to convert your WAV to the sampling rate the model expects?

bmn__|5 years ago

An experiment is worthless for drawing conclusions if it's not reproducible by other people.

Besides the hypothesis that DeepSpeech is bad, it could equally well be that the software is fine and your methodology was flawed.