I'm deaf. Something close to standard Canadian English is my native language. Most native English speakers claim my speech is unmarked but I think they're being polite; it's slightly marked as unusual and some with a good ear can easily tell it's because of hearing loss.
Using the accent guesser, I have a Swedish accent. Danish and Australian English follow as a close tie.
It's not just the AI. Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right? I've also been asked if I was Scandinavian.
Interestingly I've noticed that native speakers never make this mistake. They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent. That leads me to the (probably obvious) inference that whatever it is that non-native speakers use to judge accent and competency, it is different from what native speakers use. I'm guessing in my case, phrase-length tone contour. (Which I can sort of hear, and presumably reproduce well, even if I have trouble with the consonants.)
AI also really has trouble with transcribing my speech. I noticed that as early as the '90s with early speech recognition software. It was completely unusable. Even now AI transcription has much more trouble with me than with most people. Yet aside from a habit of sometimes mumbling, I'm told I speak quite clearly, by humans.
Hearing different things, as it were.
> AI also really has trouble with transcribing my speech. I noticed that as early as the '90s with early speech recognition software. It was completely unusable.
I don't know what your transcription use cases are, but you may be able to get an improvement by fine-tuning Whisper. This would require about $4 in training costs[1], and a dataset with 5-10 hours of your labeled (transcribed) speech, which may be the bigger hurdle[2].
1. 2000 steps took me 6 hours on an A100 on Colab, fine-tuning openai/whisper-large-v3 on 12 hours of data. I can share my notebook/script with you if you'd like.
2. I am working on a PWA that makes it simple for humans to correct the mistakes in initial automated transcriptions and feed the corrected dataset back into the pipeline for fine-tuning, but it's not ready yet.
I'm also deaf, and I took 14 years of speech therapy. I grew up in Alabama. The only way you would know I'm from the South is because of the pin-pen merger[1]. Otherwise, you'd think I grew up in the American Midwest, due to how my speech therapy went. Almost nobody picks up on it, unless they're linguists who already know about the pin-pen merger.
[1] https://www.acelinguist.com/2020/01/the-pin-pen-merger.html
Wow, I'm not deaf, but almost everything you mentioned applies to me too. I've never met anyone else who has experienced this before, yet all of your following points apply exactly to me:
> standard Canadian English is my native language
> Most native English speakers claim my speech is unmarked
> Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right?
> They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent.
At least 2 or 3 times a year, someone asks me if I'm British, but my parents and I were born in Canada, and I've never even been to England, so I'm not really sure why some people think that I have a British accent. Interestingly, the accent checker guesses that my accent is:
American English 89%
Australian English 3%
French 3%
which is pretty close to correct.
Yep, I'm also deaf (since age 6), went through a lot of speech therapy, and have a very pronounced deaf accent. I live in the midwestern US (specifically, Ohio) and at least once a year I get asked where I'm from - England being the most common guess, but I've also had folks ask if I'm Scottish or Australian.
AI struggles massively with my accent. I've gotten the best results out of Whisper Large v2 and even that is only perhaps 60% accurate. It's been on my todo list to experiment with using LLMs to try to clean it up further - mostly so I can do things like dictate blog post outlines to my phone on long car rides - but I haven't had as much time as I'd like to mess around with it.
> Your accent is Dutch, my friend. I identified your accent based on subtle details in your pronunciation. Want to sound like a native English speaker?
I'm British; from Yorkshire.
When letting it know how it got it wrong there's no option more specific than "English - United Kingdom". That's kind of funny, if not absurd, to anyone who knows anything of the incredible range of accents across the UK.
I also think the question "Do you have an accent when speaking English?" is an odd one. Everyone has an accent when speaking any language.
I agree there is no such thing as a "British accent", though I'm lucky that my mockney lilt is considered to be one, but Dutch, Danish and Yorkshire are very similar for historical reasons so it's somewhat understandable for you to be detected as Dutch in this app.
I find Danes speaking Danish to sound like a soft Yorkshire accent, and the vowels that Yorkies use are better written in Danish, like phøne.
> I also think the question "Do you have an accent when speaking English?" is an odd one. Everyone has an accent when speaking any language.
Sure, I agree. But look at it from the perspective of a foreigner living in an English-speaking country, which is probably their target demographic.
We know that as soon as we open our mouth the locals will instantly pigeonhole us as "a foreigner". No matter how good we might be in other areas, we will never be one of "them". The degree of prejudice that may or may not exist against us doesn't matter as much as the ever present knowledge that the locals know that we are not one of them, and the fear of being dismissed because of that.
Nobody likes to stand out like that, particularly when it so clearly puts you at a disadvantage. That sort of insecurity is what this product is aimed at.
Yeah, it's the same with there being just one "German" accent. Swiss and Austrian speakers, but also north vs. middle vs. south Germans, still sound different - even when they speak English.
It's quite offensive. English is my native tongue, I got a perfect IELTS score, and one of my parents was an English professor. But my accent makes me less than "native".
The Australian-Vietnamese continuum is well-explained by Australia being the geographically nearest region which can supply native English language teachers to English language learners in Vietnam, rather than by any intrinsic phonetic resemblance between Vietnamese and Australian English.
> This voice standardization model is an in-house accent-preserving voice conversion model.
Not sure this model works really well. As a French/Spanish native speaker, I can immediately recognize an actual French or Spanish person speaking in English, but the examples here are completely foreign to me. If I had to guess where the "French" accent was from, I would have guessed something like Nigeria. For example, Spanish speakers have a very distinct way of pronouncing "r" in English that is just not present here. I would have been unable to correctly guess French or Spanish for the ~10 examples in each language (mayyybe 1 for French).
It's probably an artifact of them lumping together all varieties/dialects of a given language. I don't speak Spanish, but I know that the R is one of the things that's different in e.g. Argentina.
For sure the voice standardization model is not perfect, but it was important for us to do especially for the voice privacy. It’s still pretty early tech.
Since our own accents generally sound neutral to ourselves, I would love someone to make an accent-doubler - take the differences between two accents and exaggerate them, so an Australian can hear what they sound like to an American, or vice-versa.
I've found that when I'm listening to recordings of me my accent really sticks out to me in a way that's completely inaudible when listening to myself live. This happens with both English and my native German.
What does "mono-tonal" mean, and what is an expressive ebook? I assume you are not American-born? I had been under the impression that rhythm was more important than the exact sounds for comprehension.
I just got a project running whereby I used Python + pdfplumber to read in 1,100 PDF files, most of my Humble Bundle collection. I extracted the text and dumped it into a 'documents' table in PostgreSQL. Then I used sentence-transformers to reduce each 1K chunk to a single 384-D vector, which I wrote back to the DB. Then I averaged these to produce a document-level embedding as a single vector.
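The averaging step (chunk vectors → one document vector) can be sketched with plain NumPy. The 384-D size matches what MiniLM-class sentence-transformers models produce; the actual `encode` call is replaced with random rows here so the sketch stays self-contained, and the L2 normalization is my own addition, not necessarily what the author did:

```python
import numpy as np

def document_embedding(chunk_vectors):
    """Average per-chunk embeddings into one document-level vector.

    chunk_vectors: (n_chunks, 384) array, one row per 1K-character chunk.
    Returns a single L2-normalized 384-D vector.
    """
    doc = chunk_vectors.mean(axis=0)
    norm = np.linalg.norm(doc)
    return doc / norm if norm > 0 else doc

# In the real pipeline each row would come from something like
#   SentenceTransformer("all-MiniLM-L6-v2").encode(chunks)  # 384-D model
# Random rows stand in here so the example runs without the model.
rng = np.random.default_rng(0)
chunk_vectors = rng.normal(size=(12, 384))
doc_vec = document_embedding(chunk_vectors)
assert doc_vec.shape == (384,)
```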
Then I was able to apply UMAP + HDBSCAN to this dataset and it produced a 2D plot of all my books. Later I put the discovered topics back in the DB and used that to compute tf-idf for my clusters, from which I could pick the top 5 terms to serve as a crude cluster label.
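The cluster-labeling step can be sketched in pure Python. This is a toy version, not the author's code; it assumes each cluster's documents are pooled into one bag of words, with idf computed across clusters so a term scores high when it's frequent in one cluster but rare in the others:

```python
import math
from collections import Counter

def top_terms_per_cluster(cluster_texts, k=5):
    """cluster_texts: dict mapping cluster id -> list of tokenized docs.
    Returns dict mapping cluster id -> top-k tf-idf terms."""
    # Pool each cluster's documents into one term-count bag.
    pooled = {c: Counter(tok for doc in docs for tok in doc)
              for c, docs in cluster_texts.items()}
    n = len(pooled)
    # Document frequency: in how many clusters does each term appear?
    df = Counter()
    for bag in pooled.values():
        df.update(bag.keys())
    labels = {}
    for c, bag in pooled.items():
        total = sum(bag.values())
        # Smoothed idf; distinctive-within-cluster terms score highest.
        scores = {t: (cnt / total) * math.log((1 + n) / (1 + df[t]))
                  for t, cnt in bag.items()}
        labels[c] = [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
    return labels
```

For example, a cluster full of baking vocabulary would surface "bake"/"oven" as its label, while a programming cluster would surface "python"/"loop".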
It took about 20 to 30 hours to finish all these steps and I was very impressed with the results. I could see my cookbooks clearly separated from my programming and math books. I could drill in and see subclusters for baking, BBQ, salads, etc.
Currently I'm putting it into a two-container docker-compose file: base PostgreSQL + a Python container I'm working on.
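A minimal compose file along those lines might look like this (service names, image tag, and volume layout are guesses for illustration, not the author's actual setup):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: books
      POSTGRES_PASSWORD: example   # use a proper secret outside local dev
    volumes:
      - pgdata:/var/lib/postgresql/data
  app:
    build: ./app                   # Python container with pdfplumber etc.
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:example@db:5432/books
volumes:
  pgdata:
```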
This is fascinating in theory, but I'm confused in practice.
When I play the different recordings, which I understand have the accent "re-applied" to a neutral voice, it's very difficult to hear any actual differences in vowels, let alone prosody. Like if I click on "French", there's something vaguely different, but it's quite... off. It certainly doesn't sound like any native French speaker I've ever heard. And after all, a huge part of accent is prosody. So I'm not sure what vocal features they're considering as "accent"?
I'm also curious what the three dimensions are supposed to represent? Obviously there's no objective answer, but if they've listened to all the samples, surely they could explain the main contrasting features each dimension seems to encode?
Apparently Persian and Russian are close, which is surprising to say the least. I know people keep getting confused about how Portuguese from Portugal and Russian sound close, but Persian is new to me.
Idea: Farsi and Russian both have a simple inventory of vowel sounds and no diphthongs, making it hard (and obvious) when attempting to speak English, which is rife with diphthongs and many different vowel sounds.
Did research on accents, pronunciation improvement, phoneme recognition, the Kaldi ecosystem, etc ... nothing has really changed in the public domain in the past few years. There's not even an accurate open-source dataset. All the self-claimed manually-labelled datasets with 10k+ hours were partly done with automation. Next issue: most models operate in a latent space, often with 50ms chunks, while pronunciation assessment requires much better accuracy. Just try to say "B" out loud - the silent part gathering energy in the lips, the loud part, and everything that resonates after. Worst part, there are too many ML papers from last-year students or junior PhD folks claiming success or fake improvements, etc.
The article itself is just a vector projection into 3D space ... the actual reality is much more complex.
Any comments on pronunciation assessment models are greatly appreciated
You are right, and I don't think the incentives exist to solve the issues you describe, because many of the building blocks people are using are aligned to erase subtle accent differences: neural codecs and transcription systems such as Whisper want to output clean/compressed representations of their inputs.
It would be interesting to do a wider test like this, but instead of clumping people together into "American English" and "British English", make the data point "in which city do people speak like you do?" and create a geographic map of accents.
I'm from the south of Sweden and I've had my "accent" made fun of by people from Malmö just because I grew up outside of Helsingborg, because the accent changes that much in just 60 kilometers.
Fascinating! How did you decouple the speaker-specific vocal characteristics (timbre, pitch range) from the accent-defining phonetic and prosodic features in the latent space?
We didn't explicitly. Because we finetuned this model for accent classification, the later transformer layers appear to ignore non-accent vocal characteristics. I verified this for gender for example.
When people mention a single "British accent", in 99% of the cases it's just a more widely understood shorthand for Received Pronunciation. I don't see how that's bad or wrong, considering how common it is in education.
I mean, if you want to be like that, you could generalize that statement to "the fact that they believe there to be a single `$LANGUAGE_OR_REGION` accent means this can be quickly discounted as nonsense". Other languages, and other varieties of English, have regional variation as well, after all--although in the case of other languages, I'll grant that the accents of, say, two German speakers from different regions might not be as distinct from each other in English as they are in German.
At any rate, I was looking forward to finding out what the accent oracle thought of my native US English accent, which sounds northern to southerners and southern to northerners, but I guess it'd probably just flag it as "American".
Very nice viz. It reminds me of the visualizations people used to do of the MNIST dataset back when the quintessential ML project was "training a handwritten-digit classifier":
https://projector.tensorflow.org/
I am curious - why UMAP and not t-SNE? (See https://pair-code.github.io/understanding-umap/) When I saw the vis, there was a collection of lines, which looks like an artifact. t-SNE (typically) gives more "organic" results of blobs, provided you set perplexity high enough.
Also, while I appreciate the examples of individual recordings, it would be interesting to map the original languages - which is close to which, in terms of their English accents.
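For anyone wanting to play with that perplexity knob, scikit-learn makes the comparison easy. A sketch, assuming scikit-learn is installed and using two synthetic blobs as a stand-in for accent embeddings (the umap-learn equivalent would be `umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(X)`):

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy stand-in for accent embeddings: two well-separated blobs in 16-D.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 0.1, size=(30, 16)),
    rng.normal(5, 0.1, size=(30, 16)),
])

# Higher perplexity emphasizes more global structure (blobbier output);
# it must stay below the number of samples.
emb = TSNE(n_components=2, perplexity=20, init="pca",
           random_state=0).fit_transform(X)
assert emb.shape == (60, 2)
```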
Why do the voices all sound so similar? I'm not talking about accent, I'm talking about the pitch, timbre, and other qualities of the voice themselves. For instance, all the phrases I heard sounded like they were said by a medium-set 45 year old man. Nothing from kids, the elderly, or people with lower / higher-pitch voices. I assume this is expected from the dataset for some reason, but am really curious about that reason. Did they just get many people with similar vocal qualities but wide ranges of accents?
> By clicking or tapping on a point, you will hear a standardized version of the corresponding recording. The reason for voice standardization is two-fold: first, it anonymizes the speaker in the original recordings in order to protect their privacy. Second, it allows us to hear each accent projected onto a neutral voice, making it easier to hear the accent differences and ignore extraneous differences like gender, recording quality, and background noise. However, there is no free lunch: it does not perfectly preserve the source accent and introduces some audible phonetic artifacts.
> This voice standardization model is an in-house accent-preserving voice conversion model.
All the accents sound like somebody from... somewhere in the third world...? but with a small trace of the named accent.
I don't know if that's intended - maybe the different recordings are not supposed to sound like their label but like a foreigner who learned English while around people with that accent?
It wrongly pegged me as Swedish. Its second choice was the place I live, and third place was where I'm from, so not too bad overall. I have been told I have a very ambiguous accent though.
Tried again and this time it got me. Second place is still Swedish.
Looking at the UMAP visualisation, there is a South African cluster overlapping with a Swedish cluster, so makes sense I guess.
It would be really cool if it could highlight the parts of the speech that gave away your accent. It guesses mine correctly most of the time (though not the first time I tried), but also lets me know my accent is pretty light.
It got me, native English speaker with British accent.
I was hoping it might drill down into regional accents though, there is a huge variety in the UK. I have a Midlands accent which can occasionally confuse non-native speakers.
Good question! It's likely because there are lots of different accents of Spanish that are distinct from each other. Our labels only capture the native language of the speaker right now, so they're all grouped together but it's definitely on our to-do list to go deeper into the sub accents of each language family!
Not sure, could be the large number of Spanish dialects represented in the dataset, label noise, or something else. There may just be too much diversity in the class to fit neatly in a cluster.
Also, the training dataset is highly imbalanced and Spanish is the most common class, so the model predicts it as a sort of default when it isn't confident -- this could lead to artifacts in the reduced 3d space.
Yeah, we would've loved to see that too. It's on our roadmap for sure. Same for some of the other languages with a large number of distinct accents, e.g. French, Chinese, Arabic, etc.
Fascinated by the cluster of Australian, British and South African. As an Australian living in UK, I hear an enormous difference between these accents - even just in the British ones, the Yorkshireman and the Geordie stick out like a sore thumb to me - the narcissism of small differences perhaps. Interestingly, my partner, who is from England, often says, of various Australians we hear (either on TV or my friends), that they sound British to her. I, meanwhile, can pick an Australian from very few words. What are we hearing differently? It is a mystery to me.
This is a fascinating look at how AI interprets accents! It reminds me of some recent advancements in speech recognition tech, like Google's Dialect Recognition feature, which also attempts to adapt to different accents. I wonder how these models could be improved further to not just recognize but also appreciate the nuances of regional accents.
This is probably because some states in Australia use the Queen's English passed down from the colonies.
I'm also British, from Devon.
Yeah I was disappointed when I realised this post was about foreign accents and not regional accents in English across the world.
I'd suggest training a little less on audio books.
Turkish and Persian seem to be the nearest neighbors.
I've tried the accent oracle test a few times and it catches me being Italian with 90%+ confidence.
The interesting thing is that if I try to fake a more English accent, like American... it tells me I'm Polish.
Which is odd, because I don't really have a Polish accent and don't speak it that well. I sound Italian even in Polish.
https://start.boldvoice.com/accent-oracle
Is there a way to subscribe to these blog posts for auto-notification?