top | item 39272240

hanselot | 2 years ago

We developed a tool that lets you kind of understand other people if the audio quality is sufficient. With further training we can openly communicate with each other without fear of misunderstanding.

Obviously it has to be racist...

boringuser2 | 2 years ago

It's not unexpected that it is biased to the language of its creators. I'd expect similar results from Chinese AI, etc.

The question is: is it reasonable for English speakers of generally European heritage to be the stewards of niche world cultures?

That seems like quite the burden, which is the opposite of strength.

kgeist | 2 years ago

>The question is it reasonable for English speakers with a generally European heritage to be the stewards of niche world cultures?

Isn't something better than nothing? It's very unlikely that a tribe in Africa or Polynesia is going to create its own ML model anytime soon.

Although if you read ML papers, the authorship is usually pretty diverse. The article is hinting at colonialism, especially as experienced by the Maori, i.e. largely at the hands of "English speakers with West European/British heritage, also male". However, the original Whisper paper has six authors, of whom two are East Asian, two are of Jewish heritage, one is a native Russian speaker, and one is a woman. I'm not sure how they're related to colonialism in NZ.

The article reads like the author has what's called "gatekeeper syndrome" in my country: they're upset that someone is training ML models on the Maori language without their official approval/oversight, because the author feels their foundation is entitled to speak for all Maori.

8note | 2 years ago

I wouldn't consider the AI models to be the stewards of the English language. The people who speak English are. When you hear a bot speak in broken English, you recognize that it's broken English.

The stewards of the language will similarly hear the bot as broken, and people who don't know the language won't know the difference.