top | item 35886579

knolax | 2 years ago

Trying to learn pronunciation through some sort of visual language annotation is one of the most counterproductive ways you could approach it. Pronunciation varies subtly from person to person and even from situation to situation, and all that information can only really be conveyed by actually listening to people speak, whereas most systems for transcribing pronunciation have to optimize for regularity. The end result is that a transcription only conveys the minimum amount of phonetic information needed to distinguish between morphemes. If you add more information, the categories become more and more subjective and harder to tell apart. For example, try doing some IPA transcriptions of a language you do speak, or listen to trained linguists try to pronounce words in a non-native language.

Think of it as trying to compress several kilobytes of information down to several bytes, then trying to reconstruct the original data entirely on the CPU, when you have dedicated hardware several orders of magnitude more powerful that uses an incompatible, black-box compression scheme.
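To make the analogy concrete, here's a toy sketch (all the numbers are illustrative, not drawn from any real phonetic model): a finely varying signal stands in for real speech audio, gets quantized down to a handful of discrete symbols the way a transcription collapses sound into a small category set, and a round-trip reconstruction from those symbols can't recover the original detail.

```python
import math

# "Rich" signal: many finely varying samples (stand-in for real speech audio)
signal = [math.sin(i / 7.0) + 0.3 * math.sin(i / 2.0) for i in range(1000)]

# "Transcription": collapse each sample into one of a few discrete symbols,
# like mapping continuous sound onto a small phonemic inventory
levels = 4
lo, hi = min(signal), max(signal)

def quantize(x):
    # map x to a symbol index 0..levels-1
    return min(levels - 1, int((x - lo) / (hi - lo) * levels))

symbols = [quantize(x) for x in signal]

# "Reconstruction": the best you can do from a symbol is its bin midpoint
def reconstruct(s):
    width = (hi - lo) / levels
    return lo + (s + 0.5) * width

restored = [reconstruct(s) for s in symbols]

# Measure what the round trip destroyed
err = sum((a - b) ** 2 for a, b in zip(signal, restored)) / len(signal)
print(f"mean squared error after round trip: {err:.4f}")
```

The error is nonzero no matter how carefully you reconstruct, because the fine variation was discarded at the quantization step; adding more levels shrinks the error but makes the bin boundaries (the "categories") ever harder to assign consistently.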
