lambdadmitry | 3 years ago
Something I didn't appreciate enough is that we don't actually hear raw sounds when we hear people speaking. We hear phonemes: clusters of physical sounds that make a semantic difference in the language. The clusters themselves aren't fixed either; they are very loose and mostly defined by what they are not. That is, the difference we perceive between "lip" and "leap" is not absolute: the actual sounds can easily overlap between speakers, but we adjust to the particular accent or speaker, relying on the fact that they probably still keep two separate phonemes there.
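That "clusters defined relative to each other" idea can be sketched in code. This is just a toy illustration, not a real phonetics model: the numbers below are made-up stand-ins for a first-formant frequency (Hz), and the two-way split at the speaker's own mean is a deliberately crude stand-in for how listeners recalibrate categories per speaker.

```python
# Toy sketch: phoneme categories as speaker-relative clusters.
# All formant values are invented for illustration, not real measurements.

def speaker_relative_centroids(samples):
    """Estimate two vowel centroids from one speaker's own productions by
    splitting at that speaker's mean: the categories are defined relative
    to each other, not by absolute frequency."""
    mean = sum(samples) / len(samples)
    low = [v for v in samples if v < mean]
    high = [v for v in samples if v >= mean]
    # Lower first formant roughly corresponds to the "leap" vowel.
    return {"leap": sum(low) / len(low), "lip": sum(high) / len(high)}

def classify(vowels, centroids):
    """Assign each vowel token to the nearest centroid's label."""
    return [min(centroids, key=lambda lab: abs(v - centroids[lab]))
            for v in vowels]

# Two hypothetical speakers whose absolute values overlap:
# 400 Hz is speaker A's "lip" region but speaker B's "leap" region.
centroids_a = speaker_relative_centroids([300, 310, 400, 410])
centroids_b = speaker_relative_centroids([400, 410, 500, 510])

print(classify([400], centroids_a))  # the same token, heard as "lip"
print(classify([400], centroids_b))  # ...or as "leap", depending on speaker
```

The same 400 Hz token lands in different categories depending on whose centroids you use, which is the overlap-between-speakers point above.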
This works very well until you start learning a second language that has not just different clusters but a different number of them. My first language is Russian, and Russian simply has fewer semantically meaningful vowels; I honestly thought the word for "milk", молоко, had three roughly equivalent vowels, whereas an English speaker would probably hear two or three distinct ones there ([məɫɐˈko]). Similarly, Russian "soft" consonants like the м in мята ("mint") are widely heard by non-native listeners as containing a "j" glide, "m-ya-ta", while native speakers just don't hear that.
Phonetics training helps you start actually hearing all those sounds, adjusting your inbuilt clustering so you begin perceiving the distinctions that natives do. You suddenly understand native accents much better, and gain a new appreciation for the language and its beauty.
It's just as much about perception as it is about accent.