I think it's obvious that she means it's something _like_ LLMs in some respects. You are correct that rhythm and intonation are very important in parsing language. (They're also an important cue when learning how to parse language!) It's clear that the human language network is not like an LLM in that sense. However, it _is_ a bit like an _early_ LLM (remember GPT-2?) in the sense that it can produce and parse language without making much deeper sense of it.
tgv|2 months ago
GolDDranks|2 months ago
Do you have any evidence for this?
I am a former linguistics student (I got my master's), and after years of absence from academia, I'm interested in the current state of affairs. So, regarding "quite separated in our heads": evidence for? Against?
Terretta|2 months ago
Is it though? If rhythm or tone changes meaning, then just add symbols for rhythm and tone to the LLM input and train on it. You'll get not just words out that differ based on those additional symbols wrapping words; you'll also get the rhythm and tone symbols in the output.
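To make the idea concrete, here is a minimal sketch of what "adding symbols for rhythm and tone" could look like as a preprocessing step. The marker names (`<rise>`, `<fall>`) and the `annotate` helper are invented for illustration, not from any real tokenizer; the point is just that prosody becomes ordinary tokens the model can condition on and emit.

```python
# Hypothetical preprocessing: interleave prosody marker tokens with word
# tokens, so a standard LLM tokenizer sees prosody as part of the sequence.
# Marker names like <rise> and <fall> are made up for this example.

def annotate(words_with_prosody):
    """Turn (word, tone) pairs into a flat token sequence with markers."""
    tokens = []
    for word, tone in words_with_prosody:
        tokens.append(f"<{tone}>")  # prosody marker precedes the word
        tokens.append(word)
    return tokens

# Same words, different intonation => different token sequences:
question = annotate([("you're", "rise"), ("coming", "rise")])
statement = annotate([("you're", "fall"), ("coming", "fall")])
print(question)   # ['<rise>', "you're", '<rise>', 'coming']
print(statement)  # ['<fall>', "you're", '<fall>', 'coming']
```

A model trained on sequences like these would learn distinct continuations for the rising and falling variants, and could generate the markers itself, which is the claim above.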