top | item 41909155

OneManyNone | 1 year ago

Counterpoint: What progress has generative linguistics made in the same amount of time that deep learning has been around? It sure doesn't seem to be working well.

Also, the racecar example arises from tokenization in LLMs - they don't actually see the raw letters of the text they read. It would be like me asking you to read this sentence in your head and then tell me which syllable would have the lowest pitch when spoken aloud. Maybe you could do it, but it would take effort because it doesn't align with the way you're interpreting the input.
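A minimal sketch of why this happens, using a made-up subword vocabulary (the greedy longest-match loop is a simplification; real tokenizers like BPE are more involved):

```python
# Toy illustration: the model receives token IDs, not characters,
# so letter-level questions ("how many c's in racecar?") aren't
# directly answerable from its input representation.
vocab = {"race": 0, "car": 1, "racecar": 2}  # hypothetical vocabulary

def tokenize(word, vocab):
    """Greedy longest-match subword tokenization (simplified sketch)."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest substring starting at i that is in the vocab.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

tokens = tokenize("racecar", vocab)
ids = [vocab[t] for t in tokens]
print(tokens, ids)  # the whole word collapses to one token ID
```

Since "racecar" is a single entry in this vocabulary, the model sees one opaque ID rather than seven letters, which is why counting characters requires it to reason indirectly about spelling.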

foldr|1 year ago

>What progress has generative linguistics made in the same amount of time that deep learning has been around? It sure doesn't seem to be working well.

Working well for what? Generative linguistics has certainly made progress in the past couple of decades, but it's not trying to solve engineering problems. If you think that generative linguistics and deep learning models are somehow competitors, you've probably misunderstood the former.

jampekka|1 year ago

Also, being able to count the number of letters in a word is not required for language capability, at least in the Chomskyan sense.