nextaccountic|3 years ago
* One, applicable to current language models (of which ChatGPT is one), claims that "they fail to capture several syntactic constructs and semantics properties" and that "their linguistic understanding is superficial". It gives an example: "they tend to incorrectly assign the verb to the subject in nested phrases like ‘the keys that the man holds ARE here’", which is not the kind of mistake that ChatGPT makes.
* Another claim is that "when text generation is optimized on next-word prediction only", "deep language models generate bland, incoherent sequences or get stuck in repetitive loops". Only this second claim concerns next-word prediction.
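The repetitive-loop failure mode is easy to reproduce in miniature: if you build any next-word predictor and always decode greedily (pick the single most likely next word), the output can fall into a cycle. A toy sketch with a made-up bigram model and corpus (not what the paper or ChatGPT uses, just an illustration of the decoding dynamics):

```python
# Toy bigram "language model" decoded greedily, to show how pure
# next-word prediction can get stuck in a repetitive loop.
# Corpus and model are made up purely for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the rug".split()

# Count bigram frequencies: bigrams[w1][w2] = how often w2 follows w1.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def greedy_generate(start, steps):
    out = [start]
    for _ in range(steps):
        following = bigrams[out[-1]]
        if not following:
            break
        # argmax over next-word probabilities -- greedy decoding
        out.append(following.most_common(1)[0][0])
    return out

print(" ".join(greedy_generate("the", 10)))
# -> "the cat sat on the cat sat on the cat sat"
```

Since "cat" is the most frequent successor of "the", the generator cycles through the → cat → sat → on → the forever. Real systems mitigate this with sampling (temperature, top-k, nucleus) rather than pure argmax, which is part of why the "bland, repetitive" critique applies to the decoding strategy more than to next-word prediction per se.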
abecedarius|3 years ago