Break language understanding and language generation apart. While I would agree that generative LLMs are understanding-free madlibs when it comes to writing text, embedding vector spaces and LLM latent spaces seem like a pretty genuine form of natural-language understanding. High-dimensional vector spaces are the best machine representation we currently have for meaning, and LLMs use them effectively.
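To make the "vectors as meaning" point concrete, here is a minimal sketch using made-up toy embeddings (the three vectors and their values are invented for illustration, not from any real model): semantically related words end up closer together under cosine similarity than unrelated ones.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical 3-d embeddings; real models use hundreds or thousands of dimensions.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.75, 0.2]
banana = [0.1, 0.2, 0.9]

# Related concepts sit closer in the space than unrelated ones.
print(cosine(king, queen) > cosine(king, banana))  # True
```

The same geometric idea, at much higher dimension and learned from data, is what lets embedding spaces encode relationships that look a lot like understanding.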