witrak | 1 year ago
>But then when you give a LLM a completely new problem, not similar to anything they have been trained on - For example, give it a snippet of code and ask it to find the bug. And they can do this. [...] I have done this when stuck on various things with great success.
I'm afraid you follow the same way of thinking about AI as the authors of the article: you accept the anthropomorphization of AI programs. You also rest your anecdotal example on an unconfirmed assumption ("completely new problem, not similar to anything they have been trained on") to support your unjustified delight in AI capabilities.
Both are - in my opinion - bad for AI development, as they promote misunderstanding and a false image of LLMs and their application in the real world, just as "I, Robot" created a false understanding of robotics (and AI...).