I never really considered this too deeply, because I've never studied "Agentic AI" before (except for natural language processing). Stallman is making a really good point. ChatGPT doesn't solve the intelligence problem. If ChatGPT were actually able to do that, it would be able to make ChatGPT 2.0 on request.
TheDong|2 months ago
What you're talking about is "The Singularity", where a computer is so powerful it can self-advance unassisted until the entire planet is paperclips. No one is claiming that ChatGPT has reached or surpassed that point.
Human-like intelligence is a much lower bar. It's easy to find arguments that ChatGPT doesn't show it (mainly that it's incapable of learning actively, and that there are many ways to show it doesn't really understand what it's saying), but a human cannot create ChatGPT 2.0 on request, so it stands to reason that a human-like intelligence doesn't necessarily have to be able to do so either.
IanCal|2 months ago
> There are systems which use machine learning to recognize specific important patterns in data. Their output can reflect real knowledge (even if not with perfect accuracy)—for instance, whether an image of tissue from an organism shows a certain medical condition, whether an insect is a bee-eating Asian hornet, whether a toddler may be at risk of becoming autistic, or how well a certain art work matches some artist's style and habits. Scientists validate the system by comparing its judgment against experimental tests. That justifies referring to these systems as “artificial intelligence.”
This is nowhere near an argument that it should be able to make new versions of itself.