Did you read the article?
The main issue with your idea is that an LLM won't know whether the algorithm it created is any good, or even whether it works at all. If it can't check that, it will never know and will never improve. You could ask it to generate a number of algorithms and then choose the best one yourself, but then you have worked as a team: the LLM did not plan anything. (A rough sketch of that workflow is below.)
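To make that "generate several, human picks the winner" point concrete, here is a minimal Python sketch. The `ask_llm` function is a hypothetical stand-in for whatever LLM client you use; the key detail is that the scoring/selection step runs outside the model, because the model itself can't verify its own output.

    from typing import Callable, List

    def ask_llm(prompt: str) -> str:
        """Hypothetical LLM call; swap in your actual client here."""
        raise NotImplementedError

    def generate_candidates(task: str, n: int = 5) -> List[str]:
        # Ask the model for n independent attempts at the same task.
        return [ask_llm(f"Write an algorithm that {task}. Attempt {i + 1}.")
                for i in range(n)]

    def pick_best(candidates: List[str], score: Callable[[str], float]) -> str:
        # The human (or an external test harness) supplies `score`;
        # the model never judges its own candidates in this loop.
        return max(candidates, key=score)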
reaperman|2 years ago
No, I don't think LLMs can "reason and plan". But I do think they can effectively mimic (fake) reasoning and planning and still arrive at the same result that actual reasoning and planning would yield, for reasonably common problems of greater-than-trivial but less-than-moderate complexity.
I think pretty much all of our production AI models today are limited by their lack of ability to self-assess, "goal-seek", and mutate themselves to "excel". I'm not 100% sure what this would look like, but I can be sure they don't have any real "drive to excel". Perhaps improvements in Reinforcement Learning will uncover something like this, but I think there may need to be a paradigm shift before we invent something like that.