macromaniac | 1 year ago
My fear is that other generative AI runs into a similar problem: it gets stuck in loops on complex problems and is unable to correct itself, because the training data covers the problem domain but not the failure modes.
visarga | 1 year ago
When an AI is set up to learn from its own mistakes, it might turn out like AlphaZero, which rediscovered the strategy of Go from scratch. LLMs are often incapable of solving complex tasks on their own, but they are greatly helped by evolutionary algorithms. If you combine LLMs with an EA, you get black-box optimization plus intuition. It's all based on learning from the environment, interactivity, and play. The LLM can provide the mutation operation, act as a judge to select surviving agents, or act as the agents themselves.
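The division of roles described above can be sketched as a small evolutionary loop. This is a minimal illustration, not anyone's actual system: `llm_mutate` and `fitness` are hypothetical stand-ins for the "LLM as mutation operator" and "LLM as judge" roles, replaced here by random character edits and a simple match score so the sketch runs without any model.

```python
import random

TARGET = "learning from the environment"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def llm_mutate(candidate: str) -> str:
    """Stand-in for the LLM-as-mutation-operator role.

    A real system would prompt an LLM to propose a variant of the
    candidate; here a random single-character edit substitutes.
    """
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def fitness(candidate: str) -> int:
    """Stand-in for the LLM-as-judge role: score a candidate.

    Here the score is just the number of positions matching TARGET.
    """
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(pop_size: int = 50, generations: int = 500) -> str:
    """Run a basic (mu + lambda)-style loop: judge, select, mutate."""
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        # Judge: rank candidates and keep the fitter half as survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        if fitness(survivors[0]) == len(TARGET):
            break
        # Mutate: refill the population with offspring of random survivors.
        population = survivors + [
            llm_mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)
```

Because survivors are carried over unchanged (elitism), the best fitness never decreases, which is what lets the loop "learn from its own mistakes" rather than drift.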
3327 | 1 year ago
[deleted]