top | item 40055763


macromaniac | 1 year ago

It's impressive that transformers, diffusion, and human-generated data can go so far in robotics. I would have expected simulation to be needed to achieve such results.

My fear is that we'll see the same problem as with other generative AI: it gets stuck in loops on complex problems and can't correct itself, because the training data covers the problem but not the failure modes.


visarga | 1 year ago

That's because most models have been trained on data created by humans for humans; an AI needs data created by AI for itself. Better to learn from your own mistakes than from the mistakes of others: they are more efficient and informative.

When an AI is set up to learn from its own mistakes it might turn out like AlphaZero, which rediscovered the strategy of Go from scratch. LLMs are often incapable of solving complex tasks on their own, but they are greatly helped by evolutionary algorithms. If you combine LLMs with EA you get black-box optimization plus intuition. It's all based on learning from the environment, interactivity, and play. LLMs can provide the mutation operation, act as a judge to select surviving candidates, or act as the agents themselves.
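The roles described above (mutation operator, fitness judge) slot into a standard evolutionary loop. Here's a minimal sketch: the `mutate` and `fitness` functions are simple stand-ins — in the scheme the comment describes, each would instead be an LLM call (e.g. "propose a small improvement to this candidate" or "score this candidate"). The target-string task is purely illustrative.

```python
import random

random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "hello world"  # toy objective; a real task would have no known answer

def fitness(candidate: str) -> int:
    # Judge role: score a candidate. An LLM could rank candidates here instead.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Mutation role: perturb one position at random.
    # An LLM prompted to "suggest a small edit" would play this part.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(generations: int = 2000, pop_size: int = 20) -> str:
    population = [
        "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        # Selection: keep the top half, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

The black-box flavor comes from the fact that the loop only ever queries `fitness`; neither the mutation operator nor the selector needs gradients or any model of the objective.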
