I cannot make out anything from this. What are you trying to say? An AI agent's fundamental task is to implement things, or in technical terms tool calling, computer use, etc. It takes input from the user in natural language, and everything is precisely defined in the prompt. The LLM is instructed to ask whenever it is not sure, so bootstrapping from hallucinations only happens when the implementation of the AI agent is poor.
Haeuserschlucht|1 year ago
Layer1 (LLM) to Layer2 (AgentWeather): Can you tell me the weather in Chicago?
AgentWeather: I have a tool called get_weather. Should I use it?
No -> AgentWeather -> LLM: I cannot tell. LLM -> User: I cannot tell.
Yes -> AgentWeather -> LLM: The air temperature is 33°C. LLM -> User: The air temperature in Chicago is 33°C.
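The two paths above can be sketched as plain Python. Everything here is hypothetical: `AgentWeather`, `get_weather`, and the confirmation step are just the names from the comment, and the "LLM" layers are stubs that pass text along rather than real models.

```python
def get_weather(city):
    # Stubbed tool; a real agent would call a weather API here.
    return "33°C"

def agent_weather(question, confirm):
    # Layer 2: it has a tool called get_weather and asks before using it.
    if confirm("get_weather"):
        return f"The air temperature is {get_weather('Chicago')}."
    return None  # the "I cannot tell" path

def llm(question, confirm):
    # Layer 1: forwards the question down, then rephrases the reply for the user.
    reply = agent_weather(question, confirm)
    if reply is None:
        return "I cannot tell."
    return reply.replace("air temperature", "air temperature in Chicago")

# The Yes-path and the No-path from the trace above:
print(llm("Can you tell me the weather in Chicago?", lambda tool: True))
print(llm("Can you tell me the weather in Chicago?", lambda tool: False))
```

The point of the toy: every hop is free-form text, so each layer is re-interpreting (and can re-phrase, drop, or mangle) what the layer below said.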
One example, out of billions, of how it could go wrong.
Layers 1, 2, ..., n are all unsupervised language models, and they "communicate" via text that's converted into tokens.
What could go wrong?