top | item 42431903

SebaSeba | 1 year ago

When does an LLM customer-support bot that is based, for example, on a RAG architecture become an LLM agent?

wpietri | 1 year ago

My take is that if the LLM outputs text for humans to read, that's not an agent. If it's making API calls and doing things with the results, that's an agent. But given the way "AI" has stretched to become the new "radium" [1], I'm sure "agent" will shortly become almost meaningless.

[1] https://en.wikipedia.org/wiki/Radium_fad
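The distinction above can be sketched as a toy loop: if the model's reply parses as a tool request, the program executes it (agent behavior); otherwise the text just goes back to the human (chatbot behavior). Everything here is hypothetical scaffolding — `fake_llm`, the tool names, and the JSON shape are assumptions for illustration, not any real API.

```python
import json

# Toy stand-in for a chat-completion call; a real system would hit a model API.
def fake_llm(messages):
    # Pretend the model decided it needs to look up an order.
    return json.dumps({"tool": "get_order_status", "args": {"order_id": "42"}})

# Tool registry: the "making API calls and doing things" side of the definition.
TOOLS = {
    "get_order_status": lambda order_id: f"Order {order_id} has shipped.",
}

def run_agent(user_message):
    reply = fake_llm([{"role": "user", "content": user_message}])
    try:
        action = json.loads(reply)
    except json.JSONDecodeError:
        return reply                      # plain prose for a human: chatbot
    if action.get("tool") in TOOLS:
        return TOOLS[action["tool"]](**action["args"])  # acts on the world: agent
    return reply

print(run_agent("Where is my order?"))  # -> Order 42 has shipped.
```

The "agent-ness" lives entirely in the branch that executes the parsed action, which is why the read-only case feels like a different category.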

zdifferentiator | 1 year ago

^^ best definition.

Right now they are "read-only", which I would call a persona.

klntsky | 1 year ago

The definition of agent is blurry. I prefer to avoid that term because it does not mean anything in particular. These are implemented as chat completion API calls + parsing + interpretation.

beardedwizard | 1 year ago

As soon as we admit to ourselves that "agent" is just another word for context isolation among coordinated LLM tasks.

Will agents still matter once models do a better job paying complete attention to large contexts?