(no title)
hooande | 2 years ago
Humans love to think of multi agent systems as being like a team of people. It's much more like a writer imagining different characters and how they would respond. When George RR Martin imagines all 500 characters in Game of Thrones, there is a lot of diversity of perspective and thought there. But all of that is coming from one intelligence and doesn't represent a collaboration in any traditional sense.
famouswaffles | 2 years ago
No, it's not. It does no good for a language model to commit to a single global persona; it needs to be able to predict text from wildly varying backgrounds and contexts. It's not pretending any more than anything else it does is pretending.
That's why experiments like the ones below actually work:
Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus? (https://arxiv.org/abs/2301.07543)
Out of One, Many: Using Language Models to Simulate Human Samples (https://arxiv.org/abs/2209.06899)
A perfect LLM would predict Einstein as well as it would predict the dumbass down the street.
Now, RLHF does incentivize a more global persona by default, but stepping away from that is trivial.
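The "silicon sampling" idea behind the two papers above can be sketched as persona-conditioned prompting: the same model, steered toward many simulated respondents. This is an illustrative sketch only — the persona list and prompt template are hypothetical, not taken from either paper.

```python
# Hypothetical sketch of persona-conditioned prompting ("silicon sampling"):
# one underlying model is asked to predict text from many different
# simulated backgrounds. Personas and template are made up for illustration.

PERSONAS = [
    "a 68-year-old retired farmer from rural Kansas",
    "a 24-year-old graduate student in Berlin",
    "a 41-year-old small-business owner in Lagos",
]

def build_prompt(persona: str, question: str) -> str:
    """Wrap one survey question in a persona-setting preamble."""
    return (
        f"You are {persona}. Answer the question below in character, "
        f"in one or two sentences.\n\n"
        f"Question: {question}\nAnswer:"
    )

# Each prompt steers the same model toward a different simulated
# respondent; aggregating the completions approximates a sampled population.
prompts = [build_prompt(p, "How do you feel about remote work?") for p in PERSONAS]
```

The point is that the diversity comes from the conditioning context, not from separate agents: swap the preamble and the "global persona" falls away.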
dragonwriter | 2 years ago
No, it's not.
It's more like 1,000 individuals sharing one genetic template.
It's just that with LLM instances, the experiential context that, combined with the template, makes an individual is very small.