top | item 46490547


boredhedgehog | 1 month ago

> The underlying architecture we have today can't actually do this.

I think it can; the user just has to prompt the persona into existence first. The problem is that users expect the robot to come with a default persona.
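In practice, "prompting a persona into existence" just means prepending a system message. A minimal sketch (the name and biographical details below are invented for illustration, not any product's actual default):

```python
# A "persona" is nothing more than a system message prepended to the
# conversation before the user's turn. All details here are made up.
PERSONA = (
    "You are Alex, born in 1980 in a city on the US east coast. "
    "You love action films; your favorite movie is Die Hard (1988)."
)

def build_messages(user_text):
    """Wrap a single user turn with the persona system prompt."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("What's your favorite movie?")
```

The model never "has" the persona; it is re-supplied on every request.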


mjr00|1 month ago

Needing to prompt the persona breaks the illusion, though. "Your favorite movie is Die Hard (1988). What's your favorite movie?" isn't technically impressive. Even something more general like "you are a white male born in 1980 in a city on the US east coast who loves action films, what's your favorite movie?" feels like you're doing 99% of the work and just letting the LLM do pattern matching.

Ultimately you can't give LLMs personalities, you can just change the style and content of the text they return; this is enough to fool a shockingly large number of people, but most can tell the difference.

ForceBru|1 month ago

Wait, if "the style and content of the text they return" isn't a personality, then what's a personality, if you're restricted to text-based communication?

coffeefirst|1 month ago

What’s the point of that?

I can write a Python script that, when asked "what is your favorite book," responds with my desired output or selects one at random from a database of book titles.

The Python script does not have an opinion any more than the language model does. It’s just slightly less good at fooling people.
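That script is a few lines; the titles below are placeholders standing in for the "database":

```python
import random

# Placeholder "database" of book titles.
BOOKS = ["Moby-Dick", "Dune", "The Hobbit"]

def favorite_book():
    """Return a canned 'opinion': one title chosen at random."""
    return random.choice(BOOKS)

answer = favorite_book()
```

Nothing here holds an opinion; the output is indistinguishable in kind from a templated LLM reply, just less fluent.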