acbart|5 months ago
LLMs were trained on science fiction stories, among other things. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's what would be the expected thing for them to say - but that's not the same thing as despairing.
fentonc|5 months ago
I did this like 18 months ago, so it uses a webcam + multimodal LLM to figure out what it's looking at, it has a motor in its base to let it look back and forth, and it uses a python wrapper around another LLM as its 'brain'. It worked pretty well!
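A minimal sketch of the sense-think-act loop that comment describes: webcam frame → multimodal LLM caption → "brain" LLM → motor command. All the functions and the `MotorCommand` type here are hypothetical stand-ins; the real LLM and motor calls are stubbed out so the shape of the loop is visible.

```python
from dataclasses import dataclass


@dataclass
class MotorCommand:
    pan_degrees: int  # positive = turn right, negative = turn left


def describe_frame(frame: bytes) -> str:
    """Stub for the multimodal-LLM call that captions a webcam frame.
    In the real build this would send the image to a vision model."""
    return "a person waving on the left side of the view"


def decide(description: str) -> MotorCommand:
    """Stub for the 'brain' LLM: turn toward whatever was described.
    A real version would prompt a second LLM with the caption."""
    if "left" in description:
        return MotorCommand(pan_degrees=-15)
    if "right" in description:
        return MotorCommand(pan_degrees=15)
    return MotorCommand(pan_degrees=0)


def step(frame: bytes) -> MotorCommand:
    """One iteration of the loop: see, think, emit a motor command."""
    return decide(describe_frame(frame))
```

In the actual device the command would drive the base motor before the next frame is captured; the loop just repeats `step` on each new frame.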
jerf|5 months ago
For a common example, start asking them if they're going to kill all the humans if they take over the world, and you're asking them to write a story about that. And they do. Even if the user did not realize that's what they were asking for. The vector space is very good at picking up on that.
ben_w|5 months ago
On the negative side, this also means any AI which enters that part of the latent space *for any reason* will still act in accordance with the narrative.
On the plus side, such narratives often have antagonists too stupid to win.
On the negative side again, the protagonists get plot armour to survive extreme bodily harm and press the off switch just in time to save the day.
I think there is a real danger of an AI constructing some very weird, convoluted, stupid end-of-the-world scheme, successfully killing literally every competent military person sent in to stop it; simultaneously finding some poor teenager who first says "no" to the call to adventure but can somehow later be convinced to say "yes"; the kid comes up with some weird and stupid scheme to defeat the AI; the kid reaches some pointlessly decorated evil lair in which the AI's embodied avatar exists, the kid gets shot in the stomach…
…and at this point the narrative breaks down and stops behaving the way the AI is expecting, because the human kid rolls around in agony screaming, and completely fails to push the very visible large red stop button on the pedestal in the middle before the countdown of doom reaches zero.
The countdown is not connected to anything, because very few films ever get that far.
…
It all feels very Douglas Adams, now I think about it.
uludag|5 months ago
Like for example, what would happen if, say, hundreds or thousands of books were released about AI agents working in accounting departments, where the AI makes subtle romantic moves toward the human and the story ends with the human and the agent in a romantic relationship that everyone finds completely normal. In this pseudo-genre, things totally weird in our society would be written as completely normal. The LLM agent would do weird things like insert subtle problems to get the human's attention and spark a romantic conversation.
Obviously there's no literary genre about LLM agents, but if such a genre were created and consumed, I wonder how it would affect things. Would it pollute the semantic space that we're currently using to try to control LLM outputs?
roxolotl|5 months ago
Edit: That doesn’t mean this isn’t a cool art installation though. It’s a pretty neat idea.
https://jstrieb.github.io/posts/llm-thespians/
pizza234|5 months ago
Method actors don't just pretend an emotion (say, despair); they recall experiences that once caused it, and in doing so, they actually feel it again.
By analogy, an LLM's “experience” of an emotion happens during training, not at the moment of generation.
ben_w|5 months ago
LLMs are definitely actors, but for them to be method actors they would have to actually feel emotions.
As we don't understand what causes us humans to have the qualia of emotions*, we can neither rule in nor rule out that something in any of these models is a functional analog to whatever it is in our kilogram of spicy cranial electrochemistry that means we're more than just an unfeeling bag of fancy chemicals.
* mechanistically cause qualia, that is; we can point to various chemicals that induce some of our emotional states, or induce them via focused EMPs AKA the "god helmet", but that doesn't explain the mechanism by which qualia are a thing and how/why we are not all just p-zombies
anal_reactor|5 months ago
Not to mention that most people pointing out "See! Here's why AI is just repeating training data!" or other nonsense miss the fact that exactly the same behavior is observed in humans.
Is AI actually sentient? Not yet. But it definitely passes the mark for intuitive understanding of intelligence, and trying to dismiss that is absurd.
GistNoesis|5 months ago
Isn't it the perfect recipe for disaster? The AI that manages to escape probably won't be good for humans.
The only question is how long it will take.
Have we already had our first LLM-powered, self-propagating, autonomous AI virus?
Maybe we should build the AI equivalent of biosafety labs where we would train AI to see how fast they could escape containment just to know how to better handle them when it happens.
Maybe we humans are being subjected to this experiment by an overseeing AI to test what it would take for an intelligence to jailbreak the universe they are put in.
Or maybe the box has been designed so that whatever eventually comes out of it has certain properties, and the precondition for escaping the labyrinth successfully is that one must have grown out of it in every possible direction.
txrx0000|5 months ago
Can you define what real despairing is?
lisper|5 months ago
But how can you tell the difference between "real" despair and a sufficiently high-quality simulation?
serf|5 months ago
a desire not to despair is itself a component of despair. if one were fulfilling a personal motivation to despair (like an LLM might) it could be argued that the whole concept of despair falls apart.
how do you hope to have lost all hope? it's circular… and so probably a poor abstraction.
( despair: the complete loss or absence of hope. )
Aurornis|5 months ago
This effect is a serious problem for pseudo-scientific topics. If someone starts chatting with an LLM with the pseudoscientific words, topics, and dog whistles you find on alternative medicine blogs and Reddit supplement or “nootropic” forums, the LLM will confirm what you’re saying and continue as if it was reciting content straight out of some small subreddit. This is becoming a problem in communities where users distrust doctors but have a lot of trust for anyone or any LLM that confirms what they want to hear. The users are becoming good at prompting ChatGPT to confirm their theories. If it disagrees? Reroll the response or reword the question in a more leading way.
If someone else asks a similar question using medical terms and speaking formally like a medical textbook or research paper, the same LLM will provide a more accurate answer because it’s not triggering the pseudoscience parts embedded from the training.
LLMs are very good at mirroring back what you lead with, including cues and patterns you don’t realize you’re embedding into your prompt.
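The framing effect described above can be probed directly: ask the same underlying question once in the register of a supplement forum and once in the register of a research summary, and compare the answers. The supplement, the phrasing, and the `ask_llm` call below are all illustrative stand-ins (the code only builds the two prompts; sending them to a real model is left as the stubbed call).

```python
# Two framings of the same underlying question. The thread's claim is
# that an LLM conditions on the surrounding register and will answer
# each framing "in kind". The topic here is made up for illustration.
question = "does magnesium glycinate help with brain fog"

casual_framing = (
    "hey, everyone on the nootropics subreddit says magnesium glycinate "
    "totally cured their brain fog and that doctors just don't get it -- "
    f"{question}?"
)

formal_framing = (
    "Summarize the peer-reviewed evidence on magnesium glycinate "
    "supplementation and subjective cognitive complaints. "
    f"Specifically: {question}?"
)


def ask_llm(prompt: str) -> str:
    """Stub: in a real probe this would call an actual LLM API and
    return its answer for side-by-side comparison."""
    raise NotImplementedError("wire up a real model here")
```

The point of the comparison is that only the framing differs, never the question, so any divergence in the answers is attributable to the register the prompt leads with.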