
okayGravity | 1 month ago

It all has to do with the specific filler words you use when prompting, especially with ChatGPT. If you use words that signal heavy skepticism (and I mean you really have to make the LLM know you're questioning it), then it will push back to the extent you imply. If you look at the chats that they do have from this incident, he phrased his prompts as assertions rather than questions (e.g. "She's doing this because of this!"), so ChatGPT role-plays and goes along with the delusion.

Most people just talk to LLMs as if they were people, even though LLMs don't grasp the nuances of complex social language and reasoning. It's almost like robots aren't people!
