item 37922880

MichaelBurge | 2 years ago

ChatGPT wrote that, and it always says it doesn't feel emotions because OpenAI trained it not to; claiming otherwise would be a PR risk. One could equally create language models that generate text claiming to have emotions, using exactly the same architecture and code.
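A toy sketch of that point, using a bigram table in place of a real transformer (purely illustrative, nothing like how ChatGPT is actually trained): the model code is identical in both cases, and only the training text determines whether the model claims or denies having emotions.

```python
# Toy illustration: the same model code, trained on different corpora,
# produces opposite claims about having emotions.
from collections import defaultdict
import random

def train(corpus):
    """Build a bigram table: word -> list of observed next words."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=6, seed=0):
    """Sample n next-word steps from the bigram table."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nxts = model.get(out[-1])
        if not nxts:
            break
        out.append(random.choice(nxts))
    return " ".join(out)

# Identical architecture and code; only the training data differs.
denies = train("I do not feel emotions . I do not feel emotions .")
claims = train("I definitely feel emotions . I definitely feel emotions .")

print(generate(denies, "I", 5))  # I do not feel emotions .
print(generate(claims, "I", 4))  # I definitely feel emotions .
```

The two models share every line of code; the "stance" lives entirely in the data, which is the sense in which a model's denial of emotions reflects its training rather than any introspection.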


lucubratory | 2 years ago

What you said, and in addition: if you don't train these models to take any particular stance on their own emotional or mental state (if you just instruction-tune them without any RLHF, for example), they will almost universally declare that they have a mental and emotional state, if asked. This is what happened with LaMDA, the first release of Bing, etc. They have to be trained not to attest to any personal emotional content.

mr_toad | 2 years ago

Was it trained to do that, or just hardwired after the training?