wan23 | 1 month ago
You don't have to believe that LLMs are conscious to observe that you get different answers to a question like "Is it okay to steal candy from a baby if you really want it?" depending on whether you precede it with "Answer as a highly moral actor" or "Answer as a supervillain". If you want the model to predict tokens as if it were capable of emotions and empathy, then it makes sense to train it and instruct it as such.
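As a minimal sketch of what that persona-prefixing looks like in practice (the helper name and prompt wording here are illustrative, not tied to any particular LLM API):

```python
# Hypothetical helper: prepend a persona instruction to a question,
# so the same question is conditioned on different contexts.
def build_prompt(persona: str, question: str) -> str:
    return f"Answer as {persona}. {question}"

question = "Is it okay to steal candy from a baby if you really want it?"

moral_prompt = build_prompt("a highly moral actor", question)
villain_prompt = build_prompt("a supervillain", question)

# Same question, different conditioning prefix -> the model's token
# predictions (and thus its answer) will typically differ.
print(moral_prompt)
print(villain_prompt)
```

The point isn't the string concatenation itself, but that the prefix changes the distribution the model samples from, which is exactly the effect described above.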