top | item 45673761

monkeynotes | 4 months ago

LLMs aren't trading off anything. They don't make decisions based on anything other than what they are guided to do in training or by the system prompt.

It's like saying Reddit trades off one comment for another — yeah, an algorithm they wrote does that.

This article seems to allude to the idea that there is a ghost in the machine. While there is a lot of emergent behavior rather than hard-coded algorithms, it's not like the LLM has an opinion, or values grounded in some sort of psychology or personality.

They could change the system prompt, bias some training, and have completely different outcomes.
