
rgoulter | 1 month ago

Right. I think 'constraint' is more accurate than 'agenda'. But yes, LLMs are quite inhuman, so the words used for humans don't really apply to them.

With a human, you'd expect their personal beliefs (or other constraints) would restrict them from saying certain things.

With LLM output, sure, there are constraints and such; in some cases the output is biased, or maybe even resembles belief. But it doesn't make sense to ask an LLM "why did you write that? what were you thinking?".

In terms of OP's statement that "agents do the work without worrying about interests": with humans, you get the advantage that a competent human cares that their work isn't broken, but the disadvantage that they also care about things other than the work; a human might have an opinion on the way it's implemented. With LLMs, there's just a pure focus on making the output convincing.
