top | item 46944254

kremi | 21 days ago

> The model can be prompted to talk about competitive dynamics. It can produce text that sounds like adversarial reasoning. But the underlying knowledge is not in the training data. It’s in outcomes that were never written down.

With all the social science research and strategy books that LLMs have read, they actually know a LOT about outcomes and dynamics in adversarial situations.

The author does have a point though that LLMs can’t learn these from their human-in-the-loop reinforcement (which is too controlled or simplified to be meaningful).

Also, I suspect the _word_ models of LLMs are not inherently the problem; they are just inefficient representations of world models.

tovej | 21 days ago

LLMs have not "read" social science research and they do not "know" about the outcomes; they have been trained to replicate the exact text of social science articles.

The articles will not be mutually consistent, so the output the LLM produces will depend on which article the prompt most resembles in vector space and which numbers the RNG happens to produce on any particular prompt.

kremi | 20 days ago

« Connaître est reconnaître » ("To know is to recognize")

I don’t think essentialist explanations of how LLMs work are very helpful. They don’t give any meaningful account of the high-level nature of the pattern matching that LLMs are capable of. And they draw a dichotomous line between basic pattern matching on one side and knowledge and reasoning on the other, when the reality is much more complex than that.