kid64 | 3 months ago

It sounds like you already get that LLMs are just "next word" predictors. The piece you may be missing is that, behind the scenes, your prompt gets "rephrased" in a way that makes generating the response a simple matter of predicting the next word repeatedly. So it's not necessary for the LLM to "understand" your prompt the way you're imagining; the apparent understanding is an illusion created by extremely good next-word prediction.
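One concrete form that "rephrasing" takes in chat-tuned models is a chat template: the user's message is wrapped in role markers before the model ever sees it, so answering reduces to continuing the text after the assistant marker. A minimal sketch in Python (the tag strings here are made up for illustration; every model family defines its own):

    # Illustrative sketch of a chat template: the user's question is
    # wrapped in role markers so that "answering" becomes plain text
    # continuation. The <|user|> / <|assistant|> tags are invented for
    # this example; real templates vary by model.
    def apply_chat_template(user_message: str) -> str:
        return f"<|user|>\n{user_message}\n<|assistant|>\n"

    # The model then just predicts the next word, over and over,
    # starting from the end of this string.
    print(apply_chat_template("Who is the queen of Spain?"))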

beardyw | 3 months ago

In my simple mind, "Who is the queen of Spain?" becomes "The queen of Spain is ...".
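For the curious, here is a minimal Python sketch of that loop, assuming the Hugging Face transformers library with GPT-2 as a stand-in model and greedy decoding for simplicity (real chatbots use much larger models, chat templates, and sampling rather than greedy picks):

    # Minimal sketch: a question recast as a statement to complete, then
    # finished by repeatedly predicting the single most likely next token.
    # GPT-2 and greedy decoding are illustrative choices, not how any
    # particular chatbot actually works.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The queen of Spain is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):                   # ten rounds of next-word prediction
            logits = model(ids).logits        # a score for every token in the vocabulary
            next_id = logits[0, -1].argmax()  # greedy: take the most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))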