top | item 47146609


aakresearch | 5 days ago

This is a very useful take, thank you. It really helped me adjust my mental model without "anthropomorphising" the machinery. Upvoted.

If I may, I would rephrase/expand your last sentence in a way that makes it even more useful for me personally; maybe it could help other people too. I think it is fair to say that, in the presence of hints like "Pretend you are X" or "Take a deeper look", the inference mechanism (driven by its training weights, and now influenced by those hints via the attention math) is not "satisfied" until it pulls more relevant tokens into the "working context" ("more" and "relevant" being modulated by the particular hint).
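To make the "attention math" part of that mental model concrete, here is a toy sketch of scaled dot-product attention. The 2-d vectors and token labels are entirely made up (real models use thousands of dimensions and learned embeddings); the point is only that nudging the query, as a hint might, redistributes softmax weight over the context tokens:

```python
import numpy as np

def attention_weights(query, keys):
    """Scaled dot-product attention weights: softmax(K @ q / sqrt(d))."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)
    e = np.exp(scores - scores.max())  # numerically stable softmax
    return e / e.sum()

# Hypothetical 2-d embeddings for three context tokens.
keys = np.array([
    [1.0, 0.0],   # token A
    [0.0, 1.0],   # token B
    [0.7, 0.7],   # token C
])

# A "plain" query vs. a query nudged by a hint toward the B direction.
plain_query  = np.array([1.0, 0.2])
hinted_query = np.array([0.2, 1.0])

w_plain  = attention_weights(plain_query, keys)
w_hinted = attention_weights(hinted_query, keys)

print(w_plain)   # most weight on token A
print(w_hinted)  # weight shifts toward token B
```

With the plain query, token A gets the most weight; with the hint-shifted query, the same softmax puts the most weight on token B, i.e. different tokens become "relevant" without any change to the keys themselves.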
