I think there’s a bit of parroting going around, but LLMs are predictive, and you can intuit a lot about how they behave from that fact alone. Sure, calling it “token” prediction is oversimplifying things, but stating that, by their nature, LLMs are guessing at the next most likely thing in the scenario (next data structure needing to be coded up, next step in a process, next concept to cover in a paragraph, etc.) is a very useful mental model.
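To make that mental model concrete, here’s a toy sketch of the “pick the most likely next thing” step. The vocabulary and scores are made up for illustration, not from any real model; real decoders also use sampling, temperature, etc., but greedy selection is the simplest case.

```python
import math

def softmax(scores):
    # Convert raw scores (logits) into a probability distribution.
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(logits):
    # Greedy decoding: take the single most probable continuation.
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Hypothetical logits for continuing "The cat sat on the ...".
logits = {"mat": 4.2, "roof": 2.1, "keyboard": 1.3}
print(next_token(logits))  # -> mat
```

Whether the “token” is a word piece, a line of code, or a step in a plan, the loop is the same: score the candidates, pick (or sample) one, append it, repeat.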
bt1a|5 months ago
teucris|5 months ago
Agreed - I picked certain words to be intentionally ambiguous, e.g. “most likely”, since that provides an effective intuitive grasp of what’s going on, even if it’s more complicated than that.
Uehreka|5 months ago
Of course, if someone is predisposed to incuriosity about LLMs and refuses to use them, they won’t be able to participate in that approach. But I don’t think there’s an alternative.
libraryofbabel|5 months ago
anthem2025|5 months ago
Why not apply that to computers in general and then we can all worship the magic boxes.