(no title)
throw310822 | 4 days ago
Makes sense, but at the same time: subjectively, an LLM is always predicting tokens. Otherwise it's just frozen.
Trasmatta | 4 days ago
(Some might argue that's basically the human experience anyway, from the Buddhist non-self perspective: you're constantly changing and being reified in each moment; it's not actually continuous.)
throw310822 | 4 days ago
My mental image, though, is that LLMs do have an internal state that is longer-lived than a single token prediction. The prompt determines it entirely, but adding tokens to the prompt only modifies it slightly, so in effect it's a continuously evolving "mental state" influenced by a feedback loop that (unfortunately) has to pass through language.
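For what it's worth, one concrete reading of that image is the key/value cache in transformer inference: it's computed entirely from the prompt, and each appended token only extends it by one position rather than rebuilding it. A minimal sketch, assuming the Hugging Face transformers library and the public "gpt2" checkpoint (illustrative only; any causal LM with caching behaves the same way):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Sketch: the "internal state" as the per-layer key/value cache.
    # Assumes Hugging Face transformers and the "gpt2" checkpoint.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    prompt = tok("The cat sat on the", return_tensors="pt").input_ids

    with torch.no_grad():
        # First pass builds the state from the whole prompt.
        out = model(prompt, use_cache=True)
        state = out.past_key_values      # one (key, value) pair per layer
        before = state[0][0].shape[-2]   # cached positions so far

        # Appending one token does not rebuild that state; it only
        # extends the cache by a single position.
        next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
        out2 = model(next_id, past_key_values=state, use_cache=True)
        after = out2.past_key_values[0][0].shape[-2]

    print(before, after)  # e.g. 5 6: the "mental state" grew by one step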