item 43914408


clysm | 9 months ago

No, it’s not a threshold. It’s just how the tech works.

It’s a next letter guesser. Put in a different set of letters to start, and it’ll guess the next letters differently.
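To make the "next letter guesser" point concrete, here's a toy sketch: a bigram character model trained on a made-up six-word corpus, guessing each next letter from the previous one. Real LLMs condition on the whole context, not just the last character, so this is only an illustration of the sampling loop, not of how they actually work.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; counts[prev][nxt] = how often `nxt` followed `prev`.
corpus = "the cat sat on the mat"
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def guess_next(ch):
    # Greedily pick the most frequent follower seen in training.
    return counts[ch].most_common(1)[0][0]

# A different starting letter leads to a different continuation.
out = "c"
for _ in range(5):
    out += guess_next(out[-1])
print(out)
```

Start it with "s" instead of "c" and the chain of guesses comes out differently, which is the point being made above.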


Trasmatta | 9 months ago

I think we need to start moving away from this explanation, because the truth is more complex. Anthropic's own research showed that Claude does actually "plan ahead", beyond the next token.

https://www.anthropic.com/research/tracing-thoughts-language...

> Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.

ceh123 | 9 months ago

I'm not sure this really shows the truth is more complex. It is still doing next-token prediction, but its prediction method is sufficiently sophisticated, in terms of conditional probabilities, that it recognizes that if you need to rhyme, you need to reach some future state, which in turn shifts the probabilities of the intermediate states.

At least in my view it's still inherently a next-token predictor, just one with a really good grasp of conditional probabilities.
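The "future state impacts intermediate probabilities" idea can be shown with toy numbers: condition the choice of the next word on eventually landing on a rhyme, by marginalizing over the endings each choice makes reachable. All probabilities here are invented for illustration.

```python
# P(first word) and P(second word | first word), both made up.
P_first = {"grab": 0.5, "hold": 0.5}
P_second = {
    "grab": {"it": 0.9, "now": 0.1},
    "hold": {"it": 0.2, "tight": 0.8},
}
rhymes = {"it"}  # the line must end on a word that rhymes

# P(first | ending rhymes) ∝ P(first) * sum over rhymes of P(rhyme | first)
joint = {w: p * sum(P_second[w].get(r, 0.0) for r in rhymes)
         for w, p in P_first.items()}
total = sum(joint.values())
posterior = {w: j / total for w, j in joint.items()}
print(posterior)
```

Unconditionally the two first words are equally likely; conditioned on the rhyme constraint, "grab" dominates. That's next-token prediction whose probabilities are shaped by a required future state.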

dontlikeyoueith | 9 months ago

> Anthropic's own research showed that Claude does actually "plan ahead", beyond the next token.

For a very vacuous sense of "plan ahead", sure.

By that logic, a basic Markov-chain with beam search plans ahead too.
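For the sake of the comparison, here's what that baseline looks like: beam search over a hand-written bigram Markov chain. The beam weighs several tokens ahead before committing to any of them, which is the vacuous sense of "planning ahead" being referred to. The transition table is invented.

```python
import math

# P(next | current) for a tiny made-up Markov chain.
trans = {
    "a": {"b": 0.9, "c": 0.1},
    "b": {"c": 0.2, "d": 0.8},
    "c": {"d": 1.0},
    "d": {"a": 1.0},
}

def beam_search(start, steps, width=2):
    beams = [([start], 0.0)]  # (sequence, log-probability)
    for _ in range(steps):
        candidates = []
        for seq, lp in beams:
            for nxt, p in trans[seq[-1]].items():
                candidates.append((seq + [nxt], lp + math.log(p)))
        # Keep only the `width` most probable partial sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return beams[0][0]

print("".join(beam_search("a", 3)))
```

Note the greedy one-step choice after "a" would be "b" then "d"; the beam also ends up there, but only after scoring whole multi-step continuations, i.e. "looking ahead" without any real planning.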

cmiles74 | 9 months ago

It reads to me like they compare the outputs of different prompts and somehow reach the conclusion that Claude is generating more than one token and "planning" ahead. They leave out how this works.

My guess is that they have Claude generate a set of candidate outputs, then Claude chooses the "best" candidate and returns that. I agree this improves the usefulness of the output, but I don't think it's fundamentally different from "guessing the next token".
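That guessed mechanism, best-of-n selection, can be sketched like so. The sampler and scorer below are stand-ins I made up for illustration; nothing here is Anthropic's actual mechanism.

```python
import random

random.seed(0)

VOCAB = ["the", "cat", "sat", "mat", "on"]

def sample_continuation(prompt, length=4):
    # Stand-in for sampling a continuation token by token from a model.
    return prompt + random.choices(VOCAB, k=length)

def score(tokens):
    # Stand-in scorer: prefer candidates with more distinct tokens.
    return len(set(tokens))

def best_of_n(prompt, n=5):
    # Generate n candidates, each by ordinary next-token guessing,
    # then return the highest-scoring one.
    candidates = [sample_continuation(prompt) for _ in range(n)]
    return max(candidates, key=score)

result = best_of_n(["the"])
print(result)
```

Each candidate is still produced one token at a time; only the final pick among finished candidates looks at anything "ahead", which is why this wouldn't be a fundamentally different thing from next-token guessing.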

UPDATE: I read the paper and I was being overly generous. It's still just guessing the next token as it always has. This "multi-hop reasoning" is really just another way of talking about the relationships between tokens.