windowshopping | 6 hours ago
For a long time, it seemed the answer was that it doesn't. But now, using Claude Code daily, it seems it does.
ferris-booler | 5 hours ago
An enormous amount of research and engineering work (most of the work at frontier labs) is being poured into making that 'correct' modifier happen, rather than just predicting the next token from 'the internet' (the naive original training corpus). This work takes the form of improved training data (e.g. expert annotations), human-feedback finetuning (e.g. RLHF), and most recently reinforcement learning (e.g. RLVR, RL with verifiable rewards), where the model is trained to find the correct answer to a problem without token-level guidance. RL for LLMs is a very hot research area and very tricky to get right.
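To make the RLVR idea concrete, here's a toy sketch (everything in it is illustrative, not any lab's actual setup): instead of scoring every token, the model only gets a reward when its final answer passes a verifier, here exact arithmetic, and a crude bandit-style "policy" stands in for a real model.

```python
import random

def verifier(problem, answer):
    """Verifiable reward: 1 if the answer is exactly correct, else 0."""
    a, b = problem
    return 1.0 if answer == a + b else 0.0

def sample_answer(policy, problem, rng):
    """Stand-in for a model: the true sum plus a sampled offset,
    so the 'model' may answer incorrectly."""
    a, b = problem
    return a + b + rng.choice(policy["offsets"])

def train_step(policy, problem, rng):
    """Reinforce behavior that produced a verified-correct answer.
    No token-level guidance -- only the final reward is used."""
    answer = sample_answer(policy, problem, rng)
    reward = verifier(problem, answer)
    if reward > 0:
        # Crude update: bias future samples toward the offset (0)
        # that produced the correct answer.
        policy["offsets"] = [0] * 5 + policy["offsets"]
    return reward

rng = random.Random(0)
policy = {"offsets": [-1, 0, 1]}
rewards = [train_step(policy, (rng.randint(0, 9), rng.randint(0, 9)), rng)
           for _ in range(50)]
print(sum(rewards))  # correct answers become much more frequent over time
```

The point of the cartoon is the shape of the loop: sample, verify, reward, update — the verifier replaces the per-token supervision a pretraining corpus provides.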
love2read | 4 hours ago
If I were to describe this to a nontechnical person, I would say:
LLMs are big stacks of layers of "understanders" that each teach the next guy something.
Imagine you are making a large language model that has 4 layers. Each layer will talk to its immediate neighbor.
The first layer gets the bare minimum: in the LLMs of today, that's groups of letters that commonly appear together, called "tokens". This layer tries to derive a bit of meaning to pass to the next layer, such as grouping letters into words.
The next layer may be a little bit more semantic, for example interpreting that the word "hot" immediately followed by the word "dog" maps to a phrase "hot dog".
The layer after that, a bit more capable because its predecessors have already made some smaller interpretations, may now group words into bigger blobs, treating "I want a hot dog" as one combined phrase rather than a set of separate concepts.
The final layer may do something even more intelligent afterward, like realize that this is a quote in a book.
The point is that each layer tries to add a little meaning for the next layer.
I want to stress this: the layers do not actually correspond to specific concepts the way I just expressed, the point is that each layer adds a bit more "semantic meaning" for the next layer.