item 47162896

basch | 3 days ago

It's honestly disheartening and a bit shocking how everyone has started repeating the "predict the next syllable" criticism.

The language model predicts the next syllable by FIRST arriving at a point in space that represents UNDERSTANDING of the input language. This was true all the way back in 2017, at the time of Attention Is All You Need. Google had a beautiful explainer page on how transformers worked, which I eventually found: https://research.google/blog/transformer-a-novel-neural-netw...

The example was and is simple and perfect. Take the word "bank". You can tell which bank is meant by its proximity to other words, such as "river" or "vault". You compare "bank" to every word in the sentence to decide which bank it is. Rinse, repeat. A lot. You then add all the meanings together. Language models build a frequency association of every word with every other word, then sum those associations to create an understanding of complex ideas, even if they don't understand what they are understanding and have never seen it before.
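The bank example can be sketched in a few lines. This is a toy, assuming hand-picked 4-dimensional vectors and a single round of scaled dot-product attention; real transformers use learned, high-dimensional embeddings and separate query/key/value projections:

```python
import numpy as np

# Toy 4-d embeddings, hand-picked purely for illustration; real models
# learn embeddings with hundreds or thousands of dimensions. Read the
# axes loosely as [water, money, place, misc].
emb = {
    "bank":  np.array([0.5, 0.5, 0.3, 0.0]),  # ambiguous on its own
    "river": np.array([1.0, 0.0, 0.2, 0.1]),
    "vault": np.array([0.0, 1.0, 0.2, 0.0]),
    "the":   np.array([0.0, 0.0, 0.1, 0.0]),
}

def attend(query_word, sentence):
    """One round of scaled dot-product attention: score the query word
    against every word in the sentence, softmax the scores, and return
    the weighted sum of the sentence's embeddings."""
    q = emb[query_word]
    keys = np.stack([emb[w] for w in sentence])
    scores = keys @ q / np.sqrt(len(q))              # compare to every word
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    return weights @ keys                            # summed meaning

river_bank = attend("bank", ["the", "river", "bank"])
money_bank = attend("bank", ["the", "vault", "bank"])

# Near "river", bank's water component outweighs its money component;
# near "vault", it's the reverse.
assert river_bank[0] > river_bank[1]
assert money_bank[1] > money_bank[0]
```

The "rinse, repeat" part is just running this comparison for every word against every other word, then stacking more rounds on top.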

That all happens BEFORE "autocompleting the next syllable."

The magic part of LLMs is understanding the input. Being able to use that to make an educated guess of what comes next is really a lucky side effect. The fact that you can chain that together indefinitely with some random number generator thrown in and keep saying new things is pretty nifty, but a bit of a show stealer.

What really amazes me about transformers is that they completely ignored prescriptive linguistic trees and grammar rules and let the process decode the semantic structure fluidly, on the fly. (I know Google uses encode/decode backwards from what I am saying here.) This lets people create crazy run-on sentences that break every rule of English (or your favorite language) but are still parsable as instructions.

It is really helpful to remember that transformers' origins are in language translation. They are designed to take text and apply a modification to it while keeping the meaning static. They accomplish this by first decoding meaning. The fact that they then pivoted from translation to autocomplete is a useful thing to remember when talking to them. A task a language model excels at is taking text, reducing it to meaning, and applying a template. So a good test might be "take Frankenstein and turn it into a Magic School Bus episode." Frankenstein is reduced to meaning, the Magic School Bus format is the template, and the meaning is output in the form of the template. This is a translation, although from English to English, represented as two completely different forms. Saying "find all the wild rice recipes you can, normalize their ingredients to 2 cups of broth, and create a table with ingredient ranges (min-max) for each ingredient option" is closer to a translation than it is to "autocomplete." Input -> Meaning -> Template -> Output. In my last example, the template itself is also generated from its own meaning calculation.

A lot has changed since 2017, but the interpreter being the real technical achievement still holds true imho. I am more impressed with AI's ability to parse what I am saying than I am by its output (image models notwithstanding).


qsera | 3 days ago

>represents UNDERSTANDING of the input language.

It does not have an understanding; it pattern-matches the "idea shape" of words in the "idea space" of its training data and calculates the "idea shape" that is likely to follow, considering all the "idea shape" patterns in that data.

It mimics understanding. It feels mysterious to us because we cannot imagine the mapping of a corpus of text to this "idea space".

It is quite similar to how mysterious a computer playing a movie can appear if you are not aware of the mapping of a movie to a set of pictures, pictures to pixels, and pixels to coordinates and color codes.

basch | 3 days ago

Semantics. It's an encoded position that represents meaning in a way that is useful and reusable. That is "understanding." It's a mathematical representation of grasp.

aqua_coder | 3 days ago

I am not knowledgeable about how transformers work, but what if we humans do the same thing in our minds as well? What if our feeling of "understanding" is merely the emotional response to pattern matching, as you just said?

dnautics | 3 days ago

> pattern matches the "idea shape" of words in the "idea space"

it does much more than this. the first layer has an attention mechanism over all previous tokens and spits out an activation representing some sum of all relations between the tokens. then the next layer spits out an activation representing relations of relations, and the next layer and so forth. the llm is capable of deducing a hierarchy of structural information embedded in the text.

not clear to me how this isn't "understanding".
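a sketch of that stacking, with random stand-in vectors and none of the learned machinery (real layers add learned query/key/value projections, feed-forward blocks, and residual connections), just to show layer N attending over layer N-1's relations:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x):
    """One layer: each token's new vector is a softmax-weighted sum of
    ALL tokens' vectors, weighted by pairwise similarity."""
    scores = x @ x.T / np.sqrt(x.shape[1])  # all-pairs similarity
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)       # softmax per row
    return w @ x                            # mix every token into every token

# 5 toy tokens with 8-d random embeddings standing in for learned ones
x = rng.normal(size=(5, 8))

# layer 1 relates tokens to tokens; layer 2 relates those relations to
# each other; and so on up the stack.
for _ in range(3):
    x = self_attention(x)

assert x.shape == (5, 8)  # each layer preserves shape, so they stack
```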

joquarky | 3 days ago

Even if it gets the output wrong, it always seems to provide some output that indicates that it got the input right. This is the first thing that really surprised me about this tech.

steve1977 | 3 days ago

From what I understand, it's more like "input is 1, 3, 5, 7" so "output is likely to be 9".

Understanding would be a bit too generous a term for that, I guess, but that also depends on the definition of understanding.