x1000|2 years ago

Imagine you are an LLM and all you see are tokens. Your job is not only to predict the next token in a sequence, but also to build a good embedding for each token (one where two similar words sit next to each other). Given a small enough latent space, you're probably not concerning yourself too much with the "structure inside" the tokens. But given a large enough latent space, and a large enough training corpus, you will encounter certain tokens frequently enough that you begin to see a pattern. At some point during training, you are fed:

1) An English dictionary.

2) A wiki page listing words that start with "app".

3) Other alphabetically sorted pieces of text.

4) Elementary school spelling homework.

5) Papers on glyphs, diphthongs, and other phonetic and orthographic concepts.

You begin to recognize that the tokens in these lists appear near each other in this strange context. You have hardly ever seen token 11346 ("apple") and token 99015 ("appli") this close to each other before. But you see it frequently enough that you decide to nudge these two tokens' embeddings closer to one another.

Your ability to predict the next token in a sequence has improved. You have no idea why these two tokens are close every ten millionth training example. Your word embeddings start to encode spelling information. Your word embeddings start to encode handwriting information. Your word embeddings start to encode phonic information. You've never seen or heard the actual word, "apple". But, after enough training, your embeddings contain enough information so that if you're asked, ["How do", "you", "spell", "apple"], you are confident as you proclaim ["a", "p", "p", "l", "e", "."] as the obvious answer.
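The "nudging" described above can be sketched as a toy update rule. This is not how real training works (a real model adjusts embeddings via gradients of a prediction loss, not a direct pull), and the token ids are just the ones the comment made up; it only illustrates how repeated co-occurrence can drag two embeddings together:

```python
import math
import random

random.seed(0)

DIM = 8  # toy latent space

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical token ids from the comment: 11346 -> "apple", 99015 -> "appli".
emb = {t: [random.uniform(-1, 1) for _ in range(DIM)] for t in (11346, 99015)}

before = cosine(emb[11346], emb[99015])

# Each time the two tokens co-occur, nudge each embedding a small step
# toward the other -- a crude stand-in for the gradient update a real
# language-modeling loss would produce.
LR = 0.1
for _ in range(100):  # 100 observed co-occurrences
    a, b = emb[11346], emb[99015]
    for i in range(DIM):
        delta = LR * (b[i] - a[i])
        a[i] += delta
        b[i] -= delta  # symmetric pull

after = cosine(emb[11346], emb[99015])
print(before, after)  # cosine similarity rises after repeated co-occurrence
```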


pertymcpert|2 years ago

Can you explain what people mean by an "embedding" or "embedding space"? It seems like something really abstract and...supernatural?

sriram_malhar|2 years ago

An object in the real world can be located in 3d space. You can say that one representation of that object is as a point in that space; it is embedded in a 3d embedding space.

Of course, those coordinates are not the only way in which the object can be represented, but for a certain problem context, these location coordinates are useful.

Given objects A, B, C, or rather, given their coordinates, one can tell which two are closest to each other, or find the point D that completes the parallelogram formed by A, B, and C. In fact, this lets you do analogy tests like "A:B :: C:D", using standard vector algebra.

Now, imagine each word associated with a 100-dimensional vector. You can do the same thing. Amazingly, one can ask "man:woman :: king:?" and get the answer "queen", just by treating each word as a vector and looking up the inverse mapping from vector to word. It almost feels ... intelligent!

This embedding -- each word associated with an n-dimensional vector -- is obtained while training neural nets. In fact, there are now ready-made, pre-trained embedding approaches like Word2Vec.

https://www.tensorflow.org/tutorials/text/word2vec
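The analogy arithmetic above can be sketched with hand-built toy vectors. The words, the two dimensions, and all the values below are invented purely for illustration; real Word2Vec vectors are learned and have hundreds of dimensions:

```python
import math

# Hand-built toy embeddings over two made-up dimensions:
# (royalty, femaleness). Values are invented for illustration.
emb = {
    "man":   [0.0, 0.0],
    "woman": [0.0, 1.0],
    "king":  [1.0, 0.1],
    "queen": [1.0, 1.1],
    "apple": [0.3, 0.5],
}

def nearest(vec, exclude):
    """Word whose embedding is closest (Euclidean) to vec."""
    return min((w for w in emb if w not in exclude),
               key=lambda w: math.dist(vec, emb[w]))

# man : woman :: king : ?   =>   king + (woman - man)
target = [k + (w - m) for k, w, m in zip(emb["king"], emb["woman"], emb["man"])]
answer = nearest(target, exclude={"man", "woman", "king"})
print(answer)  # queen
```

The "inverse mapping for vector to word" mentioned above is exactly the nearest-neighbor lookup: the arithmetic produces a point in the space, and you report the word whose embedding is closest to it.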

pallas_athena|2 years ago

An embedding is an n-dimensional vector (think of it as a sequence of n numbers).

During training, each token (or word) gets assigned an embedding.

Critically, _similar words get similar embeddings_. And "similar" can mean semantically or (as in the example above) syntactically similar ("apple" and "appli").

And being vectors, you can do operations on them. To give the classic example: Embedding(`king`) - Embedding(`man`) + Embedding(`woman`) ≈ Embedding(`queen`).

stormfather|2 years ago

Imagine you think of 2 numbers to describe a basketball. You give a number for weight (1), and redness (0.7). Now, a basketball can be described by those 2 numbers, (1, 0.7). That is an embedding of a basketball in 2d space. In that coordinate system a baseball would be less heavy and less red, so maybe you would embed it as (0.2, 0.2).

basketball ==> (1.0, 0.7)  # heavier, redder
baseball   ==> (0.2, 0.2)  # less heavy, less red

When an LLM (large language model) is fed a word, it transforms that word into a vector in n-dimensional space. For example:

basketball -> [0.5, 0.3, 0.6, ... , 0.9] # Here the embedding is many, many numbers

It does this because computers process numbers, not words. These numbers each represent some property of the word/concept "basketball" in a way that makes sense to the model. It learns to do this during its training, and the humans who train these models can only guess what the embeddings it learns actually represent. This is the first step of what an LLM does when it processes text.
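The two-number basketball embedding above already supports similarity comparisons. A minimal sketch, with a hypothetical third object added to show why the distances are interesting:

```python
import math

# The two hand-picked dimensions from the comment: (weight, redness).
basketball   = (1.0, 0.7)
baseball     = (0.2, 0.2)
bowling_ball = (1.0, 0.1)  # hypothetical third object: heavy but not red

def similarity(u, v):
    """Smaller Euclidean distance = more similar in this toy space."""
    return math.dist(u, v)

print(similarity(basketball, baseball))      # ~0.94
print(similarity(basketball, bowling_ball))  # ~0.6
```

In this two-dimensional space the bowling ball sits closer to the basketball than the baseball does, because weight dominates; adding more dimensions (bounciness, size, ...) is what lets real embeddings capture finer distinctions.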

wyldfire|2 years ago

I have no idea if these concepts are similar, but as a machine learning beginner, I found the concept of a "perceptron" [1] useful in understanding how networks get trained. IIRC a perceptron is activated or not by a particular input, depending on the weights of the network under training. What it means to be activated depends on that perceptron's overall function. The perceptron is like a single "cell" of the larger network, maybe like a cell in your brain.

When I read the GP description referring to "embedding" above I thought of the perceptron.

Definitely not supernatural at all. The act of making an automaton that "can perceive" feels to me like it's closer to the opposite. Taking that which might seem mystical and breaking it down into something predictable and reproducible.

[1] https://en.wikipedia.org/wiki/Perceptron
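A single perceptron is small enough to write out in full. A minimal sketch with hand-set weights (training would instead learn these weights from examples):

```python
# A perceptron: weighted sum of inputs plus a bias, then a step activation.

def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # "activated" or not

# Hand-set weights that make this perceptron compute logical AND:
# it only fires when both inputs are on.
and_weights, and_bias = [1.0, 1.0], -1.5

print(perceptron([1, 1], and_weights, and_bias))  # 1
print(perceptron([1, 0], and_weights, and_bias))  # 0
print(perceptron([0, 0], and_weights, and_bias))  # 0
```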

rrrrrrrrrrrryan|2 years ago

> you are confident as you proclaim ["a", "p", "p", "l", "e", "."] as the obvious answer.

Is it possible for the current generation of LLMs to assign confidence intervals to their responses?

That's my main qualm with ChatGPT so far: sometimes it will give you an answer, but it will be confidently incorrect.

terramex|2 years ago

Yes, but it has some issues in the latest models.

> GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, after the post-training process, the calibration is reduced (Figure 8).

pages 10-11: https://cdn.openai.com/papers/gpt-4.pdf
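"Calibrated" in the quote means the model's stated confidence matches its empirical accuracy. A toy sketch of how you would check that (all the numbers below are invented):

```python
# Toy calibration check: compare mean predicted confidence to accuracy.
# (predicted confidence, was the answer actually correct?) -- invented data.
predictions = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

mean_confidence = sum(c for c, _ in predictions) / len(predictions)
accuracy = sum(ok for _, ok in predictions) / len(predictions)

# A large gap between the two signals miscalibration -- here the model
# claims 75% confidence on average but is only right 62.5% of the time,
# i.e. it is overconfident, as the quote describes for the post-trained model.
print(mean_confidence, accuracy)
```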

askiiart|2 years ago

I don't know exactly how it works, but using GPT-3 via https://platform.openai.com/playground/, you can have it show a probability for each token, given all the previous text. That could act as a rough confidence score.

Take this with a grain of salt though, I'm far from an expert, and it's been a while since I've played around with that feature.
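The per-token probabilities mentioned above come from a softmax over the model's raw output scores (logits). A minimal sketch with invented logits and a tiny made-up vocabulary, showing how the chosen token's probability can serve as a per-token confidence score:

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

vocab = ["apple", "appli", "banana"]   # tiny made-up vocabulary
logits = [4.0, 1.0, 0.5]               # hypothetical scores for the next token

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])
print(vocab[best], round(probs[best], 2))  # apple 0.93
```

A high probability only means the model finds the token likely given its training, not that the resulting statement is true, which is why calibration (as discussed above) matters separately.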

harpiaharpyja|2 years ago

Not an expert myself, but I imagine that generating output that expresses confidence would be distinct from any measure of confidence within the inner workings of GPT itself.

photochemsyn|2 years ago

If it's learning from human behavior, this is nothing new. Our society of late has been rewarding confidence over questioning and that's likely reflected in the ChatGPT training corpus.