
Jorge1o1 | 1 month ago

Well, pretty much all LLMs are based on the decoder-only version of the Transformer architecture (in fact, it's the T in GPT).

And the Transformer architecture works with embeddings, which are exactly what this article is about: the vector representations of words.
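
As a minimal sketch of that embedding step, assuming PyTorch is available (the vocabulary size and dimension here are toy values, not anything from the article):

    # Embedding lookup at the input of a decoder-only Transformer.
    import torch
    import torch.nn as nn

    vocab_size = 8   # toy vocabulary of 8 tokens
    d_model = 4      # embedding dimension (real LLMs use thousands)

    embedding = nn.Embedding(vocab_size, d_model)

    # A "sentence" as token ids; each id is looked up as a learned vector.
    token_ids = torch.tensor([3, 1, 5])
    vectors = embedding(token_ids)
    print(vectors.shape)  # torch.Size([3, 4])

Everything after that (attention, the feed-forward layers) operates on those vectors rather than on the raw tokens.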
