
two_in_one | 2 years ago

> LLMs translate textual descriptions and are part of GenAI compute.

You are talking about embeddings, which is a different thing. The model generates a vector representation (embedding) of the given prompt, and that embedding is then used to condition the generator's output.
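To make the idea concrete, here's a toy sketch (not any real encoder or generator, all names made up): a deterministic stand-in for a learned text encoder produces a fixed-size vector, and that vector is injected at every "denoising" step, which is roughly how conditioning steers a diffusion model.

```python
import hashlib

def embed(prompt: str, dim: int = 4) -> list[float]:
    """Toy stand-in for a learned text encoder (e.g. CLIP's):
    deterministically maps a prompt to a fixed-size float vector."""
    digest = hashlib.sha256(prompt.lower().encode()).digest()
    return [b / 255.0 - 0.5 for b in digest[:dim]]

def conditioned_step(state: list[float], embedding: list[float]) -> list[float]:
    """Toy generation step: the embedding is mixed in at every step,
    so the same prompt keeps steering the output toward the same place."""
    return [0.5 * s + 0.5 * e for s, e in zip(state, embedding)]

state = [0.0] * 4
e = embed("a cat on a mat")
for _ in range(3):
    state = conditioned_step(state, e)
```

The key property is determinism on the encoder side: the same prompt always yields the same embedding, so the generator is conditioned the same way.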

An LLM, in its basic form, is a text model that predicts the next word. After fine-tuning it can do more, like answer questions or follow instructions.
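A minimal sketch of "predict the next word" (a bigram counter standing in for a real language model, which does the same thing with a neural network over tokens):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, which word follows it and how often
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the most frequent continuation (greedy decoding)."""
    return following[word].most_common(1)[0][0]

predict_next("the")  # -> "cat" ("cat" follows "the" twice in the corpus)
```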

So the models used by generators aren't exactly LLMs. With one exception that I know of: ChatGPT processes the prompt before sending it to the DALL-E 3 generator, which then makes an embedding of it.
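That pipeline can be sketched with stubs (every function here is a made-up stand-in, not the real API): the LLM rewrites the prompt, the text encoder turns the rewritten prompt into an embedding, and the generator is conditioned on that embedding.

```python
import hashlib

def rewrite_prompt(prompt: str) -> str:
    """Stand-in for the LLM step: ChatGPT expanding/cleaning the prompt."""
    return f"A detailed illustration of {prompt}, studio lighting."

def text_encoder(prompt: str, dim: int = 4) -> list[float]:
    """Stand-in for the encoder that turns text into an embedding vector."""
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def generator(embedding: list[float]) -> str:
    """Stand-in for the diffusion generator conditioned on the embedding."""
    return f"<image conditioned on {[round(v, 2) for v in embedding]}>"

# prompt -> LLM rewrite -> embedding -> conditioned generation
image = generator(text_encoder(rewrite_prompt("a cat on a mat")))
```

Only the first step involves anything LLM-like; the encoder and generator are separate models.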
