top | item 46544415


notrealyme123 | 1 month ago

My educated guess: Not more than any other LLM. The text-latent encoder and latent-text decoder just find am more efficient representation of the tokens, but it's more of a compression instead of turning words/sentences into abstract concepts. There will be residuals of the input language be in there.
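The compression-vs-abstraction point can be sketched with a toy linear bottleneck; everything here (sizes, random weights standing in for learned projections) is hypothetical, purely to illustrate why a lossy low-dimensional latent still carries traces of the input tokens:

```python
import numpy as np

# Hypothetical toy "text-latent" autoencoder: tokens are embedded, squeezed
# through a lower-dimensional latent, and projected back. The latent is a
# lossy linear mixture of the input embeddings, i.e. compression, not an
# abstract concept space, so residue of the input remains recoverable.

rng = np.random.default_rng(0)

vocab, d_model, d_latent = 50, 16, 4          # assumed toy sizes
emb = rng.normal(size=(vocab, d_model))        # token embedding table

W_enc = rng.normal(size=(d_model, d_latent)) / np.sqrt(d_model)
W_dec = W_enc.T                                # tied weights for the sketch

tokens = np.array([3, 17, 42])                 # a toy "sentence" of token ids
x = emb[tokens]                                # (3, d_model) token vectors
z = x @ W_enc                                  # (3, d_latent) compressed latents
x_hat = z @ W_dec                              # lossy reconstruction

# Dimensionality strictly drops, so information is lost (compression),
# but what survives is still a direct linear function of the input tokens.
print("latent shape:", z.shape)
print("reconstruction error:", np.linalg.norm(x - x_hat))
```

Because the latent is just a projection of the token embeddings, nearest-neighbor probes on `z` would still separate inputs by surface language, which is the "residuals of the input language" being described.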
