My educated guess:
Not more than any other LLM.
The text-to-latent encoder and latent-to-text decoder just find a more efficient representation of the tokens; it's compression rather than turning words and sentences into abstract concepts.
Residuals of the input language will still be in there.
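A toy numpy sketch of that intuition (everything here is made up for illustration, not any real model): in the lossless case, if the encoder is an invertible linear map, the latent is just a re-coding of the token embeddings, so any language-identifying signal survives and a simple probe can still find it.

```python
import numpy as np

# Hypothetical setup: an invertible linear "encoder" over token embeddings.
# Nothing is discarded, so the latent is a re-representation, not an abstraction.
rng = np.random.default_rng(0)
d = 16                                              # embedding / latent width
X = rng.standard_normal((5, d))                     # 5 fake token embeddings
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # invertible encoder map

latents = X @ A                                     # "latent" representation
w = rng.standard_normal(d)                          # a linear probe, e.g. "is German?"

# The same probe, transported into latent space, gives identical scores,
# i.e. the input-language signal is fully recoverable from the latents.
w_latent = np.linalg.inv(A) @ w
print(np.allclose(X @ w, latents @ w_latent))
```

A real compressive encoder (latent narrower than the embedding) would throw some information away, but there is no reason to expect it to throw away specifically the language-identity directions.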