top | item 45675879

sabareesh | 4 months ago

It might be that our current tokenization is inefficient compared to how well the image pipeline does. Language already does a lot of compression, but there might be an even better way to represent it in latent space.

ACCount37 | 4 months ago

People in the industry know that tokenizers suck and there's room to do better. But actually doing it better? At scale? Now that's hard.

typpilol | 4 months ago

It will require like 20x the compute

CuriouslyC | 4 months ago

Image models use "larger" tokens. You can get this effect with text tokens if you use a larger token dictionary and generate common n-gram tokens, but the current LLM architecture isn't friendly to large output distributions.
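A minimal sketch of what "a larger token dictionary with common n-gram tokens" could look like: a greedy BPE-style merge loop that repeatedly promotes the most frequent adjacent token pair to a new vocabulary entry. The function name and the toy corpus are my own illustration, not any real tokenizer's API.

```python
from collections import Counter

def extend_vocab_with_ngrams(tokens, base_vocab, n_new):
    """Greedily add the n_new most frequent adjacent token pairs to the
    vocabulary as merged n-gram tokens (a BPE-style merge loop).
    Illustrative sketch only -- real tokenizers train merges at scale."""
    vocab = list(base_vocab)
    toks = list(tokens)
    for _ in range(n_new):
        pairs = Counter(zip(toks, toks[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged = a + b
        vocab.append(merged)
        # Re-tokenize: collapse each occurrence of the pair into one token.
        out, i = [], 0
        while i < len(toks):
            if i + 1 < len(toks) and (toks[i], toks[i + 1]) == (a, b):
                out.append(merged)
                i += 2
            else:
                out.append(toks[i])
                i += 1
        toks = out
    return vocab, toks
```

Two merges turn the 9-character stream "aabaabaab" into three "aab" tokens, a 3x shorter sequence, at the cost of a bigger vocabulary, which is exactly where the large-output-distribution problem bites.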

yorwba | 4 months ago

You don't have to use the same token dictionary for input and output. There are things like simultaneously predicting multiple tokens ahead, used as an auxiliary loss and for speculative decoding, where the output is larger than the input; similarly, you could have a model where each input token combines multiple output tokens. You would still need one forward pass per output token during autoregressive generation, but prefill would require fewer passes and the KV cache would be smaller too, so it could still produce a decent speedup.
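The prefill and KV-cache arithmetic in that last sentence can be made concrete with a toy cost model. The pack factor, the function, and the assumption that generated tokens get re-packed into input tokens as they accumulate are all my own simplifications, not from any paper:

```python
import math

def pass_and_cache_counts(prompt_len, gen_len, pack=1):
    """Toy cost model: each input token packs `pack` output tokens.
    Decoding still costs one forward pass per generated token, but
    prefill positions and the prompt's share of the KV cache shrink
    roughly pack-fold. (Assumes generated tokens are re-packed into
    input tokens as they accumulate -- an assumption, not a design.)"""
    prefill_positions = math.ceil(prompt_len / pack)
    decode_passes = gen_len  # unchanged: still autoregressive
    kv_entries = prefill_positions + math.ceil(gen_len / pack)
    return prefill_positions, decode_passes, kv_entries
```

With a 4096-token prompt and 512 generated tokens, pack=4 cuts prefill positions from 4096 to 1024 and KV entries from 4608 to 1152, while decode passes stay at 512, matching the claim that the win is in prefill and cache size, not decode latency.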

But in the DeepSeek-OCR paper, compressing more text into the same number of visual input tokens leads to progressively worse output precision, so it's not a free lunch but a speed-quality tradeoff, and more fine-grained KV-cache compression methods might deliver better speedups without degrading the output as much.

mark_l_watson | 4 months ago

Interesting idea! Haven’t heard that before.