ACCount36 | 6 months ago
And if you find a way to compress text that isn't insanely computationally expensive, and that leaves the compressed text still usable by LLMs for further compression - i.e. usable in training/inference? You would, basically, have invented a better tokenizer.
A lot of people in the industry are itching for a better tokenizer, so feel free to try.
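The "tokenizer as compressor" framing is concrete: byte-pair encoding, the scheme behind most current tokenizers, literally is a greedy compression algorithm that repeatedly merges the most frequent adjacent pair into a new token. A toy sketch (not any production implementation; the text and merge count are made up for illustration):

```python
from collections import Counter

def most_frequent_pair(seq):
    """Return the most common adjacent pair in the sequence, or None."""
    pairs = Counter(zip(seq, seq[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(seq, pair, new_token):
    """Replace every non-overlapping occurrence of `pair` with `new_token`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def bpe_compress(text, num_merges):
    """Greedy byte-pair encoding: each merge shortens the token sequence."""
    seq = list(text.encode("utf-8"))
    next_token = 256  # new token ids start above the byte range
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(seq)
        if pair is None:
            break
        seq = merge_pair(seq, pair, next_token)
        merges.append((pair, next_token))
        next_token += 1
    return seq, merges

text = "the cat sat on the mat, the cat sat on the hat"
tokens, merges = bpe_compress(text, 10)
print(len(text.encode("utf-8")), "bytes ->", len(tokens), "tokens")
```

The catch the comment points at: a stronger compressor (say, an entropy coder) would shrink the sequence further, but its output tends to look like noise, and noise-like inputs are exactly what LLMs struggle to learn from. BPE survives because its merged tokens still carry learnable structure.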