angusturner | 8 months ago
In one view, LLMs are SOTA lossless compression algorithms where the weights don't count toward the description length. Sounds crazy, but it's true.
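A minimal sketch of that claim, assuming a toy bigram character model stands in for the LLM (the corpus, message, and helper names here are made up for illustration): sender and receiver hold identical copies of the model, so its parameters are never transmitted and only the entropy-coded bits count toward the description length.

    import math
    from collections import Counter

    # Shared side information: both sides trained/hold the SAME model,
    # so nothing about the model itself crosses the wire.
    corpus = "the quick brown fox jumps over the lazy dog " * 50
    ALPHABET = sorted(set(corpus))

    # Character-level bigram counts with Laplace smoothing.
    pair_counts = Counter(zip(corpus, corpus[1:]))
    ctx_counts = Counter(corpus[:-1])

    def prob(c, prev):
        """P(c | prev) under the shared bigram model, Laplace-smoothed."""
        return (pair_counts[(prev, c)] + 1) / (ctx_counts[prev] + len(ALPHABET))

    def ideal_code_length_bits(msg, start=" "):
        """Shannon code length under the model: an arithmetic coder driven
        by these probabilities would emit within ~2 bits of this total,
        losslessly."""
        bits, prev = 0.0, start
        for c in msg:
            bits += -math.log2(prob(c, prev))
            prev = c
        return bits

    msg = "the lazy dog jumps over the quick brown fox "
    print(f"raw:   {8 * len(msg)} bits")
    print(f"model: {ideal_code_length_bits(msg):.1f} bits")

Swap an LLM in for the bigram model and the per-character probabilities get sharper, so the coded total shrinks, while the description length still counts only the emitted bits, not the weights.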
swyx | 8 months ago
and his last before departing for Meta Superintelligence https://www.youtube.com/live/U-fMsbY-kHY?si=_giVEZEF2NH3lgxI...
Nevermark | 8 months ago
Compressing a comprehensive command-line reference via a model might introduce errors and drop some options.
But for many people, especially new users, looking up commands and getting examples via a model would deliver many times the value.
Lossy and lossless are fundamentally different, but so are the use cases.