int_19h|1 year ago
It's kind of crazy, really. Before LLMs, what would you reach for in a world-scale disaster? Wikipedia backups? Now a single LLM run locally would be far more effective. Imagine the local models in 5 years!

Zambyte|1 year ago
There's a lot more than just Wikipedia that gets archived, and yes, that is a far more sensible way to go about it. For one thing, the compute required to read it back is orders of magnitude less (a 15-year-old smartphone can handle it just fine). For another, you don't have to wonder how much of what you got back is hallucinated: data is either there, or it's corrupted and unreadable.

danmur|1 year ago
The processing required to run current language models with a useful amount of knowledge encoded in them is far more than I imagine would be available in a "world-scale disaster".
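The "data is either there or it's corrupted" point is the key property of archives: integrity is verifiable. A minimal sketch of how that check works in practice, using a SHA-256 digest (the function names here are illustrative, not from any particular archiving tool):

```python
import hashlib


def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so arbitrarily large archives fit in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()


def verify(path: str, expected: str) -> bool:
    """An archive either matches its recorded checksum or it fails
    loudly -- unlike an LLM, there is no in-between state where
    plausible-but-wrong content passes as genuine."""
    return sha256sum(path) == expected
```

A single flipped bit changes the digest entirely, so corruption is detected rather than silently served back as an answer.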