(no title)
PenguinRevolver | 2 years ago
It might or might not be reasonable speeds, but I would argue that it could avoid "sunk cost irony": deciding, at any point, that ChatGPT would have sufficed for your task. It's rare, but it can happen.
If you want to take this silly logic further, you can theoretically run any-sized model on any computer. You could even attempt this dumb idea on a machine running Windows 95. I don't care how long it takes; if it takes seven and a half million years to produce 42 tokens, I would still call it a success!
pocketarc | 2 years ago
> thousands for RAM
I wonder if your perspective might be a little off - you can get 64GB of DDR4 RAM for ~$100; it's really not a big deal these days.
It's a big deal on a Mac, of course, where 64GB means a kitted-out high-end model that costs thousands, but RAM itself really is that cheap.
PenguinRevolver | 2 years ago