piyh | 6 months ago
Also, you CAN run local models that are as good as GPT-4 was on launch on a MacBook with 24 GB of RAM.
https://artificialanalysis.ai/?models=gpt-oss-20b%2Cgemma-3-...
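For anyone who wants to try this, here's a minimal sketch of loading a quantized ~20B model locally with the llama-cpp-python bindings. The model path and generation settings are placeholder assumptions, not a specific recommendation; a 4-bit GGUF of a 20B-class model generally fits in well under 24 GB.

```python
# Minimal sketch: run a quantized ~20B model locally via llama-cpp-python.
# The model path below is a placeholder; any ~4-bit GGUF of a 20B-class model
# (e.g. a gpt-oss-20b or gemma-3 quant) should fit comfortably in 24 GB of RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-20b-model-q4.gguf",  # placeholder path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to Metal/GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain KV caching in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```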
cornholio | 6 months ago
Conversely, as a self-hosted user you can't do the same thing: you can't bank a week of idle compute and consume it all in a single sitting, hence the much more expensive local hardware needed to reach the peak generation rate you want.
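To make the peak-rate point concrete, here's a back-of-the-envelope sketch. Every number in it (hours of use, token rate, hardware and API prices) is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope: why sizing local hardware for peak throughput is expensive
# when your usage is bursty. All numbers below are illustrative assumptions.

hours_per_week = 168
active_hours_per_week = 5          # assumption: you only chat ~5 h/week
peak_tokens_per_sec = 50           # assumption: generation speed you want while active
tokens_per_week = active_hours_per_week * 3600 * peak_tokens_per_sec

# Local: you must buy hardware fast enough for the peak rate, then it sits idle.
local_hw_cost = 3000               # assumption: machine sized for 50 tok/s
local_utilization = active_hours_per_week / hours_per_week
print(f"local utilization: {local_utilization:.1%}")  # ~3% -- idle compute can't be banked

# Hosted: a provider batches many users onto the same accelerators, so you
# effectively pay only for the tokens you consume, whenever you consume them.
api_price_per_million_tokens = 1.0  # assumption
weekly_api_cost = tokens_per_week / 1e6 * api_price_per_million_tokens
print(f"tokens/week: {tokens_per_week:,}  api cost/week: ${weekly_api_cost:.2f}")
```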
0cf8612b2e1e | 6 months ago
I assume the former has massive overhead, but maybe it is worthwhile to keep responsiveness up for everyone.