(no title)
rembal|8 days ago
It's not certain this is the future: the obvious trade-off is lack of flexibility, not only when a new model comes out, but also with varying demand in the data centers - one day people want more LLM queries, another day more diffusion queries.
Aaand, this blocks the holy grail of self-improving models, beyond in-context learning.
A realistic use case? More efficient vision-based drone targeting in Ukraine/Taiwan/whatever's next. That's the place where energy efficiency, processing speed, and also weight are most critical. Not sure how heavy ASICs are, but they should be proportional to the model size (see the rough estimate below).
I've heard many complaints about onboard AI 'not being there yet', and this may change that.
Not listing the Middle East, as there is no serious jamming problem there.
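A rough back-of-envelope sketch of that "proportional to model size" claim, in Python. The SRAM bit-cell area, reticle limit, and per-die weight below are illustrative assumptions rather than vendor data; the only point is that storing all weights on-die scales linearly with parameter count.

    import math

    # Back-of-envelope: how die area (and hence weight) of a weights-in-silicon
    # ASIC scales with model size. Every constant here is an illustrative
    # assumption, not a vendor figure.
    SRAM_UM2_PER_BIT = 0.03        # assumed SRAM bit-cell area, um^2 (ballpark, modern node)
    RETICLE_LIMIT_MM2 = 850.0      # assumed maximum size of a single die
    GRAMS_PER_PACKAGED_DIE = 20.0  # assumed weight of one packaged die, a pure guess

    def asic_estimate(params: float, bits_per_weight: int) -> dict:
        """Rough area/weight scaling for storing all weights on-die."""
        total_bits = params * bits_per_weight
        area_mm2 = total_bits * SRAM_UM2_PER_BIT / 1e6          # um^2 -> mm^2
        dies = max(1, math.ceil(area_mm2 / RETICLE_LIMIT_MM2))  # split across reticle-limited dies
        return {
            "area_mm2": round(area_mm2),
            "dies": dies,
            "approx_weight_g": dies * GRAMS_PER_PACKAGED_DIE,
        }

    if __name__ == "__main__":
        for params in (1e9, 7e9, 70e9):
            print(f"{params / 1e9:.0f}B params @ 8-bit -> {asic_estimate(params, 8)}")

Under these assumptions the silicon area, and so the weight, grows linearly with parameter count, which is all the claim needs.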
iugtmkbdfil834|8 days ago
To your point, it's neat tech, but the limitations are obvious, since 'printing' only one LLM ensures further concentration of power. In other words, history repeats itself.