maxloh | 3 days ago
To really have local LLMs become "good enough for 99% of use cases," we are essentially dependent on Google's blessing to provide APIs for our local models. I don't think they have any interest in doing so.
mark_l_watson | 3 days ago
EDIT: I have also experimented with creating a local search index for the common tech websites I get information from. It is a pain in the ass to maintain, but it offers very low latency when adding search context for local model use. This is most useful with very small, fast local models, so the whole experience stays low latency.
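A minimal sketch of what such a local index could look like, using a toy in-memory inverted index with simple AND semantics. The URLs and page text are illustrative placeholders, not real data; a real setup would more likely use something like SQLite's FTS extension or a proper search library.

```python
import re
from collections import defaultdict

# Toy inverted index: term -> set of URLs containing that term.
# The documents below are placeholders standing in for crawled pages.
index = defaultdict(set)
docs = {
    "https://example.com/rust-async": "Notes on async runtimes in Rust",
    "https://example.com/sqlite-fts": "Using SQLite FTS5 for local search",
}

def tokenize(text):
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

for url, body in docs.items():
    for term in tokenize(body):
        index[term].add(url)

def search(query):
    """Return URLs containing every query term (simple AND semantics)."""
    terms = tokenize(query)
    if not terms:
        return []
    hits = set.intersection(*(index.get(t, set()) for t in terms))
    return sorted(hits)

print(search("sqlite search"))  # only the page containing both terms
```

The retrieved page text would then be prepended to the local model's prompt as search context; because lookup is a couple of dictionary operations, it adds essentially no latency on top of the model itself.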
barrkel | 3 days ago
The set of things it needs approximate knowledge over grows slowly but noticeably over time.
varispeed | 3 days ago
So the flow is: you type a search query to Gemini, Gemini runs a Google search, scans a few results, visits the selected websites, checks whether there is anything relevant, and then composes it into something structured, readable, and easy to ingest.
It's almost like going back to 90s browsing through forums, but this time Gemini is generating the equivalent of forum posts "on the fly".
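The flow above can be sketched with stub functions; `web_search`, `fetch_page`, and `summarize` are hypothetical placeholders standing in for real search and model calls, not any actual Gemini API.

```python
# Sketch of the search -> fetch -> compose flow described above.
# All three helpers are illustrative stubs, not real APIs.

def web_search(query):
    # Placeholder: would call a real search engine and return result URLs.
    return ["https://example.com/a", "https://example.com/b"]

def fetch_page(url):
    # Placeholder: would download the page and extract its main text.
    return f"text of {url}"

def summarize(query, pages):
    # Placeholder: would ask the model to compose a structured answer.
    return f"Answer to {query!r} based on {len(pages)} pages"

def answer(query, max_results=3):
    """Search, fetch a few results, drop empty ones, compose a response."""
    urls = web_search(query)[:max_results]
    pages = [fetch_page(u) for u in urls]
    relevant = [p for p in pages if p]  # keep only non-empty extractions
    return summarize(query, relevant)

print(answer("local llm search"))
```

Each stage maps to one step in the comment: search, scan a few results, check relevance, then compose the "forum post" on the fly.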