top | item 43608376

fire_lake | 10 months ago

Massive search overlap though - and some questions (like the golf ball puzzle) can be cached for a long time.

summerlight | 10 months ago

AFAIK ~15% of their daily queries are previously unseen, so it might not be simple to design an effective cache layer on top of that. Semantic-aware clustering of natural-language queries, and projecting them into a cacheable low-rank space, is a non-trivial problem. Of course, an LLM can solve that effectively, but then what's the point of a cache if you need an LLM just to cluster the queries...
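A minimal sketch of what such a semantic cache could look like: entries are keyed by a query embedding and looked up by cosine similarity instead of exact string match. The `embed` function below is a toy token-hashing stand-in for a real sentence-embedding model, and the 0.8 threshold is arbitrary — both are assumptions for illustration, not how any production system works.

```python
import hashlib
import math

def embed(text, dim=64):
    # Toy stand-in for a learned sentence embedding (hypothetical):
    # hash each token into a fixed-size bag-of-words vector so queries
    # sharing most tokens land near each other in cosine space.
    vec = [0.0] * dim
    for tok in text.lower().split():
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-normalized, so the dot product is cosine.
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    """Cache keyed by embedding similarity rather than exact match."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, query):
        q = embed(query)
        best, best_sim = None, self.threshold
        for emb, answer in self.entries:
            sim = cosine(q, emb)
            if sim >= best_sim:
                best, best_sim = answer, sim
        return best  # None on a cache miss

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("how many golf balls fit in a school bus", "~500,000")
# A lightly reworded query still shares most tokens, so it hits the cache;
# an unrelated query falls below the similarity threshold and misses.
print(cache.get("how many golf balls fit in a school bus?"))  # ~500,000
print(cache.get("best pizza in new york"))                     # None
```

In practice the linear scan would be replaced by an approximate nearest-neighbor index, and the embedding by a model cheap enough that the lookup stays far cheaper than the LLM call it avoids — which is exactly the tension the comment above points at.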

fire_lake | 10 months ago

Not a search engineer, but wouldn’t a cache lookup of a previous LLM result be faster than a conventional free-text search over the indexed websites? Seems like this could save money while delivering better results?