azath92 | 7 months ago
The tricky part is having that act across all sites in a light and seamless way. I've been working on an HN reskin, and it's only fast/transparent/cheap enough because HN has an API (no scraping needed) and the titles are descriptive enough that you can filter on them alone, as simonw's demo does. But it's still HN-specific.
I don't know if LLMs are fast enough at the moment to do this on the fly for arbitrary sites, but steps in that direction are interesting!
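For reference, the HN API in question is the public Firebase-hosted one, which returns story metadata (including titles) as plain JSON. A minimal sketch of title-based filtering might look like this, with simple keyword matching standing in for whatever smarter filter (LLM or otherwise) you'd actually use:

```python
import json
import urllib.request

# Official Firebase-hosted Hacker News API (no scraping needed).
HN_API = "https://hacker-news.firebaseio.com/v0"


def fetch_top_titles(limit=10):
    """Fetch titles of the current top stories via the HN API."""
    with urllib.request.urlopen(f"{HN_API}/topstories.json") as resp:
        story_ids = json.load(resp)[:limit]
    titles = []
    for story_id in story_ids:
        with urllib.request.urlopen(f"{HN_API}/item/{story_id}.json") as resp:
            item = json.load(resp)
        titles.append(item.get("title", ""))
    return titles


def filter_titles(titles, keywords):
    """Keyword stand-in for the semantic/LLM filter described above:
    keep a title if it mentions any of the given keywords."""
    lowered = [k.lower() for k in keywords]
    return [t for t in titles if any(k in t.lower() for k in lowered)]
```

Because the API hands you clean titles directly, the filtering step is the only part that needs to be smart, which is what makes the HN case cheap enough to run on the fly.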
gojomo | 7 months ago
But of course local GPU processing power, and optimizations for LLM-like tools, are all advancing rapidly. And these local agents could potentially even outsource tough decisions to heavier-weight remote services. Essentially, they'd maintain/reauthor your "custom extension" themselves, using other models as necessary.
And forward-thinking sites might try to make that process easier, with special APIs/docs/recipe-interchanges for all users' agents to share their progress on popular needs.
azath92 | 7 months ago
We ended up finding a middle ground between that and simonw's no-AI-but-fast approach: using Flash for fast semantic understanding of preferences and recs, at degraded quality compared with a frontier model.
> And forward-thinking sites might try to make that process easier, with special APIs/docs/recipe-interchanges for all users' agents to share their progress on popular needs.
HN is that! Our exploration was made 1000% easier because they have an API which is "good enough" for most information.
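The Flash-vs-frontier tradeoff above (and gojomo's point about outsourcing tough decisions) generalizes to a simple routing pattern: try the fast/cheap model first, and escalate to the heavier model only when it isn't confident. A sketch, where the model callables are hypothetical stand-ins (not any real provider API) that return a `(score, confidence)` pair:

```python
def recommend(title, cheap_model, frontier_model, confidence_threshold=0.8):
    """Two-tier routing: score a title with a fast/cheap model, and
    escalate to a frontier model only when confidence is low.

    cheap_model and frontier_model are hypothetical callables taking a
    title and returning (score, confidence); the pattern is the point,
    not the API.
    """
    score, confidence = cheap_model(title)
    if confidence >= confidence_threshold:
        return score  # cheap answer was confident enough
    score, _ = frontier_model(title)  # pay for quality only when needed
    return score
```

The threshold is the knob: raise it and more titles hit the expensive model (better quality, higher cost/latency); lower it and you get closer to Flash-only behavior.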