
pentaphobe | 2 months ago

This is really neat!

If there were one of these for non-AI papers, I'd easily lose hours each day.

Totally off topic, but come to think of it, I'd love to see more feeds support anti-bubbling (show me _less_ of what I've frequently consumed)


davailan | 2 months ago

Thanks!

What topics are you interested in?

Two different directions I'm thinking of for Open Paper Digest:

- either some recommendation algorithm that figures out which topics you are interested in and serves you papers based on that. It would need a good way to get signals though; that's why I'm now bootstrapping the process with Huggingface Trending Papers (rough fetch sketch below, after these two points), but that immediately constrains the topics.

- or more search driven, where you type "I'd like to read about X" and it starts your feed

With regard to anti-bubbling: interesting thought, a "reverse" recommendation algorithm...

stym06 | 2 months ago

You could just rank the papers and show trending ones in a separate tab.

For filters, create a set of pre-defined tags and have the LLM pick the best-fitting one based on the paper's summary.

pentaphobe | 2 months ago

> what topics are you interested in?

That's just it - any list I give would probably miss the mark. I guess it all ties back to computational thinking in some way? (physics, neuroscience, rendering algorithms, medicine, linguistics, category theory)

Perhaps if recommendation algorithms could be that generalised, they would scratch most of the itch for a good anti-bubble.

But that still misses the special sauce of discovering papers/topics I didn't know I was interested in.

Libraries and stumbling into random university lectures did this very well (or newsagents, video shops, etc.): broadening rather than narrowing.

LLMs / vector spaces seem well placed to automate this kind of expansive/lateral matching, but it does seem we (or marketers) tend to build recommenders around the assumption that an individual's interests are a singularity to zero in on (and so likely train our models for the same).

Anyway - end rant - thanks again, really cool project! Clearly got me inspired :)