toomuchtodo|1 month ago
Let https://github.com/plibither8/refined-hacker-news be your inspiration. Put out the tip jar, I will tip!
(Firefox first-class citizen in this regard, please, if possible.)
Imustaskforhelp|1 month ago
This actually looks really cool! Especially the dark mode and the feature of showing which comment a reply is responding to, which is actually something I wanted as well!
Honestly, this is how Hacker News should look, haha!
It does take some time in Firefox/Zen at the start though, so it's not really instant (especially the bars shown next to each comment to indicate who it is responding to).
For some reason, Hacker News also stopped working when I installed this extension; my wifi may have glitched, and after reconnecting it's working now.
It's pretty cool, FWIW.
charles_f|1 month ago
javascript:(function() {
  // Match common AI-related terms (case-insensitive).
  function match(text) {
    return text.match(/((chat\s?g[pt]{2})|claude|llm|gen\s?ai|((^|\s)ai(\s|$))|anthropic|mistral|gemini|open\s?ai|prompt\sengineer|prompt\sinject)/i);
  }
  if (document.getElementsByClassName("comment-tree").length) {
    // Comment page: remove each matching comment row.
    Array.from(document.getElementsByClassName("comtr"))
      .filter(elt => match(elt.innerText))
      .forEach(e => e.parentNode.removeChild(e));
  } else {
    // Front page: for each matching title link, remove its title row,
    // the subtext row beneath it, and the spacer row.
    Array.from(document.getElementsByTagName("a"))
      .filter(elt => match(elt.innerText))
      .forEach(a => {
        const tr = a.parentNode.parentNode.parentNode;
        const table = tr.parentNode;
        const details = tr.nextSibling;
        const spacer = details?.nextSibling;
        table.removeChild(tr);
        if (details) table.removeChild(details);
        if (spacer) table.removeChild(spacer);
      });
  }
})();
probably_wrong|1 month ago
We've been doing Bayesian content (a.k.a. spam) filtering for over 20 years, based in no small part on Paul Graham's essay "A Plan for Spam". According to HP [1], a typical home computer at the time had a single 1.5 GHz core and 256 MB of RAM.
Using LLMs would achieve essentially the same result while requiring a couple of orders of magnitude more resources.
[1] https://www.hp.com/us-en/shop/tech-takes/specifications-pers...
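The Graham-style Bayesian filter the comment refers to can be sketched in a few lines. This is a minimal illustration under assumptions, not any poster's actual filter: the tokenizer, the toy training corpora, and the 0.9 spam threshold are all made up here, loosely following the scheme described in "A Plan for Spam".

```javascript
// Minimal sketch of Graham-style Bayesian spam filtering (toy data; a real
// filter trains on large ham/spam corpora and tracks per-corpus message counts).
function tokenize(text) {
  return text.toLowerCase().match(/[a-z0-9'$-]+/g) || [];
}

// Count token occurrences across a corpus of messages.
function counts(messages) {
  const c = new Map();
  for (const msg of messages)
    for (const tok of tokenize(msg)) c.set(tok, (c.get(tok) || 0) + 1);
  return c;
}

// Per-token spam probability: clamp to (0.01, 0.99), use 0.4 for rare tokens,
// and double-weight ham counts, as the essay suggests.
function spamProb(tok, good, bad, nGood, nBad) {
  const g = 2 * (good.get(tok) || 0);
  const b = bad.get(tok) || 0;
  if (g + b < 1) return 0.4;
  const p = Math.min(1, b / nBad) / (Math.min(1, g / nGood) + Math.min(1, b / nBad));
  return Math.min(0.99, Math.max(0.01, p));
}

// Naive Bayes combination of the per-token probabilities for one message.
function classify(text, good, bad, nGood, nBad) {
  const probs = tokenize(text).map(t => spamProb(t, good, bad, nGood, nBad));
  const prod = probs.reduce((a, p) => a * p, 1);
  const inv = probs.reduce((a, p) => a * (1 - p), 1);
  return prod / (prod + inv); // score near 1 => spam, near 0 => ham
}
```

For brevity this combines every token rather than only the fifteen most "interesting" ones (those farthest from 0.5), which is one of the refinements in the original essay; the overall shape of the computation is the same.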