top | item 45427465

mvieira38 | 5 months ago

Yes, that's what it is. Kagi as a brand is LLM-optimist, so you may be fundamentally at odds with them here... If it lessens the issue for you, the sources of each item were cited properly in every example I tried, so maybe you could treat it as a fancy link aggregator.

freediver|5 months ago

> Kagi as a brand is LLM-optimist

Kagi founder here. I am personally not an LLM-optimist. The thing is that I do not think LLMs will bring us to the "Star Trek" level of useful computers (which I do see humans eventually getting to), due to LLMs' fundamentally broken auto-regressive nature. A different approach will be needed. A slight nuance, but an important one.

Kagi as a brand is building tools in service of its users, with no particular affinity towards any technology.

pseudalopex|5 months ago

You claimed reading LLM summaries will provide complete understanding. Optimistic would be a charitable description of this claim. And optimism is not limited to the most optimistic.

sally_glance|5 months ago

Another LLM-pragmatist here. I don't see why we should treat LLMs differently than any other tool in the box. Except maybe that it's currently the newest and most shiny, albeit still a bit clunky and overpriced.

freedomben|5 months ago

Fwiw, I love your approach to AI. It's been very useful to me. Quick Answers especially has been amazingly accurate; I've used it hundreds of times, if not thousands, and I routinely check the links it gives.

agiacalone|5 months ago

Happy Kagi Ultimate user here, so thank you!

NobodyNada|5 months ago

I'm about as AI-pessimist as it gets, but Kagi's use of LLMs is the most tasteful and practical I've seen. It's always completely opt-in (e.g. "append a ? to your search query if you want an AI summary", as opposed to Google's "append a swear word to your search query if you don't want one"), it's not pushy, and it's focused on summarizing and aggregating content rather than trying to make it up.

arrosenberg|5 months ago

FYI, you can append &udm=14 to Google searches to remove AI results and a bunch of the other clutter they've added.
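For anyone who wants to script this, a minimal sketch (plain Python, standard library only) of appending the parameter to an existing search URL might look like:

```python
from urllib.parse import urlencode, urlsplit, parse_qsl, urlunsplit

def add_udm14(url: str) -> str:
    """Append udm=14 (Google's 'web results only' view) to a search URL."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query)   # existing parameters, e.g. q=...
    query.append(("udm", "14"))
    return urlunsplit(parts._replace(query=urlencode(query)))

print(add_udm14("https://www.google.com/search?q=kagi+news"))
# → https://www.google.com/search?q=kagi+news&udm=14
```

Browser users can get the same effect by registering a custom search engine whose URL template already contains `udm=14`.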

meowface|5 months ago

I consider myself a major LLM optimist in many ways, but if I'm receiving a once-a-day curated news aggregation feed, I feel I'd want a human eye on it. I guess an LLM might, in theory, have fewer of the biases found in humans, but you're trading one kind of bias for another.

rwl|5 months ago

Indeed! A once per day human-curated news aggregation feed used to be called a "newspaper". You can still get them in some places, I believe.

mvieira38|5 months ago

Yeah, I agree. The entire value/fact dichotomy that the announcement rests on is a contested philosophical topic, and I lean against Kagi's side of it. It's impossible to summarize any text without imparting some sort of value judgement on it, and therefore "biasing" the text.

b112|5 months ago

Don't worry, all those news articles are of course human curated.

(I say this sarcastically and unhappily)

leephillips|5 months ago

Hard pass then. I’m a happy Kagi search subscriber, but I certainly don’t want more AI slop in my life.

I use RSS with newsboat and I get mainstream news by visiting individual sites (nytimes.com, etc.) and using the Newshound aggregator. Also, of course, HN with https://hn-ai.org/

cyberax|5 months ago

You can also read regular newspapers as RSS feeds! NYTimes and Seattle Times have official RSS feeds, and with some scripting you can also fetch the full article contents.
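In practice you would fetch the feed URL (e.g. with urllib) and then retrieve each linked article separately. As a minimal sketch of the feed-parsing half, using only the standard library (the feed below is made up for illustration; real newspaper feeds have the same `<item>` structure):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for a newspaper's RSS 2.0 feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Paper</title>
    <item>
      <title>First headline</title>
      <link>https://example.com/articles/1</link>
    </item>
    <item>
      <title>Second headline</title>
      <link>https://example.com/articles/2</link>
    </item>
  </channel>
</rss>"""

def headlines(feed_xml: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in headlines(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

Fetching the article bodies behind those links is the part that needs per-site scraping (or a readability-style extractor), since most feeds only carry a summary.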