
Harmon758 | 5 months ago

Just for concrete confirmation that LLM(s) are being used, there's an open issue on the GitHub repository, on hallucinations with made up information, where a Kagi employee specifically mentions "an LLM hallucination problem":

https://github.com/kagisearch/kite-public/issues/97

There's also a line at the bottom of the about page at https://kite.kagi.com/about that says "Summaries may contain errors. Please verify important information."


jazzyjackson | 5 months ago

Love how it only took 8 years to go from "Fake News!" to "News May Be Fake"

eitland | 5 months ago

FWIW, as someone who has chosen to pay for Kagi for three years now:

- I agree fake news is a real problem

- I pay for Kagi because I get much more precise results[1]

- They have a public feedback forum, and I think every time I have pointed out a problem they have come back with an answer, and most of the time also a fix

- When Kagi introduced AI summaries in search, they made it opt-in, and unlike every other AI summary provider I had seen at that point, they have always pointed to the sources. The AI might still hallucinate[2], but if it does, I am confident that if I pointed it out to them my bug report would be looked into and I would get a good answer and probably even a fix.

[1]: I hear others say they get more precise Google results, and if so, more power to them. I have used Google enthusiastically since 2005: as the only real option until 2012, as a fallback for DDG from somewhere between 2012 and 2022, and, since I started using Kagi in 2022, basically only when I am on other people's devices or to prove a point.

[2]: Haven't seen much of that, but that might be because of the kind of questions I ask and the fact that I mostly use ordinary search.

pjc50 | 5 months ago

There's too much demand for fake news, plenty of subsidy for it, and it's far easier to make.

Non-fake news is going to be restricted to pay services like Bloomberg terminals.

EasyMark | 5 months ago

At least we're going from Fake News from certain MAGA leaning sources at 75-90% fake to 99% actual news and 1% hallucinations?

MarcelOlsz | 5 months ago

Man am I tired of this stuff.

malfist | 5 months ago

My LLM Investor Agent says we must keep investing in AI, tulips will always be worth more at a later date

bbor | 5 months ago

To take a moment to be a hopeless Stan for one of my all-time favorite companies: I don't think the summary above yours is fair, and I see why they don't center the summary part of it.

Unlike the disastrous Apple feature from earlier this year (which is still available, somehow!), this isn't trying to transform individual articles. Rather, it's focused on capturing broader trends and giving just enough info to decide whether to click into any of the source articles. That's a much smaller, more achievable scope than Apple's feature, and as always, open source helps work like this a ton.

I, for one, like it! I'll try it out. Seems better than my current sources for a quick list of daily links, that's for sure (namely Reddit News, Apple News, Bluesky in general, and a few industry newsletters).

johnnyanmac | 5 months ago

>giving just enough info to decide whether to click into any of the source articles.

If that info is hallucinated, then it's worse than useless. Clickbait still attempts to represent the article; a hallucination isn't guaranteed to do that.

Why not have someone properly vet interesting and curious news and articles, and provide traffic to their sites? In this age of sincerity, proper citation is more vital than ever.