I think there's an important nugget here unrelated to agents: Kagi as a search engine is a higher-signal source
of information than Google's PageRank- and AdSense-funded model, primarily because Google as it is today includes a massive amount of noise and suffers from blowback/cross-contamination as more LLM-generated content pollutes the information it indexes.
> We found many, many examples of benchmark tasks where the same model using Kagi Search as a backend outperformed other search engines, simply because Kagi Search either returned the relevant Wikipedia page higher, or because the other results were not polluting the model’s context window with more irrelevant data.
> This benchmark unwittingly showed us that Kagi Search is a better backend for LLM-based search than Google/Bing because we filter out the noise that confuses other models.
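The context-pollution point in the quote is easy to sketch: with a fixed context budget, snippets from a noisy backend crowd out the relevant ones. A toy illustration, assuming a naive whitespace tokenizer and pre-ranked snippets (not Kagi's actual pipeline):

```python
def pack_context(snippets, budget_tokens):
    """Greedily pack search snippets into a fixed token budget.

    Snippets are assumed pre-ranked by relevance; a noisier backend
    spends the same budget on less relevant text. Token counting is
    a naive whitespace split, purely for illustration.
    """
    packed, used = [], 0
    for snippet in snippets:
        cost = len(snippet.split())
        if used + cost > budget_tokens:
            break  # budget exhausted; everything after is dropped
        packed.append(snippet)
        used += cost
    return packed

clean = ["wikipedia summary of topic", "primary source quote"]
noisy = ["seo listicle filler " * 10, "wikipedia summary of topic"]
print(pack_context(clean, 10))  # both relevant snippets fit
print(pack_context(noisy, 10))  # the filler blows the budget; nothing useful is packed
```

Same model, same budget: the cleaner result list is the entire advantage.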
> Primarily because Google as it is today includes a massive amount of noise and suffers from blowback/cross-contamination as more LLM-generated content pollutes the information it indexes.
I'm not convinced about this. If the strategy is "let's return wikipedia.org as the most relevant result", that's not sophisticated at all. In fact, it only worked for a very narrow subset of queries. If I search for 'top luggages for solo travel', I don't want to see Wikipedia, and I don't know how Kagi will be any better.
I tried a prompt that consistently gets Gemini to badly hallucinate, and Kagi's assistant responded correctly.
Prompt: "At a recent SINAC conference (approx Sept 2025) the presenters spoke about SINAC being underresourced and in crisis, and suggested better leveraging of and coordination with NGOs. Find the minutes of the conference, and who was advocating for better NGO interaction."
The conference was actually in Oct 2024. The approx date in parens causes Gemini to create an entirely false narrative, which includes real people quoted out of context. This happens in both Gemini regular chat and Gemini Deep Research (in which the narrative gets badly out of control).
Kagi reasonably enough answers: "I cannot find the minutes of a SINAC conference from approximately September 2025, nor any specific information about presenters advocating for better NGO coordination at such an event."
I also tried two hallucinations that I've seen in the latest Gemini and Claude. The first is the famous "how many r's in strawberry"; the other is asking about the king sacrifice in chess, where most LLMs tell you it's a very powerful move and explain how to do it (obviously nonsense, since the point of the game is to protect the king).
Kagi actually got those two correct, explaining how losing your king means losing your chess game, which is quite impressive.
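(For reference, the strawberry question is trivially checkable outside an LLM:)

```python
# Letter-frequency questions trip up LLMs because they see tokens,
# not characters; plain code has no such blind spot.
print("strawberry".count("r"))  # → 3
```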
You might be interested to know your comment caused Gemini 3 to go meta on this question. This is what I found in its thinking:
```
Evaluating Date Discrepancies
I've just uncovered a significant discrepancy: a "Hacker News" result flags a date error originating from the very format of parenthetical dates. The system appears to be misinterpreting these, leading to demonstrably false information. This warrants further investigation to understand the root cause and ensure data integrity.
Analyzing Date Ambiguity
I'm now grappling with the user's date. The Hacker News comment suggests the user's prompt might be flawed, as it contradicts the actual date of an event I've indexed. Specifically, the conference referenced occurred in October 2024, not September 2025 as the prompt implies. This leads me to question the user's understanding of the timeline or if the prompt refers to an unrelated event in September 2025. This inconsistency demands further clarification.
...
Investigating User Prompt Intent
I've just realized that the Hacker News snippet reveals this might be a specific problematic test prompt. My strategy is to focus on the October 2024 forum, a strong match, organized by Kattia Cambronero and CCT. I'll verify if there was a 2025 event, but the 2024 event fits the described "crisis" and "NGO coordination". If the user says "approx Sept 2025," they likely mean a recent event.
```
As a Kagi subscriber, I find this to be mostly useful. I'd say I do about 50% standard Kagi searches, 50% Kagi assistant searches/conversations. This new ability to change the level of "research" performed can be genuinely useful in certain contexts. That said, I probably expect to use this new "research assistant" once or twice a month.
I've already used the Research assistant half a dozen times today and am super happy with the outcomes. It does seem to be more trigger-happy about doing multiple searches based on information it found in earlier results, and I've found the resulting output to be reasonably accurate. Some models in particular seem to never want to do more than one search, and you can tell the output in those cases is often not very useful if the sources partially contradict each other or don't provide enough detail. The best model I've found for avoiding this is o3 pro, but o3 pro is very slow and expensive. If the Research assistant gets 85% of the results in half the time of o3 pro...
Same, I'm quite happy with it. I first subscribed because I was fed up with the promoted results in Google but now I find their assistant searches actually useful too.
I really enjoy the tone/style of this announcement/blog post. It doesn't feel like it's overhyping something or using "salesy" language. I wish we would see this more often from companies.
I'm struggling to find a use case for the quick assistant vs Kimi K2. Does anyone know any particular situations where the Assistant is better than the model? I am not seeing how it is different from a model. Only speaking in regards to quick.
Mostly depends on which features are most important for you. I'm a SWE, so I use their Assistant for web RAG nearly exclusively for work-related stuff and for most of my personal queries. I'm rarely using multi-modal content, mostly sticking to text. They support many providers, and notable new models are typically rolled out only a few days after release, which is always great for testing. I have a standalone subscription for a coding-agent LLM. If the above aligns with your needs, it might be a good choice.
I use the regular Kagi assistant for most of my questions. I use OpenRouter chat to talk to Gemini 2.5 Pro and GPT-5 at the same time and continue only the productive conversation. My OR account is for my coding agent and it's completely pay-as-you-go, no fixed monthly costs.
That’s what I did and I’m pretty happy with it. I just fall back to something free on the rare occasion I want an image generated (tbh, mostly emojis of my dog).
Kagi reminds me of the original search engines of yore, when I could type what I want and it would appear, and I could go on with my work/life.
As for the people who claim this will create/introduce slop, Kagi is one of the few platforms where they are actively fighting against low quality AI generated content with their community fueled "SlopStop" campaign.[0]
Not sponsored, just a fan. Looking forward to trying this out.
I used quick research and it was pretty cool. A couple of caveats to keep in mind:
1. It answers using only the crawled sites. You can't make it crawl a new page.
2. It doesn't use a page's search function automatically.
This is expected, but worth keeping in mind. I think it'd be pretty useful: you ask for recent papers on a site, the engine could use Hacker News' search function, and then Kagi would crawl the resulting pages.
I'm a little confused about what the point of these is compared to the existing features/models that Kagi already has. Are they just supposed to be a one-stop shop where I don't have to choose which model to use? When should I use the Kagi quick/research assistant instead of, e.g., Kimi?
I tried the quick assistant a bit (don't have ultimate so I can't try research), and while the writing style seems slightly different, I don't see much difference in information compared to using existing models through the general kagi assistant interface.
I want a Kagi MCP server I can use with ChatGPT or Claude.
I don’t want to use kagi ultimate (I use too many other features of ChatGPT and Claude), I just want to be able to improve the results of my AI models with kagi.
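For what it's worth, plugging a search backend into Claude Desktop via MCP generally takes the shape below. The `kagi-mcp` package name and `KAGI_API_KEY` variable are hypothetical placeholders, not a confirmed Kagi offering:

```json
{
  "mcpServers": {
    "kagi": {
      "command": "uvx",
      "args": ["kagi-mcp"],
      "env": { "KAGI_API_KEY": "<your key>" }
    }
  }
}
```

Any MCP-capable client could then call the server's search tool and fold Kagi results into its context without a Kagi Ultimate subscription.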
I use Perplexity a lot, pretty much exclusively with "deep research" on. Is this on the same level? Because Perplexity often takes more than a minute, and this is only 20 seconds.
For most queries it is around the same level. Time spent isn't always the best tell of quality, particularly when the search engine used returns little noise.
regular reminder: kagi is - above all else - a really really good search engine, and if google/etc, or even just the increasingly horrific ads-ocracy make you sad, you should definitely give it a go - the trial is here: https://kagi.com/pricing
if you like it, it's only $10/month, which I regrettably spend on coffee some days.
The fact that people applaud Kagi taking the money they gave for search to invest it in bullshit AI products and spit on Google's AI search at the same time tells you everything you need to know about HackerNews.
Search is AI now, so I don’t get what your argument is.
Since 2019, Google and Bing have both used BERT-style encoder-only architectures for search.
I’ve been using Kagi ki (now research assistant) for months and it is a fantastic product that genuinely improves the search experience.
So overall I’m quite happy they made these investments. When you look at Google and Perplexity this is largely the direction the industry is going.
They’re building tools on other LLMs and basically running OpenRouter or something behind the scenes. They even show you your token use/cost against your allowance/budget on the billing page so you know what you're paying for. They're not training their own from-scratch LLMs, which I would consider a waste of money at their size/scale.
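That billing-page arithmetic is simple to model. The model name and per-token prices below are made-up placeholders, not Kagi's or any provider's actual rates:

```python
# Hypothetical (input, output) USD prices per million tokens.
PRICE_PER_MTOK = {"model-a": (3.00, 15.00)}

def query_cost(model, in_tokens, out_tokens):
    """Cost of one query under the hypothetical price table above."""
    p_in, p_out = PRICE_PER_MTOK[model]
    return (in_tokens * p_in + out_tokens * p_out) / 1_000_000

print(round(query_cost("model-a", 10_000, 2_000), 4))  # 0.06
```

Showing a running total against a monthly allowance is then just a sum of these per-query costs.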
Do you have any evidence that the AI efforts are not being funded by the AI product, Kagi Assistant? I would expect the reverse: the high-margin AI products are likely cross-subsidizing the low-margin search products and their sliver of AI support.
We're explicitly conscious of the bullshit problem in AI and we try to focus on building only tools we find useful. See our position statement on the matter from yesterday: https://blog.kagi.com/llms
Kagi is already expensive for a search engine. Now I know part of my subscription is going towards funding AI bullshit. And I know the cost of that AI bullshit will get jacked up in price and force Kagi sub price up as well. I'm so tired of AI being forced into everything.
These are only available on the Ultimate tier. If (like me) you don't care about the LLMs then there is no reason to be on the Ultimate tier so you don't pay for it.
I hadn't been sure about Kagi before, but this has really swung it for me, I'm off to sign up post haste. It's a revolutionary move that really shows how fast ahead of the competition Kagi is, how dexterous their fingers at the pulse of humanity, how bold.
Not for nothing, but I wish there was an anonymized AI built into Kagi that was able to have a normal conversation about sexual topics or search for pornographic topics, like a safe-search-off function.
I understand the safety needs around things like LLMs not helping build nuclear weapons, but it would be nice to have a frontier model that could write or find porn.
Hey Google, Pinterest results are probably messing with AI crawlers pretty badly. I bet it would really help the AI if that site was deranked :)
Also if this really is the case, I wonder what an AI using Marginalia for reference would be like.
Just recently started paying for Kagi search and quite love it.
You have a spend limit, but the assistant has dozens of models.
[0] https://help.kagi.com/kagi/features/slopstop.html
Agents/assistants but nothing more.
What they've been building for the past couple of years makes it blindingly clear that they are definitely not a search engine *above all else*.
Don't believe me? Check their CEO's goal: https://news.ycombinator.com/item?id=45998846
As in, not "free"?
Either way, I guess we'll see how this affects the service.