Ask HN: Do you still use search engines?
350 points | davidkuennen | 10 months ago
Using Google now feels completely lackluster in comparison.
I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background.
How about you?
wavemode|10 months ago
You hear about this new programming language called "Frob", and you assume it must have a website. So you google "Frob language". You hear that there was a plane crash in DC, and assume (CNN/AP/your_favorite_news_site) has almost certainly written an article about it. You google "DC plane crash."
LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.
Where LLMs will take over from search is when it comes to open-ended research - where you don't know in advance where you're going or what you're going to find. I don't really have frequent use cases of this sort, but depending on your occupation it might revolutionize your daily work.
Modified3019|10 months ago
Just yesterday I was trying to remember the name of a vague concept I’d forgotten, with my overall question being:
“Is there a technical term in biology for the equilibrium that occurs between plant species producing defensive toxins, and toxin resistance in the insect species that feed on those plants, whereby the plant species never has enough evolutionary pressure to increase its toxin load enough to kill off the insects that are adapting to it”
After fruitless searching around because I didn’t have the right things to look for, putting the above in ChatGPT gave an instant reply of exactly what I was looking for:
“Yes, the phenomenon you're describing is often referred to as evolutionary arms race or coevolutionary arms race.”
rstuart4133|10 months ago
The reason is pretty simple. If the result you want is in the first few search hits, it's always better. Your query is shorter so there is less typing, the search engine is always faster, and the results are far better because you sidestep the LLM hallucinating as it regurgitates what it remembers from the page you would have read if you had searched.
If you aren't confident of the search terms, it can take half an hour of dicking around with different terms, clicking through a couple of pages of search results for each set of terms, until you finally figure out the lingo to use. Figuring out what you are really after from that wordy description is the inner magic of LLMs.
npilk|10 months ago
Really, for many “page searches”, a good search engine should just be able to take you immediately to the page. When I search “Tom Hanks IMDB”, there’s no need to see a list of links - there’s obviously one specific page I want to visit.
https://notes.npilk.com/custom-search
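That "just take me straight to the page" behavior can be approximated today with a custom search keyword. A minimal sketch, assuming DuckDuckGo's backslash operator (prefixing a query with \ redirects to the top result); the URL format here is the only moving part:

```python
# Build a "skip the results page" URL using DuckDuckGo's backslash
# operator, which jumps directly to the first result.
from urllib.parse import quote_plus

def lucky_url(query):
    """Return a URL that goes straight to the top hit for the query."""
    return "https://duckduckgo.com/?q=" + quote_plus("\\" + query)

print(lucky_url("Tom Hanks IMDB"))
# -> https://duckduckgo.com/?q=%5CTom+Hanks+IMDB
```

Wired up as a browser keyword, typing the query takes you directly to the page with no list of links in between.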
desipenguin|10 months ago
I know what I'm looking for. I just need exact URL.
Perplexity miserably fails at this.
bmcahren|10 months ago
Traditional search is dead, semantic search through AI is alive and well.
I can't count a single time AI has misunderstood the meaning of my search, while Google loves to make assumptions, rewrite my search query, and deliver the results that pay it best, with the best ads (in my opinion as a lifetime user).
Let's not even mention how they willingly accept misleading ads atop the results, which trick the majority of common users into downloading malware and adware on the regular.
crowcroft|10 months ago
The reason Google is still seeing growth (in revenue etc.) is that a lot of 'commercial' search still ends with this kind of action.
Take purchasing a power drill for example, you might use an LLM for some research on what drills are best, but when you're actually looking to purchase you probably just want to find the product on Home Depot/Lowe's etc.
moralestapia|10 months ago
What? On Planet Earth, this is already a thing.
mr_toad|10 months ago
Kind of like a manual, with an index.
RTFM people.
coldtea|10 months ago
Sounds trivial to integrate an LLM front end with a search engine backend (probably already done), so you can type "frob language" and get a curated clickable list of the top resources (language website, official tutorial, reference guide, etc.), discarding spam and irrelevant search engine results in the process.
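A toy sketch of that pipeline, with both halves mocked out (search_web and llm_filter are hypothetical stand-ins, not real APIs), just to show the shape:

```python
# Sketch of an LLM-curated search front end. In a real system,
# search_web() would hit a search API and llm_filter() would be a single
# LLM call ("given these results for '{query}', keep only canonical
# resources and drop spam"). Both are mocked here for illustration.

def search_web(query):
    # Mocked search backend returning (title, url) pairs.
    return [
        ("Frob Programming Language - Official Site", "https://frob-lang.example"),
        ("Download Frob FREE!!! (sponsored)", "https://spam.example/frob"),
        ("Frob Tutorial - Getting Started", "https://frob-lang.example/tutorial"),
    ]

def llm_filter(query, results):
    # Mocked LLM curation step; a trivial heuristic stands in for the model.
    return [(t, u) for t, u in results if "sponsored" not in t.lower()]

def curated_search(query):
    return llm_filter(query, search_web(query))

for title, url in curated_search("frob language"):
    print(f"{title}\n  {url}")
```

The interesting design question is the middle step: the LLM never generates the links, it only ranks and discards what the search backend actually returned, which sidesteps hallucinated URLs.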
tremarley|10 months ago
Or any other LLM that’s continuously trained on trending news?
Okawari|10 months ago
I don't like LLMs for two reasons:
* I can't really get a feel for the veracity of the information without double checking it. A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from an LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double check it anyway, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.
* I'm missing out on learning opportunities that I would usually get by reading or skimming through a larger document while trying to find the answer. I appreciate that I skim through a lot of documentation on a regular basis and can recall things that I just happened to read when looking for a solution to another problem. I would hate it if an LLM dropped random tidbits of information when I was looking for concrete answers, but since it's a side effect of my information-gathering process, I like it.
I'd happily use an AI assistant that helped me search and curate the results instead of trying to answer my question directly. Hopefully in a sleeker way than Perplexity does with its sources feature.
SoftTalker|10 months ago
At least that has been my experience. I admit I don't use LLMs very much.
graemep|10 months ago
This is my main reason for not using LLMs as a replacement for search. I want an accurate answer. I quite often search for legal or regulatory issues, health and scientific issues, and specific facts about lots of things. I want authoritative sources.
leptons|10 months ago
And this is how LLMs perform when LLM-rot hasn't even become widely pervasive yet. As time goes on and LLMs regurgitate into themselves, they will become even less trustworthy. I really can't trust what an LLM says, especially when it matters, and the more it lies, the more I can't trust them.
miloignis|10 months ago
Really, these days, either I know some resource exists and I want to find it, in which case a search engine makes much more sense than an LLM which might hallucinate, or I want to know if something is possible / how to do it, and the LLM will again hallucinate an incorrect way to do it.
I've only found LLMs useful for translation, transcription, natural language interface, etc.
marvinblum|10 months ago
LLMs have mostly been useful for three things: single line code completion (in GoLand), quickly translating JSON, and generating/optimizing marketing texts.
averageRoyalty|10 months ago
I use LLMs as a sounding board. Often if I'm trying to tease out the shape of a concept in my head, it's best to write it out. I now do this in the form of a question or request for information and dump it into the LLM.
star-glider|10 months ago
"Search" can mean a lot of things. Sometimes I just want a website but can't remember the URL (traditional); other times I want an answer (LLMs); and other times, I want a bunch of resources to learn more (search+LLMs).
bayindirh|10 months ago
Instead I use a search engine and do my own reading and filtering. This way I learn what I'm researching, too, so I don't fall into the vicious cycle of drug abu ^H^H^H^H^H laziness. Otherwise I'll inevitably rely more and more on that thing, and be a prisoner of my own making by increasingly offloading my tasks to a black box and becoming dependent on it.
drpixie|10 months ago
Google recently (unrequested) provided me with very detailed AI generated instructions for server config - instructions that would have completely blown away the server. There will be someone out there who just follows the bouncing ball, I hope they've got good friends, understanding colleagues, and good backups!
tasuki|10 months ago
What a weird sentence. What accuracy guarantees does Kagi have? Or, if you're not "offloading your brain to it", can't you do the same with an LLM?
EliasWatson|10 months ago
As for AI search, I do find it extremely useful when I don't know the right words to search for. The LLM will instantly figure out what I'm trying to say.
sshine|10 months ago
And the ratio between using search engine and Kagi’s LLM agent with search is still 70% search. Sometimes, searching is faster, sometimes asking AI is faster.
tiborsaas|10 months ago
I use LLMs for what they are good at: generative stuff. I know some tasks take me a long time, and I can shortcut them with LLMs easily.
So here's a ChatGPT example query* which is completely off:
https://chatgpt.com/share/67f5a071-53bc-8013-9c32-25cc2857e5...
* It's intentionally bad to be able to compare with Google.
And here's the web result, which is spot on:
https://imgur.com/a/6ELOeS1
zer00eyz|10 months ago
LLMs are great when you want AN answer, and don't want to get sidetracked.
Search is great when you want to know what answers are out there. The best example is recipes... from what spices go into chai to the spice mix in any given version of chili (let's not start on beans).
The former is filling in missing knowledge; the latter is learning.
keithnz|10 months ago
https://imgur.com/a/boNS2YZ
https://chatgpt.com/share/67f5a9f9-f0a8-800d-9101-aafb88e455...
which I think is way better than google.
yellowapple|10 months ago
So yeah, I do still use search engines, specifically Kagi and (as a fallback) DuckDuckGo. From either of them I might tack on a !g if I'm dissatisfied with the results, but it's pretty rare for Google's results to be any better.
When I do use an LLM, it's specifically for churning through some unstructured text for specific answers about it, with the understanding that I'll want to verify those answers myself. An LLM's great for taking queries like "What parts of this document talk about $FOO?" and spitting out a list of excerpts that discuss $FOO that I can then go back and spot-check myself for accuracy.
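One cheap way to make that spot-check mechanical: require every excerpt the model returns to appear verbatim in the source document, so a hallucinated quote fails immediately. A sketch, with the LLM call itself mocked by a hypothetical extract_excerpts (its second item simulates a made-up quote):

```python
# Spot-checking LLM-extracted excerpts: any claimed verbatim excerpt
# that doesn't literally occur in the source text is flagged as a
# likely hallucination. extract_excerpts() stands in for the LLM call.

def extract_excerpts(document, topic):
    # Hypothetical LLM output: a list of supposedly verbatim excerpts.
    return [
        "The frobnicator requires a 12V supply.",
        "Frobnication voids the warranty.",  # simulated hallucination
    ]

def verify_excerpts(document, excerpts):
    # Map each excerpt to whether it actually occurs in the document.
    return {e: (e in document) for e in excerpts}

doc = "Setup notes. The frobnicator requires a 12V supply. Keep it dry."
checks = verify_excerpts(doc, extract_excerpts(doc, "power requirements"))
for excerpt, ok in checks.items():
    print(("OK   " if ok else "FAIL ") + excerpt)
```

This only catches fabricated quotes, not misleading selection, so the manual accuracy check is still worth doing; it just filters the most blatant failures first.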
stonemetal12|10 months ago
For example Jeep consistently lands at the bottom of the reliability ratings. Try asking GPT if Jeeps are reliable. The response reads like Jeep advertising.
s1artibartfast|10 months ago
My impression is that different LLMs are more or less people-pleasing. I found Grok is more willing to tell me something is a bad idea.
afpx|10 months ago
https://chatgpt.com/share/67f57459-2744-8009-a94e-3b67dce8fd...
“[Jeeps] often score below average in reliability rankings from sources like Consumer Reports and J.D. Power.”
marcusverus|10 months ago
If you want to know how modern Jeep models stack up against their peers in terms of reliability, try asking GPT that question!
RattlesnakeJake|10 months ago
For me, searches fall into one of three categories, none of which are a good fit for LLMs:
1. A single business, location, object, or concept (I really just want the Google Maps or Wikipedia page, and I'm too lazy to go straight to the site). For these queries, LLMs are either overkill or outdated.
2. Product reviews, setup instructions, and other real-world blog posts. LLMs want to summarize these, and I don't want that.
3. Really specific knowledge in a limited domain ("2017 Kia Sedona automatic sliding door motor replacement steps," "Can I exit a Queue-Triggered Azure Function without removing it from the queue?"). In these cases, the LLMs are so prone to hallucination that I can't trust them.
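For what it's worth on item 3, the queue question has a concrete answer in Azure Functions: raising an exception leaves the message on the queue for redelivery, up to maxDequeueCount, after which it moves to the poison queue. A plain-Python simulation of those at-least-once semantics (this is not the Azure SDK, just the retry logic it implements):

```python
# Simulation of queue-trigger retry semantics: a handler that raises
# leaves the message queued for redelivery; after max_dequeue failures
# the message moves to the poison queue, mirroring Azure's
# dequeueCount / maxDequeueCount behavior.
from collections import deque

def run_queue(messages, handler, max_dequeue=5):
    queue = deque((m, 0) for m in messages)
    done, poison = [], []
    while queue:
        msg, count = queue.popleft()
        try:
            handler(msg)
            done.append(msg)                # success: message removed
        except Exception:
            count += 1
            if count >= max_dequeue:
                poison.append(msg)          # gave up: poison queue
            else:
                queue.append((msg, count))  # retried: NOT removed

    return done, poison

def handler(msg):
    if msg == "transient-failure":
        raise ValueError("raise to keep the message queued")

done, poison = run_queue(["ok", "transient-failure"], handler)
print(done, poison)
```

So "exit without removing from the queue" is just "throw instead of returning", which is the kind of precise behavioral fact worth verifying in the official docs rather than trusting an LLM's summary.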
georgemcbay|10 months ago
The answer I'm seeking is not always on reddit itself, but google limited to reddit is far more likely to give me quality starting links than google unbound is.
jacobr1|10 months ago
I've mostly switched to using Claude these days, with MCPs for websearch and fetching specific remote or local files. It answers questions generally very accurately (from the source documents it identifies) and includes citations.
I've found that people that haven't really tried the latest models, and just rely on whatever knowledge is in the model training are really missing out on the potential power. GPT4o+ and equivalent models really changed the game. And using tools to do a search, or pull in your code, or run a db query or whatever enables them to either synthesize information or generate context relevant material. Not perfect for everything, but much better than a year ago, or what people are doing with the free systems.
senko|10 months ago
Even without much customization (lenses, scoring, etc) it's so much better (for my use cases) I happily pay for it.
Recently I have also started to use Perplexity more for "research for a few minutes and get back to me" type of things.
Queries like "what was that Python package for X" I usually ask an AI right from my editor, or ChatGPT if I'm in the browser already.
disambiguation|10 months ago
2 recent success stories:
I was toying around with an ESP32, experimenting with turning it into a Bluetooth remote control device. The online guides help to an extent, setting up and running sample projects, but the segue into deploying my own code was less clear. LLMs are "expert beginners", so this was a perfect request for one. I was able to jump from demos to live-deploying my own code very quickly.
Another time I was tinkering with OPNsense and setting up VLANs. The router config is easy enough, but what I didn't realize before diving in was that the switch and access point require configuration too. What's difficult about searching this kind of problem is that most of the info is buried in old blog posts and forum threads and requires a lot of digging and piecing together disparate details. I wasn't lucky enough to find someone who did a writeup with my exact setup, but since LLMs are trained on all those old message boards, this was again a perfect prompt playing to their strengths.
matt_trentini|10 months ago
The results from LLMs are still too slow, vary too much in quality and still frequently hallucinate.
My typical use-case is that when I'm looking for an answer I make a search query, sometimes a few. Then scan through the list of results and open tabs for the most promising of them - often recognising trusted, or at least familiar, sites. I then scan through those tabs for the best results. It turns out I can scan rapidly - that whole process only takes a few seconds, maybe a minute for the more complex queries.
I've found LLMs are good when you have open-ended questions, when you're not really sure what you're looking for. They can help narrow the search space.
jpc0|10 months ago
At most I use AI now to speed up my research phase dramatically. AI is also pretty good at showing what is in the ballpark for more popular tools.
However, I am missing forum-style communities more and more. Sometimes I don't want the correct answer; I want to know what someone who has been in the trenches for 10 years has to say. For my day job I can just make a phone call, but for hobbies, side projects, etc., I don't have the contacts built up and I don't always have local interest groups that I can tap for knowledge.
GuB-42|10 months ago
LLMs can't be trusted: you have no way to tell a correct answer from a hallucination. Which means I often end up searching for what the LLM told me just to check, and it is often wrong.
Search engines can also lead you to false information, but you have a lot more context. For example, a StackOverflow answer has comments, and often, they point out important nuances and inaccuracies. You can also cross-reference different websites, and gauge how reliable the information is (ex: primary source vs Reddit post). A well trained LLM can do that implicitly, but you have no idea how it did for your particular case.
foragerdev|10 months ago
What are the specs for the new Google Pixel 9a? LLMs can't answer this; maybe after a year they can.
rrr_oh_man|10 months ago
Not anymore
dcl|10 months ago
Last night, I asked Claude 3.7 Sonnet to obtain historical gold prices in AUD and the ASX200 TR index values and plot the ratio of them, it got all of the tickers wrong - I had to google (it then got a bunch of other stuff wrong in the code).
Also yesterday, I was preparing a brief summary of forecasting metrics/measures for a stakeholder and it incorrectly described the properties of SMAPE (easily validated by checking Wikipedia).
I constantly have issues with my direct reports writing code using LLMs. They constantly hallucinate things for some of the SDKs we use.
throwaway422432|10 months ago
Was a bit more useful at questions like "Rank these stocks by exposure to the Chinese market", as you can prioritise your own research but in the end you just have to go through the individual company filings yourself.
caseyy|10 months ago
But now, the veracity of most LLMs' responses is terrible. They often include “sources” unrelated to what they say and make up hallucinations when I search for what I'm an expert in. Even Gemini in Google Search told me yesterday that Ada Lovelace invented the first programming language in the 18th century. The trust is completely gone.
So, I'm back to the plain old search. At least it doesn't obscure its sources, and I can get a sense of the veracity of what I find.
esperent|10 months ago
I mean, for everyone else it was never there to begin with. Hallucinations are constantly raised as the biggest issues with AI. According to the tests, and my experience, newer AI models are objectively better, not worse than the ones from a few years ago. They still have a long way to go/may never be fully trustworthy though.
What I have lost trust in, and what I and many others feel has become much worse over the past few years, is Google search, and all the other search engines that are based on it.
axegon_|10 months ago
mavamaarten|10 months ago
Google seems to be better at bringing up a variety of stackoverflow and blogposts relevant to my search queries. Qwant seems to struggle exactly with that: it's great at giving exactly what I was searching for but that's sometimes not what I was looking for, if you get what I mean.
In a sense, LLMs are actually perfect for that. But like you say, the super confident hallucinations are just too frustrating. Literally every time I've asked one a serious programming question, it's hallucinated an API that doesn't exist. Everyone seems to be focusing on letting LLMs solve math and thinking problems. That's exactly what _I'm_ good at. I would much rather have an LLM that is good at combining sources and giving me facts (knowledge, rather than thinking) while most of all being able to say "I don't know".
geocrasher|10 months ago
I recently upgraded my video card, and I run a 4K display. Suddenly the display was randomly disconnecting until I restarted the monitor. I googled my brains out trying to figure out the issue, and got nowhere.
So I gave ChatGPT a shot. I told it exactly what I upgraded from/to, and which monitor I have, and it said "Oh, your HDMI 2.0 cable is specced to work, but AMD cards love HDMI2.1 especially ones that are grounded, so go get one of those even if it's overspecced for your setup."
So I did what it said, and it worked.
jrvarela56|10 months ago
A query in a regular search engine can at best perform like an LLM-based provider like Perplexity for simple queries.
If you have to click or browse several results, forget it; it makes no sense not to use an LLM that provides sources.
Delk|10 months ago
For other topics, exact pedantic correctness may not always be as important, but I definitely do want to be able to evaluate my sources nevertheless, for other obvious reasons.
Search is actually pretty much what I want: a condensed list of possible sources of information for whatever I'm looking for. I can then build my own understanding of the topic by checking the sources and judging their credibility. Search seems to have been getting worse lately, sadly, but it's still useful.
layman51|10 months ago
If they get rid of those operators, then that would be really bad. But I have a feeling that’s what a lot of search engine people are itching to do.
milesvp|10 months ago
Conversely it’s a huge mistake to rely on LLMs for anything that requires authoritative content. It’s not good at appropriately discounting low quality sources in my experience. Google can have a similar problem, but I find it easier to find good sources there first for many topics.
Where LLMs really replace modern google is for topics you only kind of think should exist. Google used to show some pretty tenuously related links by the time you got to page 5 results and there you might find terms that bring you closer to what you’re looking for. Google simply doesn’t do that anymore. So for me, one of the joys is being able to explore topics in a way I haven’t been able to for over a decade
alganet|10 months ago
Search engines tend to over-summarize less, and they provide lots of references, something LLM researchers have worked hard to achieve.
If they feel lackluster for you, maybe you are not interested in those specific use cases in which they shine.
Similarly, the reason could be that you don't want to check references for yourself, and you prefer to trust the selection of cross references provided by your LLM of choice.
It is likely that your close circle of friends share an identity similar to yours. That is, by many, considered a defining characteristic of friendship. Although it can be a sign of the rising popularity of LLMs, one must take it as an anecdote and not a statistically significant fact.
I do prefer a soft selection of queries on different search engines and different LLM models. Since you asked for an opinion and self-declared an ability to do searches and questions yourself, I don't feel obligated to cite sources for this answer.
sonorous_sub|10 months ago
I don't have a circle of friends, so I have no idea what other people are doing, outside of what I read online.
jemmyw|10 months ago
I use an LLM a lot for coding. However, I was never as much into doing web searches for programming problems anyway, I used docs more and rarely needed sites like SO. I haven't therefore moved away from search engines for that side of things.
Koshcheiushko|10 months ago
By decades, I assume at least 2, so a minimum of 20 years. I'm very interested to know about your experience.
Would you please elaborate on how you filter, or specifically what techniques you use to get your desired results?
Thanks.
Cthulhu_|10 months ago
With chatbots I first need to formulate a question (or at least I feel like I do), then wait for it to slowly churn out an overly wordy response. Or I need to prompt it first to keep it short.
I suppose this difference is different if you already used a search engine by asking it a fully formulated question like "What is a html fieldset and how do I use it?" instead of "html fieldset" and clicking through to MDN.
notepad0x90|10 months ago
I would use the analogy of consuming a perfectly tasty and nutritious meal crafted by chef ChatGPT vs. visiting a few restaurants around your neighborhood and tasting different cuisines. Neither approach is wrong, but you get different things and values out of each. Do what you feel like doing!
Last week, there was a specific coding problem I needed help with. I asked ChatGPT, which gave me a great answer. Except I spent a few hours trying to figure out why the function ChatGPT was using wasn't being included, despite the #include directives all being correct. Neither ChatGPT nor Google were helpful. The solution was to take a different approach in my code; if I had only googled, I wouldn't have spent that time chasing the wrong solution.
Also consider this: when you ask a question, there are a bunch of rude (but well-meaning) people who ask things like "what are you really trying to do?" and criticize a bunch of unrelated things about your code/approach/question. A lot of the time that's just annoying, but sometimes it gives you really good insights into the problem domain.
mooreds|10 months ago
If it is more of an open ended question that I am not sure there'll be a page with an answer for, I am more likely to use ChatGPT/Claude.
rossdavidh|10 months ago
Same with my wife (non-technical) and teenage daughter.
jillesvangurp|10 months ago
You don't want AIs reproducing information necessarily. But they are really great at interpreting your query, digging out the best links and references using a search engine and then coming up with an answer complete with links that back that up.
I'd suggest just giving perplexity a spin for a few days. Just go nuts with it; don't hold back. It's one of the better AI driven search tools I've seen.
quadsteel|10 months ago
Someone at work yesterday asked me if I knew which bus lines would be active today due to the ongoing strike. Googled, got a result, shared back in under 10 seconds.
Out of curiosity I just checked with various LLMs through t3.chat, with all kinds of features, none had anything more than a vague "check with local news" to say. Last one I tried Gemini with Deep Research and what do you know, it actually found the information and it was correct!
It also took nearly 5 minutes...
Like I feel if your search is about _reality_ (what X product should I buy, is this restaurant good, when is A event in B city, recipes, etc.) then LLMs are severely lacking.
Too slow, almost always incomplete answers if not straight up incorrect, deep research tends to work if you have 20 minutes to spare both to get an initial answer and manually go and vet the sources/look for more information in them.
deevus|10 months ago
People should do what makes them feel good, but I think we're all going to get a bit dumber if we rely too much on LLMs for our information.
I personally still use search engines daily when I know what it is that I am searching for. I am actually finding that I am reaching less for LLMs even though it is getting easier and cheaper (I pay for T3 Chat at $8USD p/m).
Where I find LLMs useful is when I am trying to unpack a concept or I can't remember the name of something. The result of these chats often lead to their own Google searches. Even after all this development, the best LLMs still hallucinate constantly. The best way that I've found to reduce hallucinations is to use better prompts. I have used https://promptcowboy.ai/ to some success for this.
II2II|10 months ago
- If I am seeking technical information, I would rather get it from the original source. It is often possible to do that with a search. The output from an LLM is not going to be the original source. Even when dealing with secondary sources, it is typically easier to spot red flags in a secondary source than in the output of an LLM.
- I often perform image searches. I have no desire for generated images, though I'm not going to object to one if someone else "curated" the outputs of an AI model.
That said, I will use an LLM for things that aren't strictly factual. i.e. I can judge if it is good enough for my needs by simply reading it over.
legohead|10 months ago
As an example, someone typo'd an abbreviation, so I asked GPT and it gladly made up something for me. So I gave it a random abbreviation, and it did the same (using its knowledge of the game).
Even when I tell it the specific version I'm playing it gets so much wrong it's basically useless. Item stats, where mobs are located, how to do a certain quest - anything. So I'm back to using websites like wowhead and google.
tiffanyh|10 months ago
Until LLMs stop responding with overconfident “MBA talk” that sounds impressive but doesn't really say much, I'll continue to use search engines.
ChrisArchitect|10 months ago
Image searches without having to describe every minute detail of what I'm looking for?
Bah, even some searches that are basically looking for wikipedia/historical lookups....so much easier UI in Google Search than chatgpt's endless paragraphs with unclear sources etc.
For some things Google's AI results are helpful too, if not to just narrow down the results to certain sources.
There's no chat interface helping any of this
oldjim69|10 months ago
Search is for finding specific websites and products. Totally different things.
mikrl|10 months ago
Basically, there’s a lot of good and specific information on the web, but not necessarily combined in the way I want. LLMs can help break apart my specific combination at a high level but struggle with the human ability to get to solutions quickly.
Or maybe I just suck at asking questions, haha.
entropyneur|10 months ago
For programming stuff that can be immediately verified LLMs are good. They also cover many cases where search engines can't go (e.g. "what was that song where X did Y?"). But looking up facts? Not yet. Burned many times and not trying it again until I hear something changed fundamentally.
superkuh|10 months ago
The serendipity of doing search with your own eyes and brain on page 34 of the results cannot be overstated. Web surfing is good and does things that curated results (i.e. Google's <400, Bing's <900, Kagi's <200, an LLM's very limited single result) cannot.
miki123211|10 months ago
1. Questions where I expect SEO crap, like cooking recipes, go to LLMs. I use the best available LLM for those to avoid hallucinations as much as possible, 2.5 Pro these days. With so much blogspam, LLMs are actually less likely to hallucinate at this point than the real internet, IMO.
2. Questions whose answer I can immediately verify, like "how do I do x in language y", also go to an LLM. If the suggestion doesn't work, then I google. My stackoverflow usage has fallen to almost 0.
3. General overviews / "how is this algorithm called" / "is there a library that does x" are LLMs, usually followed by Googling about the solutions discussed.
4. When there's no answer to my exact question anywhere, or when I need a more detailed overview of a new library / language, I still read tutorials and reference docs.
5. Local / company stuff, things like "when is this place open and how do I call them" or "what is the refund policy of this store" are exclusively Google. Same for shopping (not an American, so LLM shopping comparisons aren't very useful to me). Sadly, online reviews are still a cesspool.
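The taxonomy above is essentially a routing decision made per query. As a toy illustration only (the categories and keywords below are my own assumptions, not anything the commenter describes implementing), the rough rules could be sketched as a crude keyword heuristic:

```python
# Toy sketch of routing queries to an LLM or a search engine, mirroring
# the taxonomy above. Keyword sets are illustrative assumptions.
SEARCH_HINTS = {"open", "hours", "phone", "refund", "policy", "store"}
LLM_HINTS = {"recipe", "how", "library", "algorithm", "overview", "called"}

def route(query: str) -> str:
    """Return 'search' or 'llm' for a query, per the rough rules above."""
    words = set(query.lower().split())
    if words & SEARCH_HINTS:   # local/company facts -> search engine
        return "search"
    if words & LLM_HINTS:      # verifiable or SEO-polluted -> LLM first
        return "llm"
    return "search"            # default: classic search

# e.g. route("when is this place open") -> "search"
#      route("banana bread recipe")     -> "llm"
```

A real router would need far more signal than bag-of-words overlap, but the point stands: different query intents have different best tools.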
akaike|10 months ago
Freak_NL|10 months ago
It's extremely disheartening. I have no trust in Youtube staying accessible as a font of public knowledge. It just works out that way now.
Reddit seems hit or miss depending on the topic. Plenty of threads there where [deleted] asked a question and [disgruntled user] replied with something which has been replaced with random text by a fancy deletion tool.
godshatter|10 months ago
ergonaught|10 months ago
Google wants to show me products to buy, which I'm almost never searching for, or they're "being super helpful" by removing/modifying my search terms, or they demonstrate that the decision makers simply don't care (or understand) what search is intended to accomplish for the user (ex: ever-present notices that there "aren't many results" for my search).
Recently tried to find a singer and song title based on lyrics. Google wouldn't present either of those, despite giving it the exact lyrics. ChatGPT gave me nonsense until I complained that it was giving me worse results than Google, at which point it gave me the correct singer but the wrong song, and then the correct song after pointing out that it was wrong about that.
Still can't get Google to do it unless my search is for the singer's name and song title, which is a bit late to the party.
udev4096|10 months ago
nfriedly|10 months ago
I use gemini more on my phone, where I feel like going through search results and reading is more effort, but I'll fall back to searching on duck duck go fairly often.
On a desktop I generally start at duck duck go, and if it's not there, then I don't bother with AI. (I use copilot in my editor, and it's usually helpful, but not really "search").
internet_points|10 months ago
ddg is often faster for when I want to get to an actual web site and find up-to-date info, for "search as navigation".
llm's are often faster for finding answers to slightly vague questions (where you know you're going to have to burn at least as much climate on wading through blogspam and ads and videos-that-didn't-need-to-be-videos if you do a search).
austin-cheney|10 months ago
Yes, I still use search engines and almost always find what I need in long form if I can’t figure it out on my own.
globnomulous|10 months ago
When I need to search, I use a search engine and try to find a trustworthy source, assuming one is available.
ASalazarMX|10 months ago
I won't deny LLMs can be useful, but they're like the news: double-check and form your own conclusions.
simonbw|10 months ago
mbirth|10 months ago
I’m mostly using my personal SearXNG instance and am still finding what I’m looking for.
On systems where I don’t have access to that, I’m currently trying Mojeek and experiment with Marginalia. Both rather traditional search engines.
I’m not a big fan of using LLMs for this. I’d rather punch in 3-5 keywords than explain to some LLM what I’m looking for.
okayokayokay123|10 months ago
I use perplexity pro + Claude a lot as well. Maybe too much but mostly for coding and conversations about technical topics.
It really depends on intent.
I have noticed that I’ve started reading a lot more. Lots of technical books in the iPad based on what I’m interested in at the moment.
cosmic_cheese|10 months ago
These tools are useful, but in my view the level of trust commonly being placed in them far exceeds their capabilities. They’re not capable of distinguishing confidently worded but woefully incorrect Reddit posts from well-verified authoritative pages, which, combined with their inclination for hallucinations and their overeagerness to please the user, makes them dangerous in an insidious way.
add-sub-mul-div|10 months ago
Why would I want to have a conversation in a medium of ambiguity when I could quickly type in a few keywords instead? If we'd invented the former first, we'd build statues of whoever invented the latter.
Why would I want to use a search service that strips privacy by forcing me to be logged in and is following the Netflix model of giving away a service cheap now to get you to rely on it so much that you'll have no choice but to keep paying for it later when it's expensive and enshittified?
gwbas1c|10 months ago
When I do, it's because either I can't think of good terms to use, and the LLM helps me figure out what I'm looking for, or I want to keep asking follow-up questions.
Even then, I probably use an LLM every other week at most.
DaSexiestAlive|10 months ago
Given my time dedicated to researching things, I feel like I am "more productive" b/c I waste less time.
But I do my due diligence to double-check what ChatGPT suggests. So if I ask ChatGPT to recommend a list of books, I double-check with Goodreads and Amazon reviews/ratings. Like that. I guess it's like having a pair-research-sesson with an AI librarian friend? I am not sure.
But I know that I am appreciative. Does anyone remember how bad chatbots were before the arrival of low-hanging-AI-fruits like generative AI? Intel remembers.
mitthrowaway2|10 months ago
This can be very difficult, if there's a lot of semantic overlap with a more commonly-searched mainstream topic, or if the date-range-filtering is unreliable.
Sometimes I'll look for a recipe for banana bread or something, and searching "banana bread recipe" will get me to something acceptable. Then I just have to scroll down through 10 paragraphs of SEO exposition about how much everyone loves homemade banana bread.
Searching for suppliers for products that I want to buy is, ironically, extremely difficult.
I don't trust LLMs for any kind of factual information retrieval yet.
bythckr|10 months ago
Specific search expecting one answer. This type of search is enhanced by ChatGPT. Google is losing here.
Wild goose chase / brainstorming. For this, I need a broad set of answers. I am looking for a radically different solution. Here, today's Google is inferior to the OG Google. That is for 2 reasons.
1. SEOs have screwed up the results. A famous culprit is Pinterest, along with many other irrelevant sites that fill the first couple of pages.
2. Self-censoring & shadow banning. Banning of torrent sites, politically motivated manipulation. Though the topic I am searching is not political, there is some issue with the results. I can see the difference when I try the same in Bing or DuckDuckGo.
wolrah|10 months ago
No, I don't use the hallucination machines to search, and I never will.
I use search engines to search. I use the "make shit up" machine when I want shit made up. Modern voice models are great for IVR menus and other similar tasks. Image generation models have entirely taken over from clipart when I want a meaningless image to represent an idea. LLMs are even fun to make up bogus news articles, boilerplate text to fill a template, etc. They're not search engines though and they can't replace search engines.
If I want to find real information I use a search engine to find primary sources containing the keywords I'm looking for, or well referenced secondary sources like Wikipedia which can lead me to primary sources.
AbraKdabra|10 months ago
axelthegerman|10 months ago
I echo what others say, Kagi is a joy to use and feels just like Google used to be - useful
sReinwald|10 months ago
But a lot of my classic ADHD "let's dive into this rabbit hole" google sessions have definitely been replaced by AI deep searches like Perplexity. Instead of me going down a rabbit hole personally for all the random stuff that comes across my mind, I'll just let perplexity handle it and I come back a few minutes later and read whatever it came up with.
And sometimes, I don't even read that, and that's also fine. Just being able to hand that "task" off to an AI to handle it for me is very liberating in a way. I still get derailed a bit of course, but instead of losing half an hour, it's just a few seconds of typing out my question, and then getting back to what I've been doing.
nkrisc|10 months ago
nh23423fefe|10 months ago
runarberg|10 months ago
Just now for example I wanted to know how Emma Goldman was deported despite being a US citizen. Or whether she was a citizen to begin with. If an LLM gave me an answer I for sure would not trust it to be factual.
My search was simple: Emma Goldman citizenship. I got a Wikipedia article claiming it was argued that her citizenship was considered void after her ex-husband’s citizenship was revoked. Now I needed to confirm it from a different source and also find out why her ex’s citizenship was revoked. So I searched his name + citizenship and got a New Yorker article claiming it was revoked because of some falsified papers. Done.
If an LLM told me that, I simply wouldn’t trust it and would need to search for it anyway.
bloopernova|10 months ago
https://kagi.com/search?q=how+Emma+Goldman+was+deported+desp...
But you're right, I'd have to check the sources cited before I'd trust the answer.
0xbadcafebee|10 months ago
But if I then click the Google search text box at the top, and start typing, it takes 20 seconds for my text to start appearing (the screen is clearly lagged by whatever Google is doing in the background), and then somehow it starts getting jumbled. Google is the only web page this happens to.
I actually like their results, they just don't want me to see their results. Weird business model.
ruszki|10 months ago
mooiedingen|10 months ago
The more you trust the models, the less cognitive load you spend checking and verifying, which will lead to what people call AI but which is actually nothing more than a for loop over data loaded in memory. Anyone who still thinks that "for message in messages..." can represent any sort of intelligence has already been brainwashed by a new iteration of the "one-armed bandit", clicking regenerate indefinitely with a random seed while being distracted from what is going on around them.
n_ary|10 months ago
Hence, search still remains my hope until SO and the like decay.
Additionally, many search engines now generate quick summaries or result snippets without a lot of prompt-fu, so my day-to-day LLM-to-search ratio has actually become about 40:60.
Shorel|10 months ago
Of course, I have used Phind and other LLMs, and the results sometimes are useful, but in general the information they give back feels like a summary written for the “Explain Like I'm Five” crowd, it just gives me more questions than answers, and frustrates me more than it helps me.
Where LLMs excel is when I don't know the exact search term to use for some particular concept. I ask the LLM about something, it answers with the right terms I can use in a search engine to find what I want, then I use these terms instead of my own words, and what I want is in the search results, in the first page.
alkonaut|10 months ago
The question is: are you searching for answers to something, or are you searching for a site/article/journal/whatever in order to consume the actual content? If you are searching for a page/article/journal/ in order to find an answer, then the journal/article itself was just a detour, if the LLM could give you the answer and you could trust it. But if you were looking for the page/article itself, not some piece of information IN the article then ChatGPT can (at best) give you the same URL google did, but 100x slower?
wenbin|10 months ago
Still have a trust issue with LLM/ChatGPT for facts. Maybe in a couple years my mindset will shift and trust LLM/chatgpt more.
elseleigh|10 months ago
bbyford|10 months ago
I use ChatGPT for text summation and translation, and midjourney for slide decks and graphic design ideation.
unknown|10 months ago
[deleted]
wodenokoto|10 months ago
kryptiskt|10 months ago
jacobgkau|10 months ago
I just tried ChatGPT and saw that you can ask it to search the web and also can see its sources now. I still remembered how it was last time I used it, where it specifically refused to link out to external sources (looks like they changed it around last November). That's a pretty good improvement for using it as search.
lovehashbrowns|10 months ago
bflesch|10 months ago
I'd rank kagi > chatgpt > google any day.
ttctciyf|10 months ago
But in fact I overwhelmingly use search over llm because it's an order of magnitude quicker (I also have google search's ai bobbins turned off by auto-using "web" instead of "all".)
I've used llm "for real" about 3 times in the last two months, twice to get a grounding in an area where I lacked any knowledge, so I could make better informed web searches, and once in a (failed) attempt to locate a piece of music where web search was unsuccessful.
renegat0x0|10 months ago
- I use RSS to see 'what's new', and to search it. My RSS client supports search
- I maintain a list of domains, so when I want to find a particular place I check my list of domains (I can search domain title, description, etc.). I have 1 million domains [0]
- If I want more precise information I try to google it
- I also may ask chatgpt
So in fact I am not using one tool to find information. I use many tools, and often narrowing it down to tools that most likely will have the answer.
[0] https://github.com/rumca-js/Internet-Places-Database
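The domain-list step above amounts to a local full-text lookup over titles and descriptions. A minimal sketch of that idea (field names like "title" and "description" are hypothetical assumptions here, not taken from the linked repository):

```python
# Minimal sketch: case-insensitive substring search over a local list of
# domain records, as one might do before falling back to a web search.
def search_domains(domains, term):
    """Return records whose title or description contains term."""
    term = term.lower()
    return [d for d in domains
            if term in d.get("title", "").lower()
            or term in d.get("description", "").lower()]

# Tiny illustrative dataset (not real data from the repo).
places = [
    {"url": "https://news.ycombinator.com", "title": "Hacker News",
     "description": "Tech and startup discussion"},
    {"url": "https://lobste.rs", "title": "Lobsters",
     "description": "Computing-focused link aggregator"},
]

print([d["url"] for d in search_domains(places, "tech")])
```

At a million records a linear scan gets slow, so a real setup would presumably sit on SQLite FTS or a similar index, but the workflow is the same: local list first, web search only when that misses.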
AndrewDucker|10 months ago
laweijfmvo|10 months ago
The biggest issue is when GPT returns something that doesn’t match your knowledge, experience, or intuition and you ask the “are you sure?” question, it seems to inevitably come back with “you’re right!”. But then why/how did it get it wrong the first time? Which one is actually true? So I go back to search (Kagi).
So for me, LLMs are about helping to process and collate large bodies of information, but not final answers on their own.
yieldcrv|10 months ago
I use Claude pretty much exclusively, and GPT as a backup, because GPT errors too much, tries to train on you too much, and has a lackluster search feature. The web UIs are not these companies' priority, as they focus more on other offerings and API behavior. Which means any gripe will not be addressed and you have to just go for the differentiating UX.
For a second opinion from Claude, I use ChatGPT and Google pretty much the same amount. Raw google searches are just my glorified reddit search engine.
I also use offline LLMs a lot. But my reliance on multimodal behavior brings me back to cloud offerings.
Adachi91|10 months ago
johnny_canuck|10 months ago
On the flip side, any time I'm searching for something programming (FE, JavaScript in my case) it's last resort because an LLM is not giving me the answer I'm looking for.
This is still shocking to me, I really never thought I would replace my reliance on Google with something new.
sans_souse|10 months ago
Operator words still work in Google, albeit less reliably than in the past - but they still do the job.
I see the AI as being there to do the major leg work. But the devil's in the details and we can't simply take their word that something is fact without scrutinizing the data.
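For anyone who never picked them up, the operator words being referred to presumably include the classic ones Google still documents (support varies, and some have been weakened over the years):

```
site:example.com privacy        restrict results to one domain
"exact phrase"                  match the phrase verbatim
python -snake                   exclude results containing "snake"
filetype:pdf annual report      limit results to a file type
intitle:changelog               term must appear in the page title
```

Combining two or three of these often cuts through SEO filler faster than rephrasing a prompt.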
Kostic|10 months ago
One interesting trend that I like is that I started using local LLMs way more in the last couple of months. They are good enough that I was able to cancel my personal ChatGPT subscription. Still using ChatGPT on the work machines since the company is paying it.
hnlurker22|10 months ago
sfblah|10 months ago
Keep in mind that my 75% figure doesn't count queries where I get my answer from Google Gemini. I'm just guessing, but if you added those in, it would rise to 85-90%.
My thought is if browsers and phones started pushing queries over to an LLM, search (and search revenue) would virtually disappear.
d4mi3n|10 months ago
There is some room for optimism, though. There's been a rise in smaller search engines with different funding models that are more aligned with user needs. Kagi is the only one that comes to mind (I use it), but I'm sure there are others.
j-bos|10 months ago
Though lately for more in-depth research I've been enjoying working with the LLM to have it do the searching for me and provide me links back to the sources.
holografix|10 months ago
That’s if they can swing the immense ads machine (and by that I mean the ads organisation not the tech) and point it at a new world and a different GTM strategy.
They still haven’t figured out how to properly incentivise content producers. A lazy way would be to display ads that the source websites would display alongside the summary or llm generated response and pass on any CPM to the source.
JamesAdir|10 months ago
booleandilemma|10 months ago
runjake|10 months ago
- Specific documentation
- Datasets
- Shopping items
- Product reviews
But for the search engines I use, their branded LLM response takes up half of the first page. So that 25% figure may actually be a lot smaller.
It's important to note that these search engine LLM responses are often ludicrously incorrect -- at least, in my experience. So now I'm in this weird phase where I visit Google and debate whether I need to enter search terms or some prompt engineering in the search box.
Saris|10 months ago
For example I asked it about rear springs for a 3rd gen 4runner and it recommended springs for a 5th gen.
jerejacobson|10 months ago
I was very surprised to hear this, and it made me wonder how much of traditional SEO will be bypassed through LLM search results. How do you leverage trying to get ranked by an LLM? Do you just provide real value? Or do you get featured on a platform like Chrome Extensions Store to improve your chances? I don't know, but it is fun to think about.
nsluss|10 months ago
For the people who say they've reduced their search engine use by some large percentage, do you never need to find a particular document on the web or look for reference material?
d1an|10 months ago
GuinansEyebrows|10 months ago
Learning is fun! Reading is good for you! Being spoon fed likely-inaccurate/incomplete info or unmaintainable code is not why i got into computers.
locallost|10 months ago
JimT777|10 months ago
degrees57|10 months ago
And yes, just plain old Google search is completely lackluster in comparison to the perplexity.ai search I get to do today.
llm_nerd|10 months ago
Earlier today I was trying to remember the name of the lizard someone tweeted about seeing in a variety store. Google search yielded nothing. Gemini immediately gave me precise details of what I was talking about, it linked to web resources about it.
grishka|10 months ago
mancerayder|10 months ago
I use ChatGPT at home constantly, for history questions, symptoms of an illness, identification of a plant hiking, remembering a complex term or idea I can't articulate, tips for games, and this list goes on.
At work it's Copilot.
I've come to loathe and mock Google search and I can't be the only one.
linacica|10 months ago
lo_fye|10 months ago
If I want to play with ideas, I chat with AI. If I need facts, I use search.
noer|10 months ago
satisfice|10 months ago
Unlike Google, or Duck Duck Go, which serve up links that we can instantly judge are relevant to us, LLMs spin stories that sound pretty good but may be, and often are, insidiously wrong. It’s too much effort to fact check them, so people don’t.
shmerl|10 months ago
jonathanstrange|10 months ago
InfiniteLoup|10 months ago
I'm still using Google for searches on Reddit these days because Reddit's own search engine is terrible.
nullbio|10 months ago