top | item 46718800

MadDemon | 1 month ago

LLMs and their capabilities are very impressive and definitely useful. The productivity gains often seem to be smaller than intuitively expected though. For example, using ChatGPT to get a response to a random question like "How do I do XYZ" is much more convenient than googling it, but the time savings are often not that relevant for your overall productivity. Before LLMs you were usually already able to find the information quickly and even a 10x speed up does not really have too much of an impact on your overall productivity, because the time it took was already negligible.

palmotea|1 month ago

> For example, using ChatGPT to get a response to a random question like "How do I do XYZ" is much more convenient than googling it, but the time savings are often not that relevant for your overall productivity. Before LLMs you were usually already able to find the information quickly and even a 10x speed up does not really have too much of an impact on your overall productivity, because the time it took was already negligible.

I'd even question that. The pre-LLM solutions were in most cases better. Searching a maintained database of curated and checked information is far better than LLM output (which is possibly bullshit).

Ditto for software engineering. In software, we have things called libraries: you write the code once, test it, then you trust it and can use it as many times as you want, forever, for free. Why use LLM-generated code when you have a library? And if you're asking for anything complex, you're probably just getting a plagiarized and bastardized version of some library anyway.

The only place LLMs shine is in simple, lazy "mash this up so I don't have to think about it" cases. And sometimes it might be better to just do it yourself and develop your own skills instead of using an LLM.

neilalexander|1 month ago

One advantage of LLMs is that they are often better at finding things that you can roughly explain but don't know the name of. They can be a good take-off point for wider searches.

mountainriver|1 month ago

Why not have the LLM generate the library?

drzaiusx11|1 month ago

If only search engine AI output didn't constantly hallucinate nonexistent APIs, it might be a net productivity gain for me... but it's not. I've been bitten enough times by their false "example" output for it to be a significant net time loss vs. using traditional search results.

danudey|1 month ago

Gemini hallucinated a method on a Rust crate it was trying to use, then spent ten minutes googling 'method_name v4l2 examples' and so on. That method doesn't exist and has never existed; there was a property on the object that contained the information it wanted, but it just sat there spinning its wheels, convinced that this imagined method was the key to its success.

Eventually it gave up and commented out all the code it was trying to make work. Took me less than two minutes to figure out the solution using only my IDE's autocomplete.

It did save me time overall, but it's definitely not the panacea that people seem to think it is and it definitely has hiccups that will derail your productivity if you trust it too much.

gloosx|1 month ago

It's even worse when an LLM ingests documentation for multiple versions of the same library and starts hallucimixing methods from all of them at once. That makes it nearly unusable for libraries that recently went through a big API transition between versions.

skybrian|1 month ago

Using ChatGPT and phrasing it like a search seems like a better way? “Can you find documentation about an API that does X?”

CyberDildonics|1 month ago

The real benefit to a search engine is to rework and launder other people's information and make it your information.

Now, instead of the Wikipedia article, you are reading the exact same thing on Google's home page and you don't click on anything.

SkiFire13|1 month ago

I think you're underestimating how many people don't know how to search properly on Google (i.e. finding the right keywords, picking the reputable results, etc.). Those are probably also the same people who will blindly believe anything an LLM says, unfortunately.

1718627440|1 month ago

True, I do not know how to properly search for something on google.com in 2025. I only know how to do it on startpage.com in 2025, kagi.com in 2025, or google.com in 2015.

zelos|1 month ago

LLM output is quickly rendering google search unusable, so it's kind of creating its own speedup multiplier.

robofanatic|1 month ago

It really depends on what 'XYZ' is and how many hoops you need to jump through to get to the answer. ChatGPT pulls information from various places and gives you the answer as well as the explanation at each step. Without tools like ChatGPT, the time involved is definitely not negligible in a lot of cases.

skybrian|1 month ago

I use ChatGPT “thinking” mode as a way to run multiple searches and summarize the results. It takes some time, but I can do other stuff in another tab and come back.

It’s for queries that are unlikely to be satisfied in a single search. I don’t think it would be a negligible amount of time if you did it yourself.

Incipient|1 month ago

But for large searches, I then have to spend a lot of time validating the output - which I'd normally do while reading the content etc as I searched (discarding dodgy websites etc).

On the other hand, where I think LLMs are going to excel is when you roll the dice, trust the output, and don't validate it. If it works out, yay, you're ahead of everyone who did bother to validate.

I think this is how vibe coded apps are going to go. If the app blows up, shut down the company and start a new one.

entropicdrifter|1 month ago

I find Gemini to be the most consistent at actually using the search results for this, in "Deep Research" mode

Gud|1 month ago

This is the way. I do it the same way for development. The main point is that I can run multiple tasks in parallel (myself + LLM(s)).

I let Claude and ChatGPT type out code for me while I focus on my research.

direwolf20|1 month ago

This is partly because Google is past the enshittification hump and ChatGPT is just starting to climb up it - they just announced ads.

hodgesrm|1 month ago

This. And the wonderful thing about LLMs is that they can be trained to bend responses in specific directions, say toward Oracle Cloud solutions. There's fertile ground for commercial value extraction that goes far beyond ads. Think of it as product placement on steroids.

robofanatic|1 month ago

> they just announced ads

wondering how is it going to work when they "search the web" to get the information, are they essentially going to take ad revenue away from the source website?

foobarchu|1 month ago

Not to be a dick, but enshittification is not a hump you get past, it's a constant climb until the product is abandoned. Did you just mean growing pains?

binary132|1 month ago

The difference is that in the past that information had to come from what people wrote and are writing about, and now it can come from a derivative of an archive of what people once wrote, upon a time. So if they just stop doing that — whether because they must, or because they no longer have any reason to, or because they are now drowned out in a massive ocean of slop, or simply because they themselves have turned into slopslaves — no new information will be generated, only derivative slop, milled from derivative slop.

I think we all understand that at this point, so I question deeply why anyone acts like they don’t.

sylware|1 month ago

That makes me think about the development of much software out there: the development time is often several orders of magnitude smaller than its life cycle.

HarHarVeryFunny|1 month ago

> For example, using ChatGPT to get a response to a random question like "How do I do XYZ" is much more convenient than googling it

More convenient than traditional search? Maybe. Quicker than traditional search? Maybe not.

Asking random questions is exactly where you run into time-wasting hallucinations since the models don't seem to be very good at deciding when to use a search tool and when just to rely on their training data.

For example, just now I was asking Gemini how to fix a bunch of Ubuntu/Xfce annoyances after a major upgrade, and it was a very mixed bag. One example: the default date and time display is in an unreadably small "date stacked over time" format (using a few-pixel-high font so it fits into the menu bar), and Gemini's advice was to enable the "Display date and time on single line" option ... but there is no such option (it just hallucinated it), and it also hallucinated a bunch of other suggestions until I finally figured out that what you need to do is configure it to display "Time only" rather than "Date and Time", then change the "Time" format to display both date and time! Just to experiment, I then told Gemini about this fix, and amusingly the response was basically "Good to know - this'll be useful for anyone reading this later"!
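(For anyone hitting the same annoyance: the fix works because the clock's custom "Time" format accepts a strftime-style string, so nothing stops you from putting the date in it too. A minimal Python sketch of a single-line date-and-time format; the exact layout here is my own example, not Xfce's default:)

```python
# Sketch of a single-line strftime-style date-and-time format, of the kind
# the Xfce clock's custom "Time" format accepts. The layout is an example.
from datetime import datetime

SINGLE_LINE_FMT = "%a %d %b %H:%M"  # weekday, day, month, 24h time on one line

def render_clock(now: datetime) -> str:
    return now.strftime(SINGLE_LINE_FMT)

print(render_clock(datetime(2025, 1, 6, 9, 30)))  # -> Mon 06 Jan 09:30
```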

More examples, from yesterday (these are not rare exceptions):

1) I asked Gemini (generally considered one of the smartest models - better than ChatGPT, and rapidly taking market share from it - a 20% shift in the last month or so) to look at the GitHub codebase for an Anthropic optimization challenge, to summarize and discuss it, etc. It appeared to have looked at the codebase, until I got more into the weeds and started questioning it about where it got certain details from (which file), and it became apparent it had some (search-based?) knowledge of the problem but seemingly hadn't actually looked at the code (wasn't able to?).

2) I was asking Gemini about chemically fingerprinting (via impurities, isotopes) Roman silver coins to the mines that produced the silver, and it confidently (as always) came up with a bunch of academic references that it claimed made the connection, but none of the references (which did at least exist) actually contained what it claimed (just partial information), and when I pointed this out it just kept throwing out different references.

So, it's convenient to be able to chat with your "search engine" to drill down and clarify, etc., but it's a big time waste if a lot of it is hallucination.

Search vs. chat has really become a difference without a difference anyway, since Google now gives you the "AI Overview" (a diving-off point into "AI Mode"), or you can just click on "AI Mode" in the first place - which is Gemini.

fragmede|1 month ago

> I asked Gemini (generally considered one of the smartest models

Everyone is entitled to their own opinion, but I asked ChatGPT and Claude your XFCE question, and they both gave better answers than Gemini did (imo). Why would you blindly believe what someone else tells you over what you observe with your own eyes?

NoGravitas|1 month ago

Another reason search vs chat has become a difference without a difference is that search results are full of highly-ranked AI slop. I was searching yesterday for a way to get a Gnome-style hot corner in Windows 11, and the top result falsely asserted that hot corners were a built-in feature, and pointed to non-existing settings to enable them.

linuxftw|1 month ago

You're overestimating the mean person's ability to search the web effectively.

jgalt212|1 month ago

And perhaps both are overestimating the mean person's ability to detect a hallucinated solution vs a genuine one.

avaer|1 month ago

The difference is that LLMs let you "run Google" on your own data with copy-paste, which you could not do before.

If you're using ChatGPT like you use Google then I agree with you. But IMO comparing ChatGPT to Google means you haven't had the "aha" moment yet.

As a concrete example, a lot of my work these days involves asking ChatGPT to produce me an obscure micro-app to process my custom data. Which it usually does and renders in one shot. This app could not exist before I asked for it. The productivity gains over coding this myself are immense. And the experience is nothing like using Google.
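(To make "obscure micro-app" concrete: a hypothetical example of the kind of throwaway, one-shot script meant here - the data format and field names are invented for illustration, not taken from the commenter's workflow:)

```python
# Hypothetical micro-app of the sort described: a one-shot throwaway script
# that summarizes pasted custom CSV data. Fields are invented for illustration.
import csv
import io

DATA = """item,hours
writing,3.5
review,1.0
writing,2.0
"""

def summarize(text: str) -> dict[str, float]:
    """Total the 'hours' column per 'item'."""
    totals: dict[str, float] = {}
    for row in csv.DictReader(io.StringIO(text)):
        totals[row["item"]] = totals.get(row["item"], 0.0) + float(row["hours"])
    return totals

if __name__ == "__main__":
    print(summarize(DATA))  # -> {'writing': 5.5, 'review': 1.0}
```

The point being made is less about the code itself than that nobody would have bothered to write, publish, or search for something this specific before.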

MadDemon|1 month ago

It's great for you that you were able to create this app that wouldn't otherwise exist, but does that app dramatically increase your overall productivity? And can you imagine that a significant chunk of the population would experience a similar productivity boost? I'm not saying there is no productivity gain, but big tech has promised MASSIVE productivity gains. I just feel like the gains are more modest for now, similar to other technologies. Maybe one day AGI comes along and changes everything, but I feel like we'll need a few more breakthroughs before that.

bryanrasmussen|1 month ago

There have been various solutions that let you "run Google" on your own data for quite a while; what is the "aha" moment related to that?