LLMs and their capabilities are very impressive and definitely useful. The productivity gains often seem to be smaller than intuitively expected though. For example, using ChatGPT to get a response to a random question like "How do I do XYZ" is much more convenient than googling it, but the time savings are often not that relevant for your overall productivity. Before LLMs you could usually find the information quickly anyway, and even a 10x speed-up has little impact on your overall productivity when the time it took was already negligible.
palmotea|1 month ago
I'd even question that. The pre-LLM solutions were in most cases better. Searching a maintained database of curated and checked information is far better than LLM output (which is possibly bullshit).
Ditto for software engineering. In software, we have things called libraries: you write the code once, test it, then you trust it and can use it as many times as you want, forever, for free. Why use LLM-generated code when you have a library? And if you're asking for anything complex, you're probably just getting a plagiarized and bastardized version of some library anyway.
The only place LLMs shine is in simple, lazy "mash this up so I don't have to think about it" cases. And sometimes it might be better to just do it yourself and develop your own skills instead of using an LLM.
danudey|1 month ago
Eventually it gave up and commented out all the code it was trying to make work. Took me less than two minutes to figure out the solution using only my IDE's autocomplete.
It did save me time overall, but it's definitely not the panacea that people seem to think it is and it definitely has hiccups that will derail your productivity if you trust it too much.
CyberDildonics|1 month ago
Now instead of the wikipedia article you are reading the exact same thing from google's home page and you don't click on anything.
skybrian|1 month ago
It’s for queries that are unlikely to be satisfied in a single search. I don’t think it would be a negligible amount of time if you did it yourself.
Incipient|1 month ago
On the other hand, where I think LLMs are going to excel is when you roll the dice, trust the output, and don't validate it. If it works out, yay - you're ahead of everyone else who did bother to validate.
I think this is how vibe-coded apps are going to go. If the app blows up, shut down the company and start a new one.
Gud|1 month ago
I let Claude and ChatGPT type out code for me, while I focus on my research.
robofanatic|1 month ago
Wondering how this is going to work when they "search the web" to get the information: are they essentially going to take ad revenue away from the source websites?
binary132|1 month ago
I think we all understand that at this point, so I question deeply why anyone acts like they don’t.
HarHarVeryFunny|1 month ago
More convenient than traditional search? Maybe. Quicker than traditional search? Maybe not.
Asking random questions is exactly where you run into time-wasting hallucinations, since the models don't seem to be very good at deciding when to use a search tool and when to just rely on their training data.
For example, just now I was asking Gemini how to fix a bunch of Ubuntu/Xfce annoyances after a major upgrade, and it was a very mixed bag. One example: the default date and time display is in an unreadably small "date stacked over time" format (using a few-pixel-high font so this fits into the menu bar). Gemini's advice was to enable the "Display date and time on single line" option ... but there is no such option (it just hallucinated it), and it hallucinated a bunch of other suggestions until I finally figured out what you need to do: configure it to display "Time only" rather than "Date and Time", then change the "Time" format to display both date and time. Just to experiment, I then told Gemini about this fix, and amusingly the response was basically "Good to know - this'll be useful for anyone reading this later"!
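(For anyone hitting the same annoyance: the Xfce clock's custom "Time" format accepts strftime-style codes, so you can preview a candidate single-line format with date(1) before pasting it into the clock settings. The exact format string below is just an illustration, not the dialog's default.)

```shell
# The Xfce clock plugin's custom format field is strftime-based, so
# previewing the format string with date(1) shows roughly what the
# panel clock would render on a single line.
date +"%a %d %b %H:%M"
```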
More examples, from yesterday (these are not rare exceptions):
1) I asked Gemini (generally considered one of the smartest models - better than ChatGPT, and rapidly taking market share from it, a roughly 20% shift in the last month or so) to look at the GitHub codebase for an Anthropic optimization challenge and to summarize and discuss it. It appeared to have looked at the codebase, until I got more into the weeds and questioned it about where it got certain details from (which file). It then became apparent that it had some (search-based?) knowledge of the problem, but seemingly hadn't actually looked at the code (wasn't able to?).
2) I was asking Gemini about chemically fingerprinting (via impurities, isotopes) Roman silver coins to the mines that produced the silver. It confidently (as always) came up with a bunch of academic references that it claimed made the connection, but none of the references (which did at least exist) actually contained what it claimed (just partial information), and when I pointed this out it just kept throwing out different references.
So, it's convenient to be able to chat with your "search engine" to drill down and clarify, etc., but a big waste of time if a lot of it is hallucination.
Search vs chat has really become a distinction without a difference anyway, since Google now gives you the "AI Overview" (a diving-off point into "AI Mode"), or you can just click on "AI Mode" in the first place - which is Gemini.
fragmede|1 month ago
Everyone is entitled to their own opinion, but I asked ChatGPT and Claude your XFCE question, and they both gave better answers than Gemini did (imo). Why would you blindly believe what someone else tells you over what you observe with your own eyes?
avaer|1 month ago
If you're using ChatGPT like you use Google then I agree with you. But IMO comparing ChatGPT to Google means you haven't had the "aha" moment yet.
As a concrete example, a lot of my work these days involves asking ChatGPT to produce an obscure micro-app for me to process my custom data, which it usually does, rendered in one shot. This app could not exist before I asked for it. The productivity gains over coding it myself are immense, and the experience is nothing like using Google.