I disagree with the "confidence trick" framing completely. My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily. The productivity gains I'm seeing right now are unprecedented. Even a year ago this wouldn't have been possible, it really feels like an inflection point.
I'm seeing legitimate 10x gains because I'm not writing code anymore – I'm thinking about code and reading code. The AI facilitates both. For context: I'm maintaining a well-structured enterprise codebase (100k+ lines Django). The reality is my input is still critically valuable. My insights guide the LLM, my code review is the guardrail. The AI doesn't replace the engineer, it amplifies the intent.
Using Claude Code Opus 4.5 right now and it's insane. I love it. It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.
Even assuming all of what you said is true, none of it disproves the arguments in the article. You're talking about the technology; the article is about the marketing of the technology.
The LLM marketing exploits fear and sympathy. It pressures people into urgency. Those things can be shown and have been shown. Whether or not the actual LLM-based tools genuinely help you has nothing to do with that.
> The productivity gains I'm seeing right now are unprecedented.
My company just released a year-long productivity chart covering our shift to Claude Code, and overall, developer productivity has plummeted, even though the self-reported productivity survey showed that developers felt it had shot through the roof.
It's fine for a Django app that doesn't innovate and just follows the same patterns for the hundred already-solved problems it addresses.
The line becomes a lot blurrier when you work on non-trivial issues.
A Django app is not particularly hard software; it's hardly software at all, more a conduit from database to screens and vice versa, which has been basic software since the days of terminals. I'm not judging your job; if you get paid well for doing that, all power to you. I had a well-paying Laravel job at some point.
What I'm raising, though, is the fact that AI is not that useful for applications that aren't solving what has been solved 100 times before. Maybe it will be, some day, reasoning so well that it will anticipate and solve problems that don't exist yet. But it will always be an inference over problems already solved.
Glad to hear you're enjoying it; personally, I enjoy solving problems more than the end result.
> It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.
That's not how book printing works, and I'd argue the monk can far more easily create new text and devise new interpretations; and they did, in the margins of books. It takes a long time to prepare one print, but hardly longer to print 100 copies, which is where the value of the printing press comes from. It's not the ease of changing or producing large amounts of text, it's the ease of reproducing it, and since copy/paste exists it is a very poor analogue in my opinion.
I'd also argue the 10x is subject/observer bias, since the subject and the observer are the same person. My experience at this point is that boilerplate is fine with LLMs, and if that's all you do, good for you; otherwise they will hardly speed anything up, as the code is the easy part.
Are you actually reading the code? I have noticed most of the gains go away when you are reading the code output by the machine. And sometimes I do have to fix it by hand, and then the agent is like "Oh, you changed that file, let me fix it".
> My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily
Then why are half of the big tech companies using Microsoft Teams and sending emails with .docx files embedded in them?
Of course marketing matters.
And of course the hard facts also matter, and I don't think anybody is saying that AI agents are purely marketing hype. But regardless, it is still interesting to take a step back and observe what marketing pressures we are subject to.
"My belief in this tech isn't based on marketing hype or someone telling me it's good - it's based on cold reality of what I'm shipping daily."
This may be true. The commenter may "believe in this tech" based on his experimentation with it
But the majority of sentences following this statement ironically appear to be "marketing hype" or "someone telling [us] it's good":
1. "The productivity gains I'm seeing right now are unprecedented."
2. "Even a year ago this wouldn't have been possible, it really feels like an inflection point."
3. "I'm seeing legitimate 10x gains because I'm not writing code anymore - I'm thinking about code and reading code."
4. "Using Claude Code Opus 4.5 right now and it's insane."
5. "It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it."
The "framing" in this blog post is not focused on whether "this tech" actually saves anyone any time or money
It is focused on _hype_, namely how "this tech" is promoted. That promotion could be intentional or unintentional
N.B. I am not "agreeing" with the blog post author or "disagreeing" with the HN commenter, or vice versa. The point I'm making is that one is focused on whether "this tech" works for them and the other is focused on how "this tech" is being promoted. Those are two different things, as other replies have also noted. Additionally, the comment appears to be an example of the promotion (hype) that its author claims is not the basis for his "belief in this tech"
I think the use of the term "belief" is interesting
That term normally implies a lack of personal knowledge:
151 "Belief" gcide "The Collaborative International Dictionary of English v.0.48"
Belief \Be*lief"\, n. [OE. bileafe, bileve; cf. AS. gele['a]fa. See {Believe}.]
1. Assent to a proposition or affirmation, or the acceptance of a fact, opinion, or assertion as real or true, without immediate personal knowledge; reliance upon word or testimony; partial or full assurance without positive knowledge or absolute certainty; persuasion; conviction; confidence; as, belief of a witness; the belief of our senses. [1913 Webster]
Belief admits of all degrees, from the slightest suspicion to the fullest assurance. --Reid. [1913 Webster]
2. (Theol.) A persuasion of the truths of religion; faith. [1913 Webster]
No man can attain [to] belief by the bare contemplation of heaven and earth. --Hooker. [1913 Webster]
4. A tenet, or the body of tenets, held by the advocates of any class of views; doctrine; creed. [1913 Webster]
In the heat of persecution to which Christian belief was subject upon its first promulgation. --Hooker. [1913 Webster]
{Ultimate belief}, a first principle incapable of proof; an intuitive truth; an intuition. --Sir W. Hamilton. [1913 Webster]
Syn: Credence; trust; reliance; assurance; opinion. [1913 Webster]
151 "belief" wn "WordNet (r) 3.0 (2006)"
belief
n 1: any cognitive content held as true [ant: {disbelief}, {unbelief}]
2: a vague idea in which some confidence is placed; "his impression of her was favorable"; "what are your feelings about the crisis?"; "it strengthened my belief in his sincerity"; "I had a feeling that she was lying" [syn: {impression}, {feeling}, {belief}, {notion}, {opinion}]
151 "BELIEF" bouvier "Bouvier's Law Dictionary, Revised 6th Ed (1856)"
BELIEF. The conviction of the mind, arising from evidence received, or from information derived, not from actual perception by our senses, but from. the relation or information of others who have had the means of acquiring actual knowledge of the facts and in whose qualifications for acquiring that knowledge, and retaining it, and afterwards in communicating it, we can place confidence. " Without recurring to the books of metaphysicians' "says Chief Justice Tilghman, 4 Serg. & Rawle, 137, "let any man of plain common sense, examine the operations of, his own mind, he will assuredly find that on different subjects his belief is different. I have a firm belief that, the moon revolves round the earth. I may believe, too, that there are mountains and valleys in the moon; but this belief is not so strong, because the evidence is weaker." Vide 1 Stark. Ev. 41; 2 Pow. Mortg. 555; 1 Ves. 95; 12 Ves. 80; 1 P. A. Browne's R 258; 1 Stark. Ev. 127; Dyer, 53; 2 Hawk. c. 46, s. 167; 3 Wil. 1, s. 427; 2 Bl. R. 881; Leach, 270; 8 Watts, R. 406; 1 Greenl. Ev. Sec. 7-13, a.
Have you seen the 2025 METR report on AI coding productivity?
TLDR: everyone thought AI made people faster, including those who did the task, both before and after doing it. However, AI made people slower at doing the task.
I agree that all the AI doomerism is silly (by which I mean those who are concerned about some Terminator-style machine uprising; the economic issues are quite real).
But it's clear the LLMs have some real value. Even if we always need a human in the loop to prevent hallucinations, it can still massively reduce the amount of human labour required for many tasks.
NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.
Those aren’t mutually exclusive; something can be both useful and a con.
When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.
LLMs are useful for many things, but they’re also not nearly as beneficial and powerful as they’re being sold to be. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other kinds of diseases, eradicate poverty, solve the housing crisis, fix democracy… Those claims are bullshit, thus the con description applies.
https://youtu.be/l0K4XPu3Qhg?t=60
I disagree with this perspective. Human labour is mostly inefficiency that comes from habitual repetition built up through experience. LLMs tend not to improve that. They look like they do, but instead they train the user to replace that repetition with machine repetition.
We had an "essential" reporting function in the business which was done in Excel. All SMEs seem to have little pockets of this. Hours were spent automating the task with VBA to no avail. Then LLMs came in after the CTO became obsessed with it and it got hit with that hammer. This is four iterations of the same job: manual, Excel, Excel+VBA, Excel+CoPilot. 15 years this went on.
No one actually bothered to understand the reason the work was being done and the LLM did not have any context. This was being emailed weekly to a distribution list with no subscribers as the last one had left the company 14 years ago. No one knew, cared or even though about it.
And I see the same in all areas LLMs are used. They are merely pasting over incompetence, bad engineering designs, poor abstractions and low knowledge situations. Literally no one cares about this as long as the work gets done and the world keeps spinning. No one really wants to make anything better, just do the bad stuff faster. If that's where something is useful, then we have fucked up.
Another one. I need to make a form to store some stuff in a database so I can do some analytics on it later. The discussion starts with how we can approach it with ReactJS+microservices+kubernetes. That isn't the problem I need solving. People have been completely blinded on what a problem is and how to get rid of it efficiently.
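For contrast, here is a minimal sketch of roughly what a problem like that actually requires: a small table plus an aggregate query. The schema and field names are hypothetical and the sketch assumes Python with the standard-library sqlite3 module; it is an illustration of scope, not the commenter's actual system.

    import sqlite3
    from datetime import datetime, timezone

    # One table is enough to "store some stuff for analytics later".
    conn = sqlite3.connect("submissions.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS submissions (
            id INTEGER PRIMARY KEY,
            submitted_at TEXT NOT NULL,
            category TEXT NOT NULL,
            amount REAL NOT NULL
        )
    """)

    def store_submission(category, amount):
        # Called from whatever thin form front end collects the values.
        conn.execute(
            "INSERT INTO submissions (submitted_at, category, amount) VALUES (?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), category, amount),
        )
        conn.commit()

    def totals_by_category():
        # The "analytics later" part: a plain aggregate query.
        return conn.execute(
            "SELECT category, COUNT(*), SUM(amount) FROM submissions GROUP BY category"
        ).fetchall()

    store_submission("travel", 120.50)
    print(totals_by_category())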
> it can still massively reduce the amount of human labour required for many tasks.
I want to see some numbers before I believe this. So far my feeling is that the best-case scenario is that it reduces the time needed for bureaucratic tasks, tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. Maybe it is automating tasks from junior engineers, tasks which they need to perform in order to gain experience and develop their expertise. Although I need to see the numbers before I believe even that.
I have a suspicion that AI is not increasing productivity by any meaningful metric that couldn’t be improved by much, much cheaper and easier means.
I don't think that's in any doubt. Even beyond programming, imo especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?
NFTs were cheap enough to produce, and the cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.
This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.
I think anyone who thinks that LLMs are not intelligent in any sense is simply living in denial. They might not be intelligent in the same way a human is intelligent, and they might make mistakes a person wouldn't make, but that's not the question.
Any standard of intelligence devised before LLMs is passed by LLMs relatively easily. They do things that 10 years ago people would have said are impossible for a computer to do.
I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze what my current settings are, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe hardware for its capabilities, and finally offer a menu of solutions, give the commands to implement the solution, and finally test that the solution works perfectly. Can you do that?
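As an aside, the "probe hardware" step is more mundane than it sounds. Below is a minimal sketch of the kind of commands such an agent might run to gather information, assuming a Linux laptop with the usual ALSA/PulseAudio command-line tools installed; the particular command set is illustrative, not what Claude Code actually executes.

    import subprocess

    # Illustrative diagnostic probes for a missing-audio problem on Linux.
    PROBES = {
        "PCI devices (audio controllers among them)": ["lspci"],
        "ALSA playback devices": ["aplay", "-l"],
        "PulseAudio sinks": ["pactl", "list", "short", "sinks"],
    }

    def run(cmd):
        # Capture output so it can be summarized or fed back to the agent.
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
            return result.stdout or result.stderr
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return f"{cmd[0]}: not available"

    for label, cmd in PROBES.items():
        print(f"== {label} ==")
        print(run(cmd).strip())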
I'm vibe coding now, after work. I am able to much more quickly explore the landscape of a problem, and get into and out of dead ends in minutes instead of wasting an evening. At some point I need to go in and fix things, but the benefit of the tool is there. It is like an electric screwdriver vs. a normal one. Sometimes the normal one can do things the electric can't, but hell, if you get an IKEA delivery you want the electric one.
There are dozens of definitions of "intelligence"; we can't even agree what intelligence means in humans, never mind elsewhere. So yes, by some subset of definitions it is intelligent.
But by some subset of definitions my calculator is intelligent. By some subset of definitions a mouse is intelligent. And, more interestingly, by some subset of definitions a mouse is far more intelligent than an LLM.
It works because people have answered similar questions a million times on the internet and the LLMs are trained on it.
So it will work for a while. When the human-generated stuff stops appearing online, LLMs will quickly fall in usefulness.
But that is enough time for the people who might think it is going to last forever to make huge investments into it, and for the AI companies to get away with the loot.
Actually it is the best kind of scam...
EDIT: Another thought. It seems that AI companies actually have an incentive to hinder developments, because new things mean that their model is less useful. With the widespread dependence on AI, they might even get away with manipulating the population into stagnation.
I did that when I was 14 because I had no other choice, damn you SoundBlaster! I didn't get any menu but I got sound in the end.
I don't think conflating intelligence with "what a computer can do" makes much sense though. I can't calculate the Xth digit of pi in less than some time Z, but I'm still intelligent (or I pretend to be).
But the question is not about intelligence; that's a red herring. It's just about utility, and LLMs are useful.
> I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze what my current settings are, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe hardware for its capabilities, and finally offer a menu of solutions, give the commands to implement the solution, and finally test that the solution works perfectly. Can you do that?
Yes, I have worked in small enough companies in which the developers just end up becoming the default IT help desk. I never had any formal training in IT, but most of that kind of IT work can be accomplished with decent enough Google skills. In a way, it worked the same as you and the LLM. I would go poking through settings, run tests to gather info, run commands, and overall just keep trying different solutions until either one worked or it became reasonable to give up. I'm sure many people here have had similar experiences doing the same thing in their own families. I'm not too impressed with an LLM doing that. In this example, it's functionally just improving people's Googling skills.
> The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres and a lot less selling of nuclear-weapon-level-dangerous chatbots.
You're lumping together two very different groups of people and pointing out that their beliefs are incompatible. Of course they are! The people who think there is a real threat are generally different people from the ones who want to push AI progress as fast as possible. The people who say both generally do so out of a need to compromise; there aren't many people who simultaneously hold both views.
I feel this framing in general says more about our attitudes to nuclear weapons than it does about chatbots. The 'Peace Dividend' era which is rapidly drawing to a close has made people careless when they talk about the magnitude of effects a nuclear war would have.
AI can be misused, but it can't be misused to the point an enormously depopulated humanity is forced back into subsistence agriculture to survive, spending centuries if not millennia to get back to where we are now.
I think it's interesting how gamers have developed a pretty healthy aversion to generative AI in video games. Steam and Itch both now make it mandatory that games disclose generative AI use, and recently even beloved Larian Studios came under fire for using AI for concept art. Gamers hate that shit.
I think that's good, but the whole idea that "AI is literally not doing anything", that it's just some mass hallucination, has to die. Gamers argue it takes jobs away from artists; programmers seem to have to argue it doesn't actually do anything for some reason. Isn't that telling?
> programmers seem to have to argue it doesn't actually do anything for some reason.
It's not really hard to see... spend your whole life defining yourself around what you do that others can't or won't, then an algorithm comes along which can do a lot of the same. It directly threatens the ego, one's sense of self-image and self-worth, as well as (perceived) future financial prospects. Along with a heavy dose of "change scary, change bad".
Personally, I think the solution is to avoid building your self-image around material things, and to welcome and embrace new tools which always bring new opportunities, but I can see why the polar opposite is a natural reaction for many.
I think this is probably a trend that will erode with time; even now it's probably just moved underground. How many human artists are using AI for concepts and then laundering the results? Even if it's just idea generation, that's a part of the process. If it speeds up throughput, then maybe that's fewer jobs in the long run.
And if AI assisted products are cheaper, and are actually good, then people will have to vote with their wallets. I think we’ve learned that people aren’t very good at doing that with causes they claim to care about once they have to actually part with their money.
IDK, I think it's at least reasonable to look at the fact that there isn't a ton of new software available out there and conclude "AI isn't actually making software creation any faster". I understand the counterarguments to that but it's hardly an unreasonable conclusion.
I haven't gamed much in the last few years due to severe lack of time so I'm out of touch, but I used to play a lot of CRPGs and I always dreamed of having NPCs who could talk and react beyond predefined scripted lines. This seems to finally be possible thanks to LLMs and I think it was desired by many (not only me). So why are gamers not excited about generative AI?
As long as AI is only used for code (which it is, surely, almost everywhere), gamers don't give a damn. Also, Larian didn't use it for concept art; they used it to generate the first mood board to give to the concept artist as a guideline. And then there is Ark Raiders, which uses AI for all its VO, and that game is a massive hit.
This is just a breathless bubble; the wider gaming audience couldn't give two shits if studios use AI or not.
That is consumer choice; a consumer has a right to know whether something is made using a tech which could make them unemployed. I wouldn't pay $70, or even $10, for a game that I know someone didn't put effort into.
I think the costs of LLMs (huge energy hunger, people being fired because of them, a hostile takeover of human creativity, and computer hardware rising in cost exponentially) are by far larger than the uses (generating videos of fish with arms, programming slightly faster, writing slop emails to talented people).
I know LLMs won't vanish again magically, but I wish they would every time I have to deal with their output.
"AI safety" groups are part of what's described here: you might assume from the general "safety" label that organizations like PauseAI or ControlAI would focus things like data center pollution, the generation of sexual abuse material, causing mental harm, or many other things we can already observe.
But they don't. Instead, "AI safety" organizations all appear to exclusively warn of unstoppable, apocalyptic, and unprovable harms that seem tuned exclusively to instill fear.
We should do both and it makes sense that different orgs have different focuses. It makes no sense to berate one set of orgs for not working on the exact type of thing that you want. PauseAI and ControlAI have each received less than $1 million in funding. They are both very small organizations as far as these types of advocacy non-profits go.
I don't think it's true. It is probably overhyped but it is legitimately useful. Current agents can do around 70% of coding stuff I do at work with light supervision.
That’s exactly what a con is: selling you something as being more than what it actually is. If you agree it’s overhyped by its sellers, you agree it’s a con.
> Current agents can do around 70% of coding stuff I do
LLMs are being sold as capable of significantly more than coding. Focusing on that singular aspect misses the point of the article.
Considerations around current events aside, what exactly is the supposed "confidence trick" of mechanical or electronic calculators? They're labor-saving devices, not arbiters of truth, and as far as I can tell, they're pretty good at saving a lot of labor.
Reading AI-denier articles in 2026 is almost as boring as reading crypto-booster articles was 10 years ago. You may not like LLMs, you may not want LLMs, but pretending they're not doing anything clever or useful is bizarre, however flowery you make your language.
> Reading AI-denier articles in 2026 is almost as boring as reading crypto-booster articles was 10 years ago.
That’s quite a funny take, because I bet you someone will have made that same argument to criticise “crypto-deniers”.
> pretending they're not doing anything clever or useful
That isn’t at all the argument of the article. No one is claiming LLMs are completely useless or that they aren’t interesting technology. The critique is they’re being sold as way more than what they are or could be, that that has tangible negative consequences we can already feel, and the benefits don’t offset it.
Most of what I've been reading on either side of the argument is reductive. It's possible to have a take based on one's perspective and experience but it's impossible (at this time) to generalize things more broadly. I think what most people feel is multi-faceted: efficiency expectations from "leaders", job change inevitability (perceived or real), economic impact (should things not go well), loss of identity (am I a programmer, engineer, manager of things?), and several others. The discussions on the multiplicative effect of LLMs are being framed as a false dichotomy when it's far more complicated and nuanced.
> We should be afraid, they say, making very public comments about “P(Doom)” - the chance the technology somehow rises up and destroys us.
> This has, of course, not happened.
This is so incredibly shallow. I can't think of even a single doomer who ever claimed that AI would destroy us by now. P(doom) is about the likelihood of it destroying us "eventually". And I haven't seen anything in this post or in any recent developments to make me reduce my own p(doom), which is not close to zero.
Here are some representative values: https://pauseai.info/pdoom
A "confidence trick" doesn't generate a working program on demand -- I just did that yesterday. In a blink of an eye. It worked. It was clean (passed linter).
It wasn't very complex, yet it would have taking me several minutes to type it myself. Again: two sentence description -> enter -> working program/script.
Plus: with simple feedback the program is modified accordingly (and working).
While there might be open issues with AI, those AI companies are providing *far* more value than null.
> GPT-3 was supposedly so powerful OpenAI refused to release the trained model because of “concerns about malicious applications of the technology”. [...] This has, of course, not happened.
What parallel world are they living in? Every single online platform has been flooded with AI generated content and had to enact counter measures, or went the other way, embraced it and replaced humans with AI. AI use in scams has also become common place.
Everything they warned about with the release of GPT‑2 did in fact happen.
"You're absolutely right!"
Those models are geared towards continuing the text. My impression is that without that, the model would disagree much more in a chat/conversation.
Yeah, there is overhyped marketing, but at this point AI has revolutionized software engineering and is writing the majority of code worldwide, whether you like it or not, and it is still improving.
> “…LLM vendors [are responsible for the message?] We should be afraid […] The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres…”
Let’s not forget these innovations are on the heels of COVID. Strong, swift action by government, industry, and individuals against a deadly pathogen is “controversial”. Even if killer AI were here, twice shy…
I’m angry about a lot of things right now, but LLM “marketing” (and inadequate reporting which turns to science fiction instead of science) is not one of them. The LLM revolution is getting shoehorned into this Three Card Monte narrative, and I don’t see the utility.
The criticisms of LLM promise and danger are part of the zeitgeist. If firms are playing off of anything, I bet it’s that, and not an industry-wide conspiracy to trick the public and customers. Advertising and marketing meets people where they’re at, and “imagines” where they want to go, all wrapped up with the product. It doesn’t make the product frightening. It’s the same for all manner of dangerous technologies—guns, nuclear energy, whatever. The product is the solution to the fear.
> “The LLMs we have today are famously obsequious. The phrase “you’re absolutely right!” may never again be used in earnest.”
Hard NO. I get it, the language patterns of LLMs are creepy, but it’s not bad usage. So, no.
I can handle the cognitive dissonance of computer algorithms spewing out anthropomorphic phrasing without deciding that I, as a human being, can no longer in humility and honesty tell someone else they’re right and I was wrong.
What? The comparison to the confidence trick from 400 years ago already stops at the second point? Why title the article that way if you are not going to bring up any parallels beyond... the extremely weak link of "building trust"?
You have not actually made clear how mechanical calculators were a scam.
Ironically, this article feels like it was written by an LLM. Just a baseless opinion.
> Simply put, these companies have fallen for a confidence trick. They have built on centuries of received wisdom about the efficacy and reliability of computers, and have been drawn in by highly effective salespeople selling scarcely-believable technological wonders.
Calculators are ok, but LLMs are not calculators.
"People are falling in love with LLMs" and "P(Doom) is fearmongering" so close to each other is some cognitive dissonance.
The 'are LLMs intelligent?' discussion should be retired at this point, too. It's academic, the answer doesn't matter for businesses and consumers; it matters for philosophers (which everyone is even a little bit). 'Are LLMs useful for a great variety of tasks?' is a resounding 'yes'.
ManuelKiessling|1 month ago
It’s like arguing that the piano in the room is out of tune and not bothering to walk over to the piano and hit its keys.
Frieren|1 month ago
How long have you been in the industry?
This does not seem like a revolution compared with database standardization, the abandonment of assembly for most coding, the introduction of game engines, etc.
I see a lot of hype for LLMs from people who do not have the experience to compare them to anything else.
mpweiher|1 month ago
Self-reports on this have been remarkably unreliable.
satisfice|1 month ago
How do I know? Because I am testing it, and I see a lot of problems that you are not mentioning.
I don’t know if you’ve been conned or you are doing the conning. It’s at least one of those.
yomismoaqui|1 month ago
It was something like this:
"We think we are building Ultron but really we are building the Iron Man suit. It will be a technology to amplify humans, not replace them"
energy123|1 month ago
How do you avoid this turning into spaghetti? Do you understand/read all the output?
ACCount37|1 month ago
There is a finite number of incremental improvements left between the performance of today's LLMs and the limits of human performance.
This alone should give you second thoughts on "AI doomerism".
SwoopsFromAbove|1 month ago
My pocket calculator is not intelligent. Nor are LLMs.
frozenseven|1 month ago
By an extremely loud group of activists, as always. I'd wager most gamers don't care one way or the other.
ACCount37|1 month ago
The catastrophic AI risk isn't "oh no, people can now generate pictures of naked women".
Meneth|1 month ago
And that's the anthropic fallacy. In the worlds where it has happened, the author is dead.
self_awareness|1 month ago
Hm... is it wrong to think like this?