The postscript was pretty sobering. It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise. This is a pretty depressing place to be, because most emerging technologies provide us with exciting new possibilities, whereas this technology seems exciting only for management stressed about payroll.
It's true that the technology currently works as an excellent information gathering tool (which I am happy to be excited about), but that doesn't seem to be the promise at this point; the promise is about replacing human creativity with artificial creativity, which is certainly new and unwelcome.
> It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise.
Same here, and I think it's because I feel like a craftsman. I thoroughly enjoy the process of thinking deeply about what I will build, breaking down the work into related chunks, and of course writing the code itself. It's like magic when it all comes together. Sometimes I can't even believe I get to do it!
I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code.
I never knew there was an entire subclass of people in my field who don't want to write code.
I want to write code.
I'm right there with you, and it's been my core gripe since ChatGPT burst onto the stage. Believe it or not, my environmental concerns came about a year later, once we had data on how datacenters were being built and their resource consumption rates; I had no idea how suddenly and violently things had exploded, and that alone gave me serious pause about where things are going.
In my heart, I firmly believe in the ability of technology to uplift and improve humanity - and have spent much of my career grappling with the distressing reality that it also enables a handful of wealthy people to have near-total control of society in the process. AI promises a very hostile, very depressing, very polarized world for everyone but those pulling the levers, and I wish more people evaluated technology beyond the mere realm of Computer Science or armchair economics. I want more people to sit down, to understand its present harms, its potential future harms, and the billions of people whose lives it will profoundly and negatively impact under current economic systems.
It's equal parts sobering and depressing once you shelve personal excitement or optimism and approach it objectively. Regardless of its potential as a tool, regardless of the benefit it might bring to you, your work day, your productivity, your output, your ROI, I desperately wish more people would ask one simple question:
Is all of that worth the harm I'm inflicting on others?
There are a few areas where I have found LLMs to be useful (anything related to writing code, and as a search engine), and then just downright evil and upsetting in every other instance of use, especially as a replacement for human creativity and personal expression.
What I don't understand is, will every company really want to be beholden to some AI provider? If they get rid of the workers, all of a sudden they are on the losing end of the bargaining table. They have incredible leverage as things stand.
Don't worry that much about 'AI' specifically. LLMs are an impressive piece of technology, but at the end of the day they're just language predictors - and bad ones a lot of the time. They can reassemble and remix what's already been written but with no understanding of it.
It can be an accelerator - it gets extremely common boilerplate text work out of the way. But it can't replace any job that requires a functioning brain, since LLMs do not have one - nor ever will.
But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else.
I dunno, I might be getting old, but I think the idea that people absolutely need a job to stay sane betrays a lack of imagination. Of course getting paid just enough for survival is pretty depressing, but if I can have healthy food, a spacious place to live, the ability to travel, and all the free time I want, I'd be absolutely happy without a job. Maybe I'd even be writing code, just not commercially useful code.
This artificial creativity will only go so far, because it's a simulated semblance of human creativity, as much as could be gathered from training data. If not continually refueled by new training data, it will run out sooner or later. And then it will get boring really quickly.
https://www.youtube.com/watch?v=_zfN9wnPvU0
Drives people insane:
https://www.youtube.com/watch?v=yftBiNu0ZNU
And LLMs are economically and technologically unsustainable:
https://www.youtube.com/watch?v=t-8TDOFqkQA
These have already proven it will be unconstrained if AGI ever emerges.
https://www.youtube.com/watch?v=Xx4Tpsk_fnM
The LLM bubble will pass, as it is already losing money with every new user. =3
I think it just reflects the sort of businesses these companies are versus others. Of course we worry about this in the context of companies that dehumanize us, reduce us to line-item costs, and seek to eliminate us.
Now imagine a different sort of company. A little shop where the owner's first priority is actually to create good jobs for their employees that afford a high quality life. A shop like that needn't worry about AI.
It is too bad that we put so much stock as a society in businesses operating in this dehumanizing capacity instead of ones that are much more like a family unit trying to provide for each other.
> This strikes me as paradoxical given my sense that one of AI’s main impacts will be to increase productivity and thus eliminate jobs.
The allegation that an increase in productivity will reduce jobs has been proven false by history over and over again; it's so well known it has a name, the "Jevons Paradox" or "Jevons Effect"[0].
> In economics, the Jevons paradox (sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use [...] results in overall demand increasing, causing total resource consumption to rise.
The "increase in productivity" does not inherently result in fewer jobs; that's a false equivalence. It's likely just as false in 2025 with AI and ChatGPT as it was in 1915 with the assembly line and the Model T. This notion persists because, as we go through inflection points where something new changes market dynamics, there is often a GROSS loss (in the economic sense) of jobs that precedes a NET gain overall as the market adapts - but that's not much comfort to people who lost, or are worried about losing, their jobs due to that inflection point changing the market.
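To make the Jevons mechanism concrete, here's a toy calculation in Go (all numbers are illustrative assumptions of mine, not figures from the Wikipedia article or the memo): efficiency makes each unit 10x cheaper, but if the cheaper price unlocks 25x the demand, total consumption still rises.

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only: one "unit" is some quantum of output
	// (a feature, a report, a rendered image).
	costPerUnit := 1.0 // dollars per unit before the efficiency gain
	unitsDemanded := 100.0

	// Technology makes each unit 10x cheaper...
	newCostPerUnit := costPerUnit / 10

	// ...and the lower price unlocks uses that weren't viable before.
	// If demand grows more than 10x (here: 25x), total spend still rises.
	newUnitsDemanded := unitsDemanded * 25

	fmt.Printf("spend before: $%.0f\n", costPerUnit*unitsDemanded)       // $100
	fmt.Printf("spend after:  $%.0f\n", newCostPerUnit*newUnitsDemanded) // $250
}
```

The sketch only shows that "more efficient per unit" and "more total consumption" can coexist; whether demand for software grows enough to absorb the productivity gain is exactly the open question for jobs.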
The two important questions in that context for individuals in the job market during those inflection points (like today) are: "How difficult is it to adapt (to either not lose a job, or to benefit from or be a part of that net gain)?" and "Should you adapt?" After all, the skillsets the market demands and the skillsets it supplies are not objectively quantifiable things; the presence of speculative markets is proof that this is subjective, not objective. Anyone who's ever been involved in the hiring process knows just how subjective this is. Which leads me to:
> the promise is about replacing human creativity with artificial creativity, which is certainly new and unwelcome.
Disagree that that's what the promise is about. That IS happening, I don't disagree there, but that's not the promise that corporate is so hyped about. If we're being honest and not trying to blow smoke up people's ass to artificially inflate "value," AI is fundamentally about being more OBJECTIVE than SUBJECTIVE with regard to the costs and resources of labor and its outputs. Anyone who knows what OKRs are and has been subject to a "performance review" at a self-professed "data-driven company" knows how much modern corporate America, especially the tech market, loves its "quantifiables." It's less about how much better AI can allegedly do something, and more about the promise of how much "better" it can be quantified versus human labor. As long as AI has at least SOME proven utility (which it does), this promise of quantifiables combined with its other inherent potential benefits (doesn't need time off, doesn't sleep, doesn't need retirement/health benefits, no overtime pay, no regulatory limitations on hours worked, no "minimum wage") means that so long as the monied interests perceive it as continuing to improve, they can dismiss its inefficiencies or ineffectiveness in X or Y with the promise of its potential to overcome them eventually.
That's the fundamental reason why people are so concerned about AI replacing humans. Especially when you consider that one of the things AI excels at is quickly delivering an answer with confidence (people are impressed by speed and are suckers for confidence), and another big strength is its ability to deal with repetitive minutiae in known and solved problem spaces (a mainstay of many office jobs). It can also bullshit with the best of them, fluff your ego as much as you want (and even when you don't), and almost never says "No" or "You're wrong" unless you ask it to.
In other words, it excels at the performative and repetitive bullshit and blowing smoke up your boss' ass and empowers them to do the same for their boss further up the chain, all while never once ruffling HR's feathers.
Again, it has other, much more practical and pragmatic utility too; it's not JUST a bullshit oracle, but it IS a good bullshit oracle if you want it to be.
[0] https://en.wikipedia.org/wiki/Jevons_paradox
A lot of the debate here swings between extremes. Claims like “AI writes most of the code now” are obviously exaggerated, especially coming from a nontechnical author, but acting like any use of AI is a red flag is just as unrealistic. Early stage teams do lean on LLMs for scaffolding, tests and boilerplate, but the hard engineering work is still human. Is there a bubble? Sure, valuations look frothy. But like the dotcom era, a correction doesn’t invalidate the underlying shift; it just clears out the noise. The hype is inflated, the technology is real.
I think some wires got crossed. My point wasn’t that LLMs can’t produce useful infra or complex code; clearly they can, as many examples here show.
It’s just that neither extreme narrative (“AI writes everything now” vs. “you can’t trust it for anything serious”) reflects how teams actually work.
LLMs are great accelerators for boilerplate, declarative configs, and repetitive logic, but they don’t replace engineering judgement; they shift where that judgement is applied.
That’s why I see AI as real, transformative tech inside an overhyped investment cycle, not as magic that removes humans from the loop.
> Early stage teams do lean on LLMs for scaffolding, tests and boilerplate, but the hard engineering work is still human.
I no longer believe this. A friend of mine just did a stint at a startup doing fairly sophisticated finance-related coding, and LLMs allowed them to bootstrap a lot of new code, get it up and running on scalable infra with Terraform, onboard new clients extremely quickly, and write docs for them based on specs and plans elaborated by the LLMs.
This last week I extended my company's development tooling by adding a new service in a k8s cluster with a bunch of extra services, shared variables and configmaps, and new helm charts that did exactly what I needed after asking nicely a couple of times. I have zero knowledge of k8s, helm or configmaps.
The thing to remember about the dotcom era was that while there were a lot of bad companies at the time with a lot of clueless investors behind them, quite a few companies made it through the implosion of that bubble and then prospered. Amazon, Google, eBay, etc. are still around.
More importantly, the web is now dominant for enterprise SaaS applications, which is a category of software that did not really exist before the web. And the web post–dot-com bubble spawned a lot of unicorns.
In short, there was an investment bubble. But the core tech was fine.
AI feels like one of those things where the tech is similarly transformational (even more so, actually). It’s another investment bubble predicated on the price of GPUs, which is mostly making Nvidia very rich right now.
Right now the model makers are getting most of the funding and then funneling non-trivial amounts to Nvidia (and their competitors). But actually the value creation is in applications using the models these companies create. And the innovation for that isn’t coming from the likes of Anthropic, OpenAI, Mistral, X.ai, etc. They are providing core technology, but they seem to be struggling to do productive things in terms of UX and use cases. Most of the interesting things in this space are coming from smaller companies figuring out how to use the models these companies produce. Models and GPUs are infrastructure, not end-user products.
And with the rise of open-source models, open algorithms, and exponentially dropping inference costs, the core infrastructure technology is not as much of a moat as it may seem to investors. OpenAI might be well funded, but their main UI (ChatGPT) is surprisingly limited and riddled with bugs. That doesn’t look like the polished work of a company that knows what they are doing. It’s all a bit hesitant and copycat. It’s never going to be a magic solution to everyone’s problems.
From where I’m sitting, there is clear untapped value in the enterprise space for AI to be used. And it’s going to take more than a half-assed chat UI to unlock that. It’s actually going to be a lot of work to build all of that. Coding tools are, so far, the most promising application of reasoning models. It’s easy to see how that could be useful in the context of ERP/manufacturing, CRM, traditional office applications, and the financial world.
Those each represent verticals with many established players trying to figure out how to use all this new stuff — and loads more startups eager to displace them. That’s where the money is going to be post-bubble. We’ve seen nothing yet. Just like after the dot-com bubble burst, all the money is going to be in new applications on top of the new infrastructure. It’s untapped revenue. And it’s not going to be about buying GPUs or offering benchmark-beating models. That’s where all the money is going currently. That’s why it is a bubble.
What a wild and speculative claim. Is there any source for this information?
At $WORK, we have a bot that integrates with Slack that sets up minor PRs. Adjusting tf, updating endpoints, adding simple handlers. It does pretty well.
Also, in a case of pure prose-to-code, Claude wrote up a concurrent data migration utility in Go. When I reviewed it, it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess that could not be gracefully killed. I would have written it faster by hand, no doubt. I think I know more now and the calculus may be shifting on my AI usage. However, the following day, my colleague needed a nearly identical temporary tool. A 45 minute session with Claude of "copy this thing but do this other stuff" easily saved them 6-8 hours of work. And again, that was just talking with Claude.
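For reference, the "gracefully killed" part is a standard Go pattern. Here's a minimal sketch of what the tool arguably should have looked like (function and channel names are hypothetical, not from the actual utility): a signal-aware context, a producer that owns the channel, and a WaitGroup-managed worker pool.

```go
package main

import (
	"context"
	"fmt"
	"os/signal"
	"sync"
	"syscall"
)

// migrate stands in for the real per-row copy/transform step (hypothetical).
func migrate(row int) { _ = row }

func main() {
	// ctx is cancelled on Ctrl-C / SIGTERM, which is what makes the tool killable.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	rows := make(chan int)

	// Producer: closes the channel exactly once, and stops early on cancellation.
	go func() {
		defer close(rows)
		for i := 0; i < 1000; i++ {
			select {
			case rows <- i:
			case <-ctx.Done():
				return
			}
		}
	}()

	// Worker pool: the WaitGroup ensures in-flight rows finish before exit.
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for row := range rows {
				migrate(row)
			}
		}()
	}
	wg.Wait()
	fmt.Println("done (or cancelled cleanly)")
}
```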
I am doing a hybrid approach, really. I write much of my scaffolding, I write example code, I modify quick things the AI made to be more like I want, I set up guard rails and some tests, then have the AI go to town. Results are mixed but still trending up.
FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.
> Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.
Wow, finance people certainly don't understand programming.
I completely agree. This guy is way outside his area of expertise. For those unaware, Howard Marks is a legendary investment manager with a decades-long impressive track record. Additionally, these "insights" letters are also legendary in the money management business. Personally, I would say his wisdom is one notch below Warren Buffett. I am sure he is regularly asked (badgered?) by investors what he thinks about the current state and future of AI (LLMs) and how it will impact his investment portfolio. The audience of this letter is investors (real and potential), as well as other investment managers.
Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.
Which is great! But it's not a +1 for AI, it's a -1 for them.
I have heard many software developers confidently tell me "pilots don't really fly the planes anymore" and, well, that's patently false, but also jetliner autopilots do handle much of the busy work during cruise, and sometimes during climb-out and approach. And they can sometimes land themselves, but not efficiently enough for a busy airport.
Is it not sort of implied by the stats later: "Revenues from Claude Code, a program for coding that Anthropic introduced earlier this year, already are said to be running at an annual rate of $1 billion. Revenues for the other leader, Cursor, were $1 million in 2023 and $100 million in 2024, and they, too, are expected to reach $1 billion this year."
Surely that revenue is coming from people using the services to generate code? Right?
I'm on a team like that, and I see it happening in more and more companies around me. Maybe "many" does the heavy lifting in the quoted text, but it is definitely happening.
> The key is to not be one of the investors whose wealth is destroyed in the process of bringing on progress.
They are a VC group. Financial folks. They are working largely with other people's money. They simply need not hold the bag to be successful.
Of course they don't care if it's a bubble or not; at the end of the day, they only have to make sure they aren't holding the bag when it all implodes.
Yes and no. There is the infamous quote from Microsoft about 30%(?) of their code being written by AI now. And technically, it's probably not that wild a claim in certain areas. AI is very good at barfing up common, popular patterns, and companies have a huge amount of patterned software, like UIs, tests, documentation, or marketing fluff. So it's quite easy to "outsource" such grunt-work if the AI is at the necessary level.
But to say that they don't write any code at all is really a stretch. Maybe I'm not good enough at AI-assisted and vibe coding, but code quality always seems to drop off really hard the moment one steps a bit outside the common patterns.
Wow, reading these comments, I feel like I've entered a parallel reality. My job involves implementing research ML and I use it literally all the time; it's very fascinating to see how many have such strong negative reactions. As long as you are good at reviewing code, spec-ing carefully, and making atomic changes - why would you not be using this basically all the time?
Seen it first hand. Scan your codebase, plan an extension or a rewrite or both, iterate with some hand-holding, and off you go. And it was not even an advanced developer driving the feature (which is concerning).
Maybe there's an AI agent/bot someone wrote that has the prompt:
> Watch HN threads for sentiments of "AI Can't Do It". When detected, generate short "it's working marvelously for me actually" responses.
Probably not, but it's a fun(ny) imagination game.
The question is: can SV extract several trillion dollars out of the global economy over the next few years with the help of LLMs and GPUs? And the follow-up question: will LLMs help grow the global economy by this amount? Because if not, then extracting the money will lead to problems in other parts of the world. And last but not least: will LLMs, given enough money to train them on ever bigger data sets, magically turn into AGI?
IMHO for now LLMs are just clever text generators with excellent natural language comprehension. Certainly a change of many paradigms in SWE. Is it also a $10T extra for the valley?
I've enjoyed Howard Marks's writing/thinking in the past, but this is clearly a person who thinks they understand the topic but doesn't have the slightest clue. Someone trying to be relevant/engaged before really thinking about what is fact vs. fiction.
> I haven’t met anyone who doesn’t believe artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.
You’re trying to weigh in on this topic and you didn’t even _talk_ to a bear?
The number of people who think that something having a few useful use cases is incompatible with a bubble is staggeringly high. Dot-com was a bubble, and yet we still use the internet widely today. Real estate was a bubble, and people still need a place to live and work.
Just because YOU find the technology helpful, useful, or even beneficial for some use cases does NOT mean it hasn't been overvalued. This has been the case for every single bubble, including the Dutch Tulip mania.
The memo itself is an excellent walk through historical bubbles, debt, financing, technological innovation, and much more, all written in a way that folks with a cursory knowledge of economics can reasonably follow along with.
A+, excellent writing.
The real meat is in the postscript though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where employment not only provides structure and challenge for growth, but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many other, smarter "AI Doomers" than myself, have been asking for quite some time, and nobody has been able or willing to answer. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see; we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes - "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and are brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concern of so many, many of us - is that the current systems and incentives in place lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact nobody can or will speaks volumes.
There is said to be $8 trillion earmarked to build 100 AI data centers[1]. At a 10% hurdle rate, the industry will have to generate $800 billion a year to pay it off, while GPUs are replaced every three years by faster chips.
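As a quick back-of-envelope in Go (a sketch using only the figures above; the 50% GPU share is purely my assumption, not from the source):

```go
package main

import "fmt"

func main() {
	capex := 8e12  // $8T said to be earmarked for AI data centers [1]
	hurdle := 0.10 // 10% hurdle rate

	// Revenue needed each year just to clear the hurdle on that capital.
	fmt.Printf("hurdle return: $%.0fB/yr\n", capex*hurdle/1e9) // 800

	// If GPUs turn over every ~3 years, straight-line replacement cost comes
	// on top. Assume half the capex is GPUs (my assumption, not sourced).
	gpuShare := 0.5
	fmt.Printf("GPU replacement: $%.0fB/yr\n", capex*gpuShare/3/1e9) // ~1333
}
```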
If you watch Ilya's recent interview: “it’s very hard to discuss AGI, because no one knows how to build it yet”[2].
[1] https://finance.yahoo.com/news/ibm-ceo-says-no-way-103010877... [2] https://youtu.be/aR20FWCCjAs?si=DEoo4WQ4PXklb-QZ
> during the internet bubble of 1998-2000, the p/e ratios were much higher
That is true, the current players are more profitable, but the weight in SPX percentages looks to be much higher today.
I am shocked at the discourse over this. I'm either ahead of the curve or behind, but it's undeniable that AI can and does write most of the code. It's not trivial: if you spend some time and dig deep into simple-appearing web apps like https://microphonetest.com or https://internetspeed.my, you'd be amazed at how fast they went from MVP to full feature. It would be naive to think anyone could pull off something like that in hours.
One thing I don't hear people talking about much is how AI is going to make money in any way other than cutting employment.
With the internet, and especially with the internet becoming accessible to anyone anywhere in the world in the late 2000s and early 2010s, that growth was more obvious to me. I don't see where this occurs with AI. I don't see room for "growth"; I see room for cutting. We were already connected before; globalization seems to have peaked in that sense.
> Coding, which we called “computer programming” 60 years ago, is the canary in the coal mine in terms of the impact of AI. In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them. Coding performed by AI is at a world-class level, something that wasn’t so just a year ago. According to my guide here, “There is no speculation about whether or not human replacement will take place in that vertical.”
I'm starting to believe that AI coding optimism/pessimism maps to how much one actually cares about system longevity.
If a given developer just takes on board the demands for speed from the business and/or does not care about long-term maintainability (and I mean hey, some businesses foster that, and scaling quickly is important in many cases), then I can totally understand why they would embrace AI agents.
If you care about theory building, and domain-driven design, and making a system comprehensible enough to extend in a year or two's time, then I can understand the resistance to letting the AI rip. I admit to falling in this camp.
Am I off the mark here? I'd really like to hear from people who care about the long term who also let agents run relatively wild.
I've had some success in using Claude Code, with caveats.
To give some context - I started developing a tactical RPG. I had an MVP prior to using Claude Code. I continued to work on the project, but lost motivation due to work burnout and prioritizing other hobbies.
I gave Claude Code a try to see whether it's any use. It helped more than I expected it to - it helped me produce something while dealing with burnout by building on the MVP I developed prior to AI assisted development.
The main issues I ran into were:
1) A lot of effort goes into reviewing the output. The main difference from peer review is that there's quicker feedback.
2) It throws out some absolutely wild solutions sometimes. It built on my existing architecture, so it was easier to catch issues. If I hadn't developed the architecture myself, without AI assistance, things could have gone badly.
3) I only pay for the $20 Claude plan. Anything useful Claude produces for me requires it to consume a lot of tokens due to back-and-forth questions and asking Claude to dig into source files.
The most significant issue I ran into with Claude is when it suggested solutions I don't have the background to review. I don't know much about optimization, so I ran into issues with both rendering and the ECS (entity component system) library. Claude gave me recommendations, but I didn't know how to evaluate the code due to lacking that experience.
Claude was good for things I know how to do but don't want to do. It's been helpful when I want to work on something without being motivated enough to put 100% (or even 70%) into it.
For things I don't know how to do (like game optimization), it's harmful.
The problem is that people conflate the current wave of transformer-based ANNs with AI (as a whole). AI certainly has the potential to disrupt the employment of humans. Transformers as they exist today, not so much.
AI's potential isn't defined by the potential of the current crop of transformers. However, many people seem to think otherwise and this will be incredibly damaging for AI as a whole once transformer tech investment all but dries out.