
Software Survival 3.0

94 points | jaybrueder | 1 month ago | steve-yegge.medium.com

84 comments

[+] pron|1 month ago|reply
Here's what I don't get about the "AI can build all software" scenario. It extrapolates AI capabilities up to a certain, very advanced point, and then, inexplicably, it stops.

If AI is capable enough to "build pretty much anything", why is it not capable enough to also use what it builds (instead of people using it) or, for that matter, to decide what to build?

If AI can, say, build air traffic control software as well as humans, why can't it also be the controller as well as humans? If it can build medical diagnosis software and healthcare management software, why can't it offer the diagnosis and prescribe treatment? Is the argument that there's something special about writing software that AI can do as well as people, but not other things? Why is that?

I don't know how soon AI will be able to "build pretty much anything", but when it does, Yegge's point that "all software sectors are threatened" seems unimaginative. Why not all sectors, full stop?

[+] pron|1 month ago|reply
Another way of putting it is that predictions based on an assumption of "exponential growth" of anything (which is presumed to slow down at some point) are almost always wrong. It's saying, sure, we may be at 1 right now, but in a year we'll be at 100, and that's the future we should prepare for! But if you believe that, then if you're off by only a little about the stopping point, we could just as likely have to prepare for a future where we're at 10 or 1000 for a while.

Once you talk about exponential growth of some capability, by definition a small error in the timing of when that growth plateaus translates to an order-of-magnitude error in the plateau capability. It's likely to be much smaller or much greater than your prediction. In other words, you can believe in exponential growth and you can believe you have a good prediction of the future, but you can't believe in both simultaneously.

Yegge believes that "the future" will arrive by the end of the year. If he's off by only a few months in one direction, that future may not arrive for a while; if he's off in the other direction, "the future" will be quite different.

[+] seanjaii|1 month ago|reply
I have to agree; I cannot quite pinpoint the source of the fixation on LLMs doing away with software engineers/developers. I know teachers who rely on ChatGPT and Claude for so many parts of their jobs: marking work, providing feedback on a kid's homework, lesson planning, and even answering subject matter questions they don't know the answers to. LLMs work so much better for so many professions, right now, than they do for software development.

Clearly, even assuming they are wrong far less often and don't hallucinate in the very near future, there is a human element that LLMs certainly can't replace (unless semi-autonomous robots become a thing soon). Why isn't the same thought process applied to writing software? If you've worked in this industry for any amount of time, it's obvious just how paramount that human element is to producing useful programs.

If current and/or near future AI will replace us, then it's surely replacing almost everyone else, no?

[+] warkdarrior|1 month ago|reply
There are two ways to answer your questions. You are asking how we choose between (1) generate+run (AI generates software for some task, then we run that software to do the task) and (2) agentic execution (AI simply completes the task itself).

First way to look at this is through the lens of specialization. A software engineer could design and create Emacs, and then a writer could use Emacs to write a top-notch novel. That does not mean that the software engineer could write top-notch novels. Similarly, maybe AI can generate software for any task, but maybe it cannot do that task just as well as the task-specialized software.

Second way to look at this is based on costs. Even if AI is as good as specialized software for a given task, the specialized software will likely be more efficient, since it uses direct computation (you know, moving and transforming bits around in the CPU and the memory) instead of GPU- or TPU-powered matrix multiplications that emulate the direct computation.

[+] xorcist|1 month ago|reply
Yes, this is closely related to the question of why AI services insist on writing software in inefficient high-level languages intended for humans to read, such as Python, which then needs to be interpreted or compiled, pulls in large standard libraries, etc. Why not output the actual software directly, intended for a computer to run directly?
[+] duderific|1 month ago|reply
AI doesn't "want to" control air traffic. It doesn't have any desire or ambition. That's what the humans are for.

It is merely a tool like a hammer. The hammer doesn't build the house, it is the human who wields the hammer that builds the house.

[+] analyte123|1 month ago|reply
I’ve heard this called “technical deflation” and it works similarly to how economic deflation can play out, causing you to forego actions in the present because you think they’ll be easier in the future (or in this case, possibly not needed at all). Time will tell if this results in a “software deflationary spiral” or not.
[+] jnpnj|1 month ago|reply
That was my first question. Somehow people think that a tool able to generate an entire fullstack app from a few sentences is not capable of inferring needs from context, or from the customer directly.
[+] Kerrick|1 month ago|reply
> Friction_cost is the energy lost to errors, retries, and misunderstandings when actually using the tool. [...] if the tool is very low friction, agents will revel in it like panthers in catnip, as I’ll discuss in the Desire Paths section.

This is why I think Ruby is such a great language for LLMs. Yeah, it's token-efficient, but that's not my point [0]. The DWIM/TIMTOWTDI [1] culture of Ruby libraries is incredible for LLMs. And LLMs help to compound exactly that.

For example, I recently published a library, RatatuiRuby [2], that feeds event objects to your application. It includes predicates like `event.a?` for the "a" key, and `event.enter?` for the Enter key. When I was building an app with the library, I saw the LLM try `event.tilde?`, which didn't exist. So... I added it! And dozens more [3]. It's great for humans and LLMs, because the friction of using it just disappears.
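A minimal sketch of how predicate methods like `event.a?` or `event.tilde?` could be generated from a table of keys; this is illustrative only, not RatatuiRuby's actual implementation:

```ruby
# Hypothetical sketch: generate one predicate method per key.
# Class and constant names here are made up for illustration.
class KeyEvent
  # Map from predicate name to the character it matches.
  KEYS = { "a" => "a", "enter" => "\r", "tilde" => "~" }.freeze

  def initialize(char)
    @char = char
  end

  # Define #a?, #enter?, #tilde?, etc., each closing over its char.
  KEYS.each do |name, char|
    define_method("#{name}?") { @char == char }
  end
end

KeyEvent.new("~").tilde? # => true
KeyEvent.new("~").a?     # => false
```

With this shape, making a hallucinated predicate real is a one-entry change to the hash.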

EDIT: I see that this was his later point exactly! FTA:

> What I did was make their hallucinations real, over and over, by implementing whatever I saw the agents trying to do [...]

[0]: Incidentally, Matz's static typing design, RBS, keeps it even more token-efficient as it adds type annotations. The types are in different files than the source code, which means they don't have to be loaded into context. Instead, only static analysis errors get added to context, which saves a lot of tokens compared to inline static types.

[1]: Do What I Mean / There Is More Than One Way To Do It

[2]: https://www.ratatui-ruby.dev

[3]: https://git.sr.ht/~kerrick/ratatui_ruby/commit/1eebe98063080...
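To illustrate the point in footnote [0]: RBS keeps type signatures in separate `sig/*.rbs` files rather than inline in the source. The class and method names below are hypothetical, echoing the event predicates mentioned above, not RatatuiRuby's actual API:

```rbs
# sig/event.rbs -- signatures live outside the .rb source file,
# so they never need to be loaded into the LLM's context; only
# static analysis errors do.
class Event
  def a?: () -> bool
  def enter?: () -> bool
  def tilde?: () -> bool
end
```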

[+] nickorlow|1 month ago|reply
Stuff like this makes me feel like I'm living in a different reality than the author
[+] mawadev|1 month ago|reply
I wonder if interacting with AI all day, whether you work with it or just talk to it, has a negative impact on your perception of reality...
[+] ramesh31|1 month ago|reply
>"Stuff like this makes me feel like I'm living in a different reality than the author"

A quick look at gastown makes me think we all are.

[+] mrandish|1 month ago|reply
Some interesting long-term, directional ideas here about the future of software dev, but the implied near-termness of SaaS being disintermediated ignores how management in large orgs evaluates build-vs-buy SaaS decisions. 'Build' getting 10x cheaper/easier is revolutionary to developers, and quite possibly irrelevant, or only 'nice-to-have', to senior management.

Even if 10x cheaper, internally built SaaS tools don't come with service level agreements, a vendor to blame/cancel if things go wrong, or a built-in defense of "But we picked the Gartner top quadrant tool".

[+] condiment|1 month ago|reply
I've made many business cases for internally-built SaaS tools, and they always rest on the idea that our probability of success is higher if we staff a team and build the _exact thing_ we need versus purchasing from a vendor and attempting an integration into our business.

It's far more challenging to win the 'build' argument on cost savings, because even the least-savvy CIO/CTO understands that the price of the vendor software is a proof point grounded in how difficult it is for other firms to build these capabilities themselves. If there's merit to these claims, the first evidence we'll see is certain domains of enterprise software (like everything Atlassian does) getting more and more crowded, and less and less expensive, as the difficulty of competing with a tier-1 software provider drops and small shops spring up to challenge the incumbents.

[+] alexjray|1 month ago|reply
Yeah, I don't buy the build-internally argument. It's not necessarily the building of an internal tool that is the problem; it's the maintenance and the service level guarantees you get from a vendor that are arguably more valuable, because they let you focus on the thing that matters for your company. Product-market fit is more important than ever, and there are so many additional options now; focus on the "right" thing is more valuable now than ever before.
[+] tobyjsullivan|1 month ago|reply
Isn’t this a form of what he labels the “human coefficient”?

Some businesses prefer tools built by other businesses for some tasks. The author advocates pretty plainly to identify and target those opportunities if that’s your strength.

I think his point is to recognize that’s moving toward a niche rather than the norm (on the spectrum of all software to be built).

[+] sidereal1|1 month ago|reply
> implied near-termness of SaaS being disintermediated

Also, is this even true? The author's only evidence was to link to a book about vibe coding. I'd be interested to hear anecdotes of companies who are even attempting this.

Edit: wow, and he's a co-author of that book. This guy really just said "source: me"

[+] meisel|1 month ago|reply
> Gas Town has illuminated and kicked off the next wave for everyone

That sounds pretty hyperbolic. Everyone? Next “wave”?

[+] walt_grata|1 month ago|reply
Sounds like a descent into madness to me, and I'm somewhat pro-AI.
[+] porkloin|1 month ago|reply
The guy is high on his own supply. This entire thing reads like a fever dream.
[+] dgxyz|1 month ago|reply
Sounds like a cult to me.
[+] joshribakoff|1 month ago|reply
> First let’s talk about my credentials and qualifications for this post. My next-door neighbor Marv has a fat squirrel that runs up to his sliding-glass door every morning, waiting to be fed.

Some of the writing here feels a little incoherent. The article implies progress will be exponential as a matter of fact, but we will be lucky to maintain linear progress or even avoid regressing.

[+] jonathaneunice|1 month ago|reply
> If you believe the AI researchers–who have been spot-on accurate for literally four decades

LOLWUT?

Counter-factual much?

[+] xyzsparetimexyz|1 month ago|reply
I feel like this isn't adequately accounting for the fact that existing software is becoming easier to refactor as well. If someone wants a 3D modelling program but is unsatisfied with the performance of some operation in Blender, are they going to vibe code a new modelling program, or are they going to just vibe refactor the operation in Blender?
[+] troupo|1 month ago|reply
Steve Yegge used to be a decent engineer with a clear head and an ability to precisely describe the problems he was seeing. His "Google Platforms Rant" [1] is still required reading, IMO.

Now his bloviated blog posts only speak of a man extremely high on his own supply. Long, pointless, meandering, self-aggrandizing. It really is easier to dump this into an LLM and ask it to summarize than to spend time trying to understand what he means.

And he means very little.

The gist: I am great and amazing and predicted the inevitable orchestration of agents. I also call the hundreds of thousands of lines of extremely low quality AI slop "I spent the last year programming". Also here are some impressive sounding terms that I pretend I didn't pull out of my ass to sound like I am a great philosopher with a lot of untapped knowledge. Read my book. Participate in my meme coin pump and dump schemes. The future is futuring now and in the future.

[1] https://gist.github.com/chitchcock/1281611

[+] the_af|1 month ago|reply
This is my take as well.

Steve Yegge has always read a bit "eccentric" to me, to say the least. But I still quote some of his older blog posts because he often had a point.

Now... his blog posts seem to show, to quote another commenter here, "a man's slow descent into madness".

[+] joshribakoff|1 month ago|reply
Also in a recent interview he implied that anyone who disagrees is an “effing idiot”
[+] raincole|1 month ago|reply
I didn't read the article. And I do think that if you read words from someone who did a crypto rugpull, you don't value your time and intelligence.

I know this doesn't 'contribute to the discussion.' But seriously this guy's latest contribution to the world was a meme coin backed project...

[+] spondyl|1 month ago|reply
While I have zero interest in defending or participating in the financialization of all things via crypto, there is a bit of nuance missing here.

BAGS is a crypto platform where relative strangers can make meme coins and nominate a recipient to receive some or all of the funds.

In both Steve Yegge and Geoffrey Huntley's cases, tokens were made for them but apparently not with their knowledge or input.

It would be the equivalent of a random stranger starting a Patreon or GoFundMe in your name, with the proceeds going to you.

Of course, whether you accept that money is a different story, but I'm sure even the best of us might have a hard time turning down $300,000 from people who wittingly participate in these sorts of investment platforms.

I don't immediately see how those left holding the bag could have ended up in that position unknowingly.

My point is that my parents would likely have a hard enough time figuring out how to buy crypto, let alone finding themselves rugpulled by a meme token. So while my immediate read is that a pump and dump is bad, how bad it is relative to who the participants are is something I'm curious whether anyone has an answer for.

[+] munificent|1 month ago|reply
I'm not a business dude, but even I can see one problem in his argument: he tacitly equates "agents use your software" with "your software survives". But having your software invoked by an agent doesn't magically enrich you. Up to now, human users have made software successful in one of two ways:

1. Paying money for the software or access to it.

2. Allowing a fraction of their attention to be siphoned off and sold to advertisers while they use the software.

I don't think advertisers want to pay much for the "mindshare" of mindless bots. And I'm not sure that agents have wallets they can use to pony up cash with. Hopefully someone will figure out a business model here, but Yegge's article certainly doesn't posit one.

[+] hjoutfbkfd|1 month ago|reply
Microsoft CEO said in a podcast they are preparing for the moment the biggest paying customers for Windows, Office and their cloud products will be agents, not humans.
[+] unforbiddenYet|1 month ago|reply
Reads like this, and that social network for AI agents, make me depressed. Authors have to cut the LLM dosage.
[+] jonathanstrange|1 month ago|reply
My take, and perhaps also my hope, is that the software with the best survival chances is that developed by reasonable, down-to-earth people who understand human needs and desires well, have some overall vision, and create tools that just work and don't waste the user's time. Whether it is created with the help of AI or not might not matter much in the end.

On a side note, any kind of formula that contains what appears to be a variable on the left hand side that appears nowhere on the right hand side deranges my sense of beauty.

[+] henning|1 month ago|reply
This is one of those instances where bullshit takes more effort to debunk than it does to create.

We already went over how Stack Overflow was in decline before LLMs.

SaaS is not about build vs. buy, it's about having someone else babysit it for you. Before LLMs, if you wanted shitty software for cheap, you could try hiring a cheap freelancer on Fiverr or something. Paying for LLM tokens instead of giving it to someone in a developing country doesn't really change anything. PagerDuty's value isn't that it has an API that will call someone if there's an error, you could write a proof of concept of that by hand in any web framework in a day. The point is that PagerDuty is up even if your service isn't. You're paying for maintenance and whatever SLA you negotiate.

Steve Yegge's detachment from reality is sad to watch.

[+] 2001zhaozhao|1 month ago|reply
I'm not convinced of this post's hopeful argument near the end. If you are doing SaaS as a way of making money and don't have a deep moat aside from the code itself, it will probably be dead in a few years. The AI agents of the future will choose free alternatives as a default over your paid software, and by the way said free alternatives are probably made using reliable AI agents and are high-quality and feature complete. AI agents also don't need your paid support or add-on services from your SaaS companies, and if everyone uses agents, nobody will be left to give you money.

As a technical person today, I wouldn't pay a $10/month SaaS subscription if I can log in to my VPS and tell claude to install [alternate free software] self-hosted on it. The thing is, everyone is going to have access to this in a few years (if nothing else, it will be through the next generation of ChatGPT/Claude artifacts), and the free options are going to get much better at fitting any needs common enough to have a significant market size.

You probably need another moat like network effects or unique content to actually survive.

[+] coldtea|1 month ago|reply
>I debated with Claude endlessly about this selection model, and Claude made me discard a bunch of interesting but less defensible claims. But in the end, I was able to convince Claude it’s a good model

Convinced an LLM to agree with you? What a feat!

Yegge's latest posts are not exactly half AI slop, half marketing spam (for Beads and co), but close enough.

[+] munificent|1 month ago|reply
A thought I had after reading that sentence: So many people that are very pro-AI also increasingly seem to speak with near infinite confidence. I wonder how much of that comes from them spending too much time chatting with AI bots and effectively surrounding themselves with digital yes-men?
[+] Traubenfuchs|1 month ago|reply
I feel like we are in universal paperclips, a game about turning all matter in the universe into paperclips.

We are entering the absurd phase where we are beginning to turn all of earth into paperclips.

All software is gonna be agents orchestrating agents?

Oh, how I wish I had learned a useful skill.

[+] iainctduncan|1 month ago|reply
"If you believe the AI researchers–who have been spot-on accurate for literally"

I do not understand what has happened to him here... there was an entire "AI winter" from the 90s into the 2000s because of how wrong researchers were. Has he gone completely delusional? My PhD supervisor has been in AI for 30 years and talks about how it was impossible to get any grant money then because of how catastrophically wrong earlier predictions had been.

Like, honest question. I know he's super smart, but this reads like the kind of ramblings you get from religious zealots or Scientologists: just complete revisions of known, documented history, and bizarre beliefs in the inevitability of their vision.

It really makes me wonder what such heavy LLM coding use does to one's brain. Is this going to be like the 90's crack wave?

[+] mrandish|1 month ago|reply
Yeah, I full-stopped on that sentence because it was just so bizarre. I can understand making a counter-to-reality claim and then supporting the claim with context and interpretation to build toward a deeper point. But he just asserts something obviously false and moves on with no comment.

Even if he believes that statement is true, it still means he has no ability to model where his reader is coming from (or simply doesn't care).

[+] xorcist|1 month ago|reply
Maybe it's not for you as a human to understand.

Why presuppose that a human wrote this, as opposed to a language model, given the subject?

[+] AIorNot|1 month ago|reply
So much Noise...

Too many people are running an LLM or Opus in a code cycle or on a new set of Markdown specs (sorry, Agents), getting some cool results, and then writing thought-pieces on what is happening to tech. It's just silly and far too driven by the immediate news cycle (moltbot, gastown, etc., really?)

Reminds me of how the current news cycle in politics has devolved into hour-by-hour introspection with no long view or clear-headed analysis - we lose attention before we even digest the last story - oh, the nurse had a gun; no, he spit at ICE; masks on ICE; look at this new angle on the shooting, etc. Just endless tweet-level thoughts turned into YouTube videos and 'in-depth' but shallow thought-pieces.

It's impossible to separate the hype from the baseline chatter, let alone see what the real innovation cycle is and where it is really heading.

Sadly this has more momentum than the actual tech trends, and it serves to guide them chaotically in terms of business decisions - then, when confused C-suite leaders who follow the hype make stupid decisions, we blame them... all while pushing their own stock picks...

Don't get me started on the secondary LinkedIn posts that come out of these cycles - I hate the low barrier to entry in connected media sometimes... it feels like we need to go back to newspapers and print magazines. </end rant>

[+] jryan49|1 month ago|reply
I wish people would stop up voting AI Nostradamus articles...