peterlk | 22 days ago

I have been having this conversation more and more with friends. As a research topic, modern AI is a miracle, and I absolutely love learning about it. As an economic endeavor, it just feels insane. How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

Kon5ole|22 days ago

I have to admit I'm flip-flopping on the topic, back and forth from skeptic to scared enthusiast.

I just made an LLM recreate a decent approximation of the file system browser from the movie Hackers (similar to the SGI one from Jurassic Park) in about 10 minutes. At work I've had it do useful features and bug fixes daily for a solid week.

Something happened around New Year's 2026. The clients, the skills, the MCPs, the tools and models reached some new level of usefulness. Or maybe I've just been lucky for a week.

If it can do things like what I saw last week reliably, then every tool, widget, utility and library currently making money for a single dev or small team of devs is about to get eaten. Maybe even applications like Jira, Slack, or even Salesforce or SAP can be made in-house by even small companies. "Make me a basic CRM".

Just a few months ago I found it mostly frustrating to use LLMs, and I thought the whole thing was little more than a slight improvement over googling info for myself. But the past week has been mind-blowing.

Is it the beginning of the Star Trek ship computer? If so, it is as big as the smartphone, the internet, or even the invention of the microchip. And then the investments make sense, in a way.

The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

josephg|22 days ago

Yeah I’m having a similar experience. I’ve been wanting a standard test suite for JMAP email servers, so we can make sure all JMAP server implementations follow the (somewhat complex) spec in a consistent manner. I spent a single day prompting Claude Code on Friday, and walked away with about 9,000 lines of code, containing 300 unit tests for JMAP servers, plus a web interface showing the results. It would have taken me at least a week or two to make something similar by hand.

There are some quality issues - I think some of the tests are slightly wrong. We went back and forth on some ambiguities Claude found in the spec, and how we should actually interpret what the JMAP spec is asking. But after just a day, it’s nearly there. And it’s already very useful for seeing where existing implementations diverge in their output, even if the tests sometimes don't correctly identify which implementation is wrong. Some of the test failures are 100% correct - it found real bugs in production implementations.
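To make the idea concrete, here's a minimal sketch (illustrative only, not josephg's actual suite) of one kind of conformance check such a harness might run: validating the shape of the Session object that RFC 8620 requires every JMAP server to expose. The required field names and the core capability URN come from the spec; the function name and structure are made up for this example.

```python
# Hypothetical JMAP conformance check: validate a server's Session object
# against a few requirements from RFC 8620. Not from any real test suite.

# A subset of the properties RFC 8620 requires in the Session object.
REQUIRED_SESSION_FIELDS = {"capabilities", "accounts", "apiUrl"}

CORE_CAPABILITY = "urn:ietf:params:jmap:core"


def session_problems(session: dict) -> list[str]:
    """Return a list of spec-conformance problems found in a Session object.

    An empty list means the object passed these (partial) checks.
    """
    problems = [
        f"missing field: {field}"
        for field in sorted(REQUIRED_SESSION_FIELDS - session.keys())
    ]
    capabilities = session.get("capabilities", {})
    if isinstance(capabilities, dict) and CORE_CAPABILITY not in capabilities:
        # Every JMAP server must advertise the core capability.
        problems.append(f"capabilities must include {CORE_CAPABILITY}")
    return problems
```

In a real harness, hundreds of small checks like this would run against live responses from each server under test, and the divergences between implementations would be exactly the output josephg describes as useful.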

Using an AI to do weeks of work in a single day is the biggest change in what software development looks like that I’ve seen in my 30+ year career. I don’t know why I would hire a junior developer to write code any more. (But I would hire someone who was smart enough to wrangle the AI). I just don’t know how long “ai prompter” will remain a valuable skill. The AIs are getting much better at operating independently. It won’t be long before us humans aren’t needed to babysit them.

gtech1|22 days ago

My team of 6 people has been building software to compete with an already established product from a major software corporation. I'm not saying we'll succeed; I'm not saying we'll be better, or that we'll cover every corner case they do and have learned over the past 30 years. But 6 senior devs are getting stuff done at an insane pace. And if we can _attempt_ to do this, which would have been unthinkable 2 years ago, I can only wonder what will happen next.

bojan|22 days ago

I agree with you, and share the experience. Something changed recently for me as well: I found the mode of working that actually gets value out of these things. I find it refreshing that I don't have to write boilerplate myself or think about the exact syntax of the framework I use. I get to think about the part that adds value.

I also have the same experience: we rejected an SAP offering with the idea of building the same thing in-house.

But... aside from the obvious fact that building a thing is easier than using and maintaining the thing, the question arose of whether we even need what SAP offered, or whether we can get agents to do it.

In your example, do you actually need that simple CRM, or could you get agents to do the thing without any additional software?

I don't know what this means for our jobs. I do know that, if making software becomes so trivial for everyone, companies will have to find another way to differentiate and compete. And hopefully that's where knowledge workers come in again.

simoncion|22 days ago

It seems like every quarter or two, I hear a story just like yours (including the <<Wow! We've quietly passed an inflection point!>> part).

What does that tell me?

It tells me that I shouldn't waste my time with a tool that's going to fundamentally change in three to six months; that I should wait until I stop hearing stories like this for a good, long while. "But you're going to be left behind!", yeah, maybe. But. I've been primarily a maintenance programmer for a very long time. The "bleeding edge" is where I am very, very rarely... and it seems to work out fine.

New tools that are useful are nice. Switching to a radically different tool every quarter or two? Not nice. I've got shit to do.

wasmainiac|22 days ago

I have not had the success you mention with programming… I still feel like I have to hold its hand all the way.

Regardless..

> The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

This mentality is why investors are scrambling right now. It’s a scare tactic.

raegis|22 days ago

> The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

I'm not a professional programmer, but I am the I.T. department for my wife's small office. I used ChatGPT recently (as a search engine) to help create a web interface for some files on our intranet. I'm sure no one in the office has the time or skills to vibe code this in a reasonable amount of time. So I'm confident that my "job" is secure :)

chasd00|22 days ago

I have to admit the last 6-8 weeks have been different. Maybe it’s just me realizing the value in some of these tools…

lII1lIlI11ll|22 days ago

>As a research topic, modern AI is a miracle, and I absolutely love learning about it. As an economic endeavor, it just feels insane. How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

This is the wrong way to look at it. The right way is to consider that AI investments generate (taxable) economic activity that your government can use to build "hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories".

efficax|22 days ago

They pay very little tax, and most of the cash is going into datacenters and electricity, which provide very little long-term employment. LLMs can do some amazing things, but at the same time they're setting mountains of cash on fire to nudify random women on Twitter and generate more spam than we could've ever imagined possible.

anon7000|22 days ago

Not so much when there’s a race to the bottom over which municipalities and states can offer the most tax breaks.

qaq|22 days ago

Not many. Money is not a perfect abstraction. The raw materials used to produce $100B worth of Nvidia chips will not yield you many hospitals. An AI researcher with a $100M signing bonus from Meta ain't gonna lay you much brick.

thwarted|22 days ago

It's not about the consumption of raw materials or repurposing of the raw materials used for chips. peterlk said:

> How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

It's about using the money to build things that we actually need and that have more long-term utility. No one expects someone with a $100M signing bonus at Meta to lay bricks, but that $100M could buy a lot of bricks and pay a lot of bricklayers to build hospitals.

johnvanommen|22 days ago

> How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build

“We?”

This isn’t “our” money.

If you buy shares, you get a voice.

mike_hearn|22 days ago

FWIW the models aren't thrown away. The weights are used to initialize the next foundation model training run. It helps to reuse weights rather than start from random initialization, even if the new model has a somewhat different architecture.

As for the rest, the constraint on hospital capacity (at least in some countries, not sure about the USA) isn't money for capex; it's doctors' unions that restrict training slots.

YZF|22 days ago

It's not a zero-sum game. We could build hospitals and data centers. The reason we are not building hospitals or parks or machine shops has nothing to do with AI. We weren't building them 2 years ago either.

eviks|22 days ago

Indeed not zero-sum, but it's a negative-sum game to waste money instead of even doing nothing, let alone building something useful.

polski-g|22 days ago

Google has zero expected build-outs of "forests". They've never mentioned this in their 10-K, ever. There is no misallocation of Google's money from "forests" to datacenters.

uejfiweun|22 days ago

There is a certain logic to it though. If the scaling approaches DO get us to AGI, that's basically going to change everything, forever. And if you assume this is the case, then "our side" has to get there before our geopolitical adversaries do. Because in the long run the expected "hit" from a hostile nation developing AGI and using it to bully "our side" probably really dwarfs the "hit" we take from not developing the infrastructure you mentioned.

A_D_E_P_T|22 days ago

Any serious LLM user will tell you that there's no way to get from LLM to AGI.

These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them.

Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want, and provide it with clear examples, it won't be able to do it.

This is also why there have been zero-to-very-few new scientific discoveries made by LLMs.

samrus|22 days ago

Scaling alone won't get us to AGI. We are in the latter half of this AI summer, where the real research has slowed down or even stopped, and the MBAs and moguls are doing stupid things.

For us to take the next step towards AGI, we need an AI winter to hit and the next AI summer to start; the first half of that will produce the advancement we actually need.

mylifeandtimes|22 days ago

Here's hoping you're Chinese, then.