top | item 46765714

v_CodeSentinal | 1 month ago

Hard agree. As LLMs drive the cost of writing code toward zero, the volume of code we produce is going to explode. But the cost of complexity doesn't go down—it actually might go up because we're generating code faster than we can mentally model it.

SRE becomes the most critical layer because it's the only discipline focused on 'does this actually run reliably?' rather than 'did we ship the feature?'. We're moving from a world of 'crafting logic' to 'managing logic flows'.

ottah|1 month ago

I dunno, I don't think in practice SRE or DevOps is really different from what we used to call sysadmins (former sysadmin myself). I think the future of mediocre companies is SREs chasing after LLM fires, but a competitive business would have a much better strategy for building systems. Humans are still by far the most efficient and generalized reasoners, and putting an energy-intensive, brittle AI model in charge of most implementation is setting yourself up to fail.

stvvvv|1 month ago

Former sysadmin and I've been an SRE for >15 years now.

They are very different. If your SREs are spending much of their time chasing fires, they are doing it wrong.

mupuff1234|1 month ago

> But the cost of complexity doesn't go down

But how much of current day software complexity is inherent in the problem space vs just bad design and too many (human) chefs in the kitchen? I'm guessing most of it is the latter category.

We might get more software but with less complexity overall, assuming LLMs become good enough.

legorobot|1 month ago

I agree that there's a lot of complexity today due to the process in which we write code (people, lack of understanding the problem space, etc.) vs the problem itself.

Would we say that we as humans have captured the "best" way to reduce complexity and write great code? There are patterns and guidelines, but no hard and fast rules. Until we have a better understanding of that, LLMs may not arrive at those levels either. Most of that knowledge is gleaned by sticking with a system -- dealing with past choices and making changes and tweaks to the code, the complexity, and the solution over time. Maybe the right "memory" or compaction could help LLMs get better over time, but we're just scratching the surface there today.

LLMs output code that's only as good as their training data. They can reason about code they're prompted with and offer ideas, but they're inherently bound by the data and concepts they've trained on. And unfortunately, it's likely average code, rather than highly respected code, that floods the training data, at least for now.

Ideally I'd love to see better code written and complexity driven down by _whatever_ writes the code. But there will always be verification required when the writer is probabilistic.

oblio|1 month ago

That probably requires superhuman AI, though.

wavemode|1 month ago

By "SRE", are people actually talking about "QA"?

SREs usually don't know the first thing about whether particular logic within the product is working according to a particular set of business requirements. That's just not their role.

stvvvv|1 month ago

Good SREs at a senior level do. They are familiar with the product, the customers, and the business requirements.

Without that it's impossible to correctly prioritise your work.

zeroCalories|1 month ago

Most companies don't have QA anymore, just the automated tests in their CI/CD pipeline.

belter|1 month ago

>> As LLMs drive the cost of writing code toward zero

And they drive the cost of validating the correctness of such code towards infinity...

storystarling|1 month ago

I see it less as SRE and more about defensive backend architecture. When you are dealing with non-deterministic outputs, you can't just monitor for uptime, you have to architect for containment. I've been relying heavily on LangGraph and Celery to manage state, basically treating the LLM as a fuzzy component that needs a rigid wrapper. It feels like we are building state machines where the transitions are probabilistic, so the infrastructure (Redis, queues) has to be much more robust than the code generating the content.
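The "rigid wrapper around a fuzzy component" idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not LangGraph's or Celery's actual API: `flaky_llm`, `is_json`, and `validated_step` are made-up names, with canned responses standing in for a real model call. The point is just that only output passing a validator is allowed to drive a state transition, and repeated failure fails closed instead of propagating garbage downstream.

```python
import json

# Canned responses standing in for a real (non-deterministic) model call.
# In a real system this would be an API call; names here are hypothetical.
_responses = iter(["sorry, I can't do that", '{"status": "ok"}'])

def flaky_llm(prompt: str) -> str:
    return next(_responses)

def is_json(s: str) -> bool:
    # Validator: only syntactically valid JSON may transition the state machine.
    try:
        json.loads(s)
        return True
    except ValueError:
        return False

def validated_step(prompt: str, validate, max_retries: int = 3) -> str:
    # Containment: retry malformed output, and fail closed rather than
    # letting an unvalidated transition through.
    for _ in range(max_retries):
        out = flaky_llm(prompt)
        if validate(out):
            return out
    raise RuntimeError("no valid output after retries; halting transition")
```

In a production version, the retry/fail-closed loop would typically live in a task queue worker, so a halted transition becomes a dead-lettered task an operator can inspect rather than a silent corruption of state.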

franktankbank|1 month ago

This sounds like the most min-maxed drivel. What if I took every concept, dialed it to either zero or 11, and then picked a random conclusion!!!??