mark242 | 22 days ago
Then, as part of the session, you would artificially introduce a bug into the system, then run into the bug in your browser. You'd see the failure happen in the browser, and looking at the CloudWatch logs you'd see the error get logged.
Two minutes later, the SRE agents had the bug fixed and ready to be merged.
"understand how these systems actually function" isn't incompatible with "I didn't write most of this code". Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write". What we have seen over the past few months is a gigantic leap in output quality, such that re-prompting happens less and less. Additionally, "after you've written this, document the logic within this markdown file" is extremely useful for your own reference and for future LLM sessions.
AWS is making a huge, huge bet on this being the future of software engineering, and even though they have their weird AWS-ish lock-in for some of the LLM-adjacent practices, it is an extremely compelling vision, and as these nondeterministic tools get more deterministic supporting functions to help their work, the quality is going to approach and probably exceed human coding quality.
dasil003|22 days ago
There is some combination of curiosity of inner workings and precision of thought that has always been essential in becoming a successful engineer. In my very first CS 101 class I remember the professor alluding to two hurdles (pointers and recursion) which a significant portion of the class would not be able to surpass and they would change majors. Throughout the subsequent decades I saw this pattern again and again with junior engineers, bootcamp grads, etc. There are some people no matter how hard they work, they can't grok abstraction and unlock a general understanding of computing possibility.
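(For anyone who never hit those two hurdles, here's a minimal illustrative sketch — Python rather than C, with object references standing in for raw pointers, so it's a loose analogy rather than the professor's actual exercise:)

```python
# A minimal linked list: each node holds a value and a reference
# (Python's stand-in for a pointer) to the next node.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def total(node):
    # Recursion: the sum of a list is its head plus the sum of the rest.
    if node is None:                       # base case: empty list
        return 0
    return node.value + total(node.next)   # recursive step follows the "pointer"

# Build 1 -> 2 -> 3 and sum it.
lst = Node(1, Node(2, Node(3)))
print(total(lst))  # 6
```

The hurdle isn't the syntax; it's being comfortable that `total` is defined in terms of itself and that `next` is a reference to another object, not a copy of it.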
With AI you don't need to know syntax anymore, but to write the right prompts to maintain a system and (crucially) the integrity of its data over time, you still need this understanding. I'm not sure how the AI-native generation of software engineers will develop this without writing code hands-on, but I am confident they will figure it out because I believe it to be an innate, often pedantic, thirst for understanding that some people have and some don't. This is the essential quality to succeed in software both in the past and in the future. Although vibe coding lowers the barrier to entry dramatically, there is a brick wall looming just beyond the toy app/prototype phase for anyone without a technical mindset.
athrowaway3z|22 days ago
But something I'd bet money on is that devs are 10x more productive at using these tools.
stuartaxelowen|21 days ago
rendaw|22 days ago
So the professor just gaslit years of students into thinking they were too dumb to get programming, and also left them with the developmental disability of "if you can't figure something out in a few days, you'll never get it".
plomme|21 days ago
alexpotato|22 days ago
- I 100% believe this is happening and is probably going to be the case in the next 6 months. I've seen Claude and Grok debug issues when they only had half of the relevant evidence (e.g. Given A and B, it's most likely X). It can even debug complex issues between systems using logs, metrics etc. In other words, everything a human would do (and sometimes better).
- The situation described is actually not that different from being a SRE manager. e.g. as you get more senior, you aren't doing the investigations yourself. It's usually your direct reports that are actually looking at the logs etc. You may occasionally get involved for more complex issues or big outages but the direct reports are doing a lot of the heavy lifting.
- All of the above being said, I can imagine errors so weird/complex etc that the LLMs either can't figure it out, don't have the MCP or skill to resolve it or there is some giant technology issue that breaks a lot of stuff. Facebook engineers using angle grinders to get into the data center due to DNS issues comes to mind for the last one.
Which probably means we are all going to start to be more like airline pilots:
- highly trained in debugging AND managing fleets of LLMs
- managing autonomous systems
- around "just in case" the LLMs fall over
P.S. I've been very well paid over the years and being a SRE is how I feed my family. I do worry, like many, about how all of this is going to affect that. Sobering stuff.
misir|22 days ago
Airline pilots are still employed because of regulations. The industry is heavily regulated, and the regulations move very slowly because of its international cooperative nature. They dictate how many crew members should be on board for each plane type, among various other variables. All the airlines have to abide by the rules of the airspace they're flying over to keep flying.
The airlines, on the other hand, along with the technology producers (Airbus, for example), are pushing to reduce the number of heads in the cockpit. While their recent attempt to get rid of co-pilots in EASA land has failed [1], you can see the scale of the effort and investment. The industry will continue to force through cost optimization as long as there's no barrier to prevent it. The cases where automation has failed will just be a cost of doing business, since the lives of the deceased are no concern to the company's balance sheet.
Given the lack of regulation in software, I suspect the industry will continue the cost optimization and eliminate humans in the loop, except in the regulated domains.
[1] - https://www.easa.europa.eu/en/research-projects/emco-sipo-ex... ; while this was not a direct push to get rid of all pilots, it's a stepping stone in that direction.
pragmatic|22 days ago
What does the code/system look like?
It is going to be more like evolution (fit to environment) than engineering (fit to purpose).
It will be fascinating to watch nonetheless.
ThrowawayR2|22 days ago
People are still wrongly attributing a mind to something that is essentially mindless.
finebalance|22 days ago
Oh, I absolutely love this lens.
skybrian|22 days ago
It's up to you what you want to prioritize.
seba_dos1|22 days ago
That's the vast majority of my job and I've yet to find a way to have LLMs not be almost but not entirely useless at helping me with it.
(also, it's filled with that even when you are a single engineer)
fragmede|22 days ago
kaydub|22 days ago
robhlt|22 days ago
scoofy|22 days ago
All I can say is "holy shit, I'm a believer." I've probably got close to a year's worth of coding done in a month and a half.
Busy work that would have taken me a day to look up, figure out, and write -- boring shit like matplotlib illustrations -- they are trivial now.
Ideas I'm not sure how to implement ("what are some different ways to do this weird thing?") that I would have spent a week on trying to figure out a reasonable approach for — no, it basically has two or three decent ideas right away, even if they're not perfect. There was one vectorization approach I would have never thought of that I'm now using.
Is the LLM wrong? Yes, all the damn time! Do I need to, you know, actually do a code review when I'm implementing its ideas? Very much yes! Do I get into a back and forth battle with the LLM when it starts spitting out nonsense, shut the chat down, and start over with a newly primed window? Yes, about once every couple of days.
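(The kind of vectorization win this describes might look roughly like the following — an illustrative NumPy sketch, not the commenter's actual code: replacing a Python-level loop with one broadcasted array expression.)

```python
import numpy as np

# Loop version: distance of each 2-D point to a reference point.
def dists_loop(points, ref):
    out = []
    for p in points:
        out.append(((p[0] - ref[0]) ** 2 + (p[1] - ref[1]) ** 2) ** 0.5)
    return out

# Vectorized version: broadcasting subtracts ref from every row at once,
# then one sqrt over the whole array — no Python-level loop.
def dists_vec(points, ref):
    pts = np.asarray(points, dtype=float)
    return np.sqrt(((pts - np.asarray(ref, dtype=float)) ** 2).sum(axis=1))

points = [(0, 0), (3, 4), (6, 8)]
print(dists_vec(points, (0, 0)))  # distances 0, 5, 10
```

Same result as the loop, but the work happens in compiled NumPy code rather than the interpreter.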
It's still absolutely incredible. I've been a skeptic for a very long time. I studied philosophy, and the conceptions people have of language and Truth get completely garbled by an LLM that isn't really a mind that can think in the way we do. That said, holy shit it can do an absolute ton of busy work.
james_marks|22 days ago
This is generous, to say the least.
fullstackchris|22 days ago
you must have never worked on any software project ever
MrDarcy|22 days ago
pron|22 days ago
True, but there's usually at least one person who knows that particular part of the system that you need to touch, and if there isn't, you'll spend a lot of time fixing that bug and become that person.
The bet you're describing is that the AI will be the expert, and if it can be that, why couldn't it also be the expert at understanding the users' needs so that no one is needed anywhere in the loop?
What I don't understand about a vision where AI is able to replace humans at some (complicated) part of the entire industrial stack is why does it stop at a particular point? What makes us think that it can replace programmers and architects - jobs that require a rather sophisticated combination of inductive and deductive reasoning - but not the PMs, managers, and even the users?
Steve Yegge recently wrote about an exponential growth in AI capabilities. But every exponential growth has to plateau at some point, and the problem with exponential growth is that if your prediction about when that plateau happens is off by a little, the value at that point could be different from your prediction by a lot (in either direction). That means that it's very hard to predict where we'll "end up" (i.e. where the plateau will be). The prediction that AI will be able to automate nearly all of the technical aspects of programming yet little beyond them seems as unlikely to me as any arbitrary point. It's at least as likely that we'll end up well below or well above that point.
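(A toy illustration of that sensitivity, under the made-up assumption that capability doubles every 6 months: mis-estimating the plateau date by even one year changes the level you plateau at by 4x.)

```python
# Illustrative only: how sensitive an exponential forecast is to the
# assumed plateau date. The 6-month doubling time is an assumption
# for the arithmetic, not a claim about actual AI progress.
def capability(months, doubling=6):
    return 2 ** (months / doubling)

# Plateau at month 36 vs. month 24: one year of error in the plateau
# date means a 2**(12/6) = 4x difference in the final level.
print(capability(36) / capability(24))  # 4.0
```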
anonnon|22 days ago
I'm not sure that the current growth rate is exponential, but the problem is that it doesn't need to be exponential. It should have been obvious the moment ChatGPT and Stable Diffusion-based systems were released that continued, linear progress of these models was going to cause massive disruption eventually (in a matter of years).
panny|22 days ago
It doesn't matter if you don't use 90% of a framework, as the submission bemoans. When everyone uses an identical API, but in different situations, you find lots of different problems that way. Your framework and its users become a sort of Borg. When one of the framework users discovers a problem, it's fixed and propagated out before it can even be a problem for the rest of the Borg.
That's not true of your Lisp-curse, one-off, custom bespoke framework. You will repeat all the problems that all the other custom bespoke frameworks encountered. When they fixed their problem, they didn't fix it for you. You will find those problems over and over again. This is why free software dominates over proprietary software. The biggest problem in software is not writing the software, it's maintaining it. Free software shares the maintenance burden, so everyone can benefit. You bear the whole maintenance burden with your custom, one-off vibe-coded solutions.
poulsbohemian|22 days ago
max51|22 days ago
If you manage a code base this way at your company, sooner or later you will hit a wall. What happens when the AI can't fix an important bug or is unable to add a very important feature? Now you are stuck with a big fat dirty pile of code that no human can figure out, because it wasn't coded by a human and was never designed to be understood by a human in the first place.
jdswain|22 days ago
ghiculescu|22 days ago
pphysch|22 days ago
Customer A is in a totally unknown database state due to a vibe-coded bug. Great, the bug is fixed now, but you're still f-ed.
andrekandre|22 days ago
j-krieger|21 days ago
Except now you have code you didn't write and patches you didn't write either. Your "colleague" also has no long-term memory.
agosta|22 days ago
113|22 days ago
Have we not seen loads of examples of terrible AI-generated PRs every week on this site?
viraptor|22 days ago
Don't assume that when people make fun of some examples that there aren't thousands more that nobody cares to write about.
brightball|22 days ago
rsynnott|21 days ago
Amazon, which employs many thousands of SREs (or, well, pseudo-SREs; AIUI it's not quite the conventional SRE role), is presumably just doing so for charitable purposes, if they are so easy to replace with magic robots.
IhateAI|21 days ago
[deleted]