Whenever I get worried about this I comb through our ticket tracker and see that ~0% of them can be implemented by AI as it exists today. Once somebody cracks the memory problem and ships an agent that progressively understands the business and the codebase, then I'll start worrying. But context limitation is fundamental to the technology in its current form and the value of SWEs is to turn the bigger picture into a functioning product.
nemo1618|16 days ago
You are in danger. Unless you estimate the odds of a breakthrough at <5%, or you already have enough money to retire, or you expect that AI will usher in enough prosperity that your job will be irrelevant, it is straight-up irresponsible to forgo making a contingency plan.
overgard|16 days ago
I'm baffled that so many people think that only developers are going to be hit and that we especially deserve it. If AI gets so good that you don't need people to understand code anymore, I don't know why you'd need a project manager anymore either, or a CFO, or a graphic designer, etc etc. Even the people that seem to think they're irreplaceable because they have some soft power probably aren't. Like, do VC funds really need humans making decisions in that context..?
Anyway, the practical reason why I'm not screaming in terror right now is because I think the hype machine is entirely off the rails and these things can't be trusted with real jobs. And honestly, I'm starting to wonder how much of tech and social media is just being spammed by bots and sock puppets at this point, because otherwise I don't understand why people are so excited about this hypothetical future. Yay, bots are going to do your job for you while a small handful of business owners profit. And I guess you can use moltbot to manage your not-particularly-busy life of unemployment. Well, until you stop being able to afford the frontier models anyway, which is probably going to dash your dream of vibe coding a startup. Maybe there's a handful of winners, until there's not, because nobody can afford to buy services on a wage of zero dollars. And anyone claiming that the abundance will go to everyone needs to get their head checked.
nitwit005|16 days ago
It's not the odds of the breakthrough, but the timeline. A factory worker could have correctly seen that one day automation would replace him, and yet worked his entire career in that role.
There have been a ton of predictions about software engineers, radiologists, and some other roles getting replaced within months. Those predictions have clearly not panned out.
At this point the greater risk to my career seems to be the economy tanking, as that already seems to be happening. Unfortunately, switching careers can't save you from that.
themafia|16 days ago
I do. Show me any evidence that it is imminent.
> or you expect that AI will usher in enough prosperity that your job will be irrelevant
Not in my lifetime.
> it is straight-up irresponsible to forgo making a contingency plan.
No, I'm actually measuring the risk, you're acting as if the sky is falling. What's your contingency plan? Buy a subscription to the revolution?
jopsen|16 days ago
What contingencies can you really make?
Start training a physical trade, maybe.
If this is the end of SWE jobs, you'd better ride the wave. Odds are your estimate of when AI takes over is off by half a career anyway.
snowwrestler|15 days ago
It seems kind of like saying “I’m smarter than all the AIs in this one particular way.” If someone posted that, you would probably jump in to say they’re fooling themselves.
palmotea|16 days ago
More likely they get fired for no reason, never rehired, and the people left get burned out trying to hold it all together.
themafia|16 days ago
If you fail as a "higher up" you're no longer higher up. Then someone else can take your place. To the extent this does not naturally happen, it is evidence of petty or major corruption within the system.
sensanaty|16 days ago
The large, overwhelming majority of my team's time is spent combing through these tickets and making sense of them. Once we know what a ticket is even trying to say, we usually ship the solution in a few days at most, so implementation isn't the bottleneck, nowhere near.
This scenario has been the same everywhere I've ever worked, at large, old institutions as well as fresh startups.
The day I'll start worrying is when the AI is capable of following the web of people involved to translate what a vaguely phrased ticket that's been backlogged for God knows how long actually means.
arcologies1985|15 days ago
However, as you point out, we have no program-accessible source of data on who the stakeholders, contributors, managers, etc. are, and we have to write a lot of that ourselves. For a smaller business one could perhaps write all of it down in an accessible way to improve this, but for a large, dynamic business it seems very difficult.
UncleOxidant|16 days ago
I've been doing stuff with recent models (gemini 3, claude 4.5/6, even smaller, open models like GLM5 and Qwen3-coder-next) that was just unthinkable a few months back. Compiler stuff, including implementing optimizations, generating code to target a new, custom processor, etc. I can ask for a significant new optimization feature in our compiler before going to lunch and come back to find it implemented and tested. This is a compiler that targets a custom processor so there is also verilog code involved. We're having the AI make improvements on both the hardware and software sides - this is deep-in-the-weeds complex stuff and AI is starting to handle it with ease. There are getting to be fewer and fewer things in the ticket tracker that AI can't implement.
A few months ago I would've completely agreed with you, but the game is changing very rapidly now.
taysco|16 days ago
I don't agree they have solved this problem, at all, or really in any way that's actually usable.
deet|16 days ago
It's hard to predict how quickly it will be solved, and by whom first, but this appears to be a software engineering problem solvable through effort, resources, and time, not a fundamental physical law that must be circumvented, as in the physical sciences. Betting that it won't be solved well enough to affect today's work relatively soon is betting against substantial resources and investment.
slopinthebag|16 days ago
Plenty of things get substantial resources and investment and go nowhere.
Of course I could be totally wrong and it's solved in the next couple years, it's almost impossible to make these predictions either way. But I get the feeling people are underestimating what it takes to be truly intelligent, especially when efficiency is important.
datsci_est_2015|16 days ago
Wonder what that means for meatspace.
Edit: Would also disagree this isn’t a physics problem. Pretty sure power required scales according to problem complexity. At a certain level of problem complexity we’re pretty much required to put enough carbon in the atmosphere to cook everyone to a crisp.
Edit 2: illustrative example, an Epic in Jira: “Design fusion reactor”
krackers|16 days ago
This is no different from onboarding a new member of the team, and I think OpenAI was working on that "frontier":
>We started by looking at how enterprises already scale people. They create onboarding processes. They teach institutional knowledge and internal language. They allow learning through experience and improve performance through feedback. They grant access to the right systems and set boundaries. AI coworkers need the same things.
And tribal knowledge will not be a moat once execs realize that all they need to do is prioritize documentation instead of "code velocity" as a metric (sure, any metric gets Goodharted, but LLMs are great at sifting through garbage to find the high-perplexity tokens).
>But context limitation is fundamental to the technology in its current form
This may not be the case; large enough context windows plus external scratchpads would mostly obviate the need for true in-context learning. The main issue today is that "agent harnesses" suck. The fact that Claude Code is considered good is more an indication of how bad everything else is. Tool traces read like a drunken newb brute-forcing his way through tasks. LLMs can mostly "one-shot" individual functions, but orchestrating everything is the blocker. (Yes, there's progress on METR evals or whatever, but I don't trust any of that, else we'd actually see the results in real-world open source projects.)
LLMs don't really know how to interact with subagents. They're generally sort of myopic even with tool calls. They'll spend 20 minutes trying to fix build issues going down a rabbit hole without stepping back to think. I think some sort of self-play might end up solving all of these things, they need to develop a "theory of mind" in the same way that humans do, to understand how to delegate and interact with the subagents they spawn. (Today a failure case is agents often don't realize subagents don't share the same context.)
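To make the subagent failure mode above concrete, here's a minimal illustrative sketch (all names, including `run_llm`, are hypothetical stand-ins, not any real harness's API): because a spawned subagent starts with an empty context, the parent has to serialize whatever background it needs into the prompt itself.

```python
# Illustrative sketch only: `run_llm` is a hypothetical stand-in for a
# real model call, stubbed so the example is self-contained.

def run_llm(prompt: str) -> str:
    # Pretend model call; a real harness would hit an LLM API here.
    return f"[response to {len(prompt)} chars of prompt]"

def delegate(parent_context: list[str], task: str) -> str:
    # Common failure mode: assuming the subagent "remembers" the parent
    # conversation. It does not -- its context window starts empty.
    # So we explicitly inline a summary of the parent's context.
    briefing = "\n".join(parent_context[-5:])  # last few relevant notes
    prompt = f"Background:\n{briefing}\n\nTask: {task}"
    return run_llm(prompt)

notes = ["Build uses CMake.", "Tests live in tests/unit.", "CI runs clang-tidy."]
result = delegate(notes, "Fix the failing unit test in parser_test.cpp")
```

The point of the sketch is only the explicit `briefing` step: whatever theory of mind the parent agent develops has to bottom out in deciding what to copy into that prompt.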
Some of this is certainly in the base model and pretraining, but it needs to be brought out in the same way RL was needed for tool use.
malyk|16 days ago
I look at my ticket tracker and I see basically 100% of it that can be done by AI. Some with assistance, because the business logic is more complex or less well factored than it should be, but most of the work AI is perfectly capable of doing with a well-defined prompt.
gordonhart|16 days ago
- What is the validation system and how does it work today?
- What sort of UX do we want? What are the specific deficiencies in the current UX that we're trying to fix?
- What prior art exists on the backend and frontend, and how much of that can/should be reused?
- Are there any scaling or load considerations that need to be accounted for?
I'll probably implement this as 2-3 PRs in a chain touching different parts of the codebase. GPT via Codex will write 80% of the code, and I'll cover the last 20% of polish. Throughout the process I'll prompt it in the right direction when it runs up against questions it can't answer, and check its assumptions about the right way to push this out. I'll make sure that the tests cover what we need them to and that the resultant UX feels good. I'll own the responsibility for covering load considerations and be on the line if anything falls over.
Does it look like software engineering from 3 years ago? Absolutely not. But it's software engineering all the same even if I'm not writing most of the code anymore.
dwa3592|16 days ago
That's a sign that you have spurious problems under those tickets, or you have a PM problem.
Also, a job is not a task. If your company has jobs that consist of a single task, then those jobs would definitely be gone.
yodsanklai|16 days ago
I think it's more nuanced than that. I'd say that:
- 0% can't be implemented by AI
- but a lot of them can be implemented much faster thanks to AI
- a lot of them can be implemented slower when using AI (because the author has to fix hallucinations and revert changes that caused bugs)
As we learn to use these tools, even in their current state, they will increase productivity by some factor and reduce the need for programmers.
sarchertech|16 days ago
I have seen numerous 25-50% productivity boosts over my career. Not a single one of them reduced the overall need for programmers.
I can’t even think of one that reduced the absolute number of programmers in a specific field.
vincent_s|16 days ago
It's a coding agent that takes a ticket from your tracker, does the work asynchronously, and replies with a pull request. It does progressively understand the codebase. There's a pre-warming step so it's already useful on the first ticket, but it gets better with each one it completes.
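The loop described above (ticket in, pull request out, with per-repo memory accumulating between runs) can be sketched roughly like this. Everything here is hypothetical: `run_agent` and the `Ticket` shape stand in for the real tracker, model, and VCS integrations, which the comment doesn't detail.

```python
# Rough sketch of a ticket -> PR agent loop. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Ticket:
    id: str
    title: str
    body: str

def run_agent(ticket: Ticket, memory: dict) -> str:
    # Stand-in for the actual coding agent; returns a branch name.
    # `memory` is the per-repo knowledge carried over from past tickets,
    # which is what lets the agent "get better with each one".
    memory[ticket.id] = f"resolved: {ticket.title}"
    return f"agent/{ticket.id}"

def handle(ticket: Ticket, memory: dict) -> str:
    branch = run_agent(ticket, memory)
    # In a real system this would push the branch, open a PR,
    # and reply on the ticket with a link.
    return f"PR opened from {branch} for {ticket.id}"

repo_memory: dict = {}  # the "pre-warming" step would seed this before ticket #1
msg = handle(Ticket("T-101", "Fix login redirect", "Users bounce to /404"), repo_memory)
```

The interesting design question is entirely inside `memory`: what gets written back after each ticket, and how it's retrieved for the next one.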
The agent itself is done and working well. Right now I'm building out the infrastructure to offer it as a SaaS.
If anyone wants to try it, hit me up. Email is in my profile. Website isn't live yet, but I'm putting together a waitlist.
ninetyninenine|16 days ago
[deleted]
zozbot234|16 days ago
Um, you do realize that "the memory" is just a text file (or a bunch of interlinked text files) written in plain English? You can write these things out yourself. This is how you use AI effectively: by playing to its strengths, not by expecting it to have a crystal ball.