> Unfortunately, the winds of change are sometimes irreversible. The continuing drop in cost of computers has now passed the point at which computers have become cheaper than people. The number of programmers available per computer is shrinking so fast that most computers in the future will have to work at least in part without programmers.
They do. Servers, smartphones, and most embedded systems don't need an "operator" as in the past. Your source was probably thinking of that kind of "programmer".
In 1989 or so, the man who later became my programming teacher at community college night school was at a party, and a man he knew came up to him and told him he was a programmer now too!
This confused my teacher, as he knew this guy wasn't super technical, and he asked him more about it. I may not have the details exactly right, but the man said something like "I use Lotus Notes every day!"
The word programmer had a very different meaning 40 years ago.
The ratio of computer minutes per programmer-minute has indeed gone to an amazing number nowadays! I work in VFX (at RSP) and this fact is vividly illustrated for me all the time by the millions of thread-hours we go through on the renderfarm each week!
Despite all the astounding developments in AI/ML, though, I still think there's a critical need for the application of human/biological imagination and creativity. Sure, the amount of leverage between thoughts and CPU cycles can be utterly giant now, but it doesn't seem to diminish the need (where performance or correctness/fewer bugs are needed) for a full understanding of what the computer actually gets up to in the end.
For what it's worth, we do have an ML department at RSP and they are doing great! But I'm not sure we'd get very far if we tried to vibe-code the underlying pipeline, as it really requires full understanding of many interlocking pieces.
The pattern you've identified has a precise mechanism that's rarely named: each abstraction layer doesn't eliminate complexity; it relocates it. COBOL moved complexity from machine instructions to business logic specification. 4GLs moved it from code to data modeling. No-code moves it from programming to workflow configuration. LLMs are moving it from syntax to prompt engineering and output verification.
The relocation is genuinely valuable — each move makes the simple cases dramatically simpler. But the complexity doesn't disappear. It accumulates at the new boundary, which is why each wave creates a new class of specialists rather than eliminating specialists.
What's underappreciated about the current wave is where the complexity is relocating to. With LLMs generating code, the hard problem is no longer writing correct syntax — it's verifying that the generated output is correct, secure, and maintainable. That verification problem is arguably harder than the original coding problem, because you're now auditing code you didn't write, in a codebase shaped by decisions you didn't make, produced by a system whose reasoning you can't inspect.
The irony is that LLMs may be creating demand for a skill that programming culture has historically undervalued: careful, systematic verification of code you didn't write. That's closer to auditing than engineering. And it turns out auditing is hard.
It has been a while, but I remember a project of mine trying to port an FTP client to a 'secure compiler' (this was long before Rust and probably a distant ancestor of Checked C). In theory, if I could successfully port it, it would be much more resilient to particular kinds of issues (and maybe even attacks). This was the era when formal-proof coding was also trying to take off in the industry.
After wading through an impressive number of compiler errors (again, it was technically compatible) and attempts to fix them, I eventually surrendered and acknowledged that at the very least, this was beyond my abilities.
I probably would have gotten much further just rewriting it from scratch.
The article talks about how 'software development will be democratized', but the current LLM hype is quite the opposite. The LLMs are owned by large companies and are quite impossible for any individual to train, if only because of energy costs. The situation where I am typing my code on my Linux machine is much more democratic.
Right, people misuse this term "democratized" all the time. Because it sounds nice. But it's incorrect.
Democracy is about governance, not access.
A "democratized" LLM would be one in which its users collectively made decisions about how it was managed. Or one where the companies that owned LLMs were run democratically.
It is democratising from the perspective of non-programmers: they can now make their own tools.
What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to do open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.
That's a great point, but you didn't make your Linux machine yourself. A large tech corp made it, and each of its parts. Some of us could probably make our own computers, but I don't think I'd be able to make one smaller than the house I live in. There's something to be said about large-scale automation, and it's not that it "democratizes" anything. Like you say: quite the opposite.
One important and often overlooked democratization is spreadsheet formulas: non-programmers began programming without knowing they were, and without concern for error and edge cases. I cannot find the reference right now, but I recall seeing years ago articles about how mistakes in spreadsheet formulae were costing millions or more.
I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Will the AIs get good enough so they/we won't have to? Or will people realize they are programming and discipline up?
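The stale-range mistake is the classic shape of those costly spreadsheet formula errors. A minimal Python sketch (the numbers and scenario are invented for illustration) of a formula that keeps averaging an old range after new rows are appended:

```python
# A spreadsheet-style average whose "range" silently omits the newest rows --
# analogous to =AVERAGE(B2:B7) left unchanged after data grew to B9.
# All figures below are made up.
growth_rates = [2.1, 1.8, -0.5, 3.2, 2.9, 1.1, -1.0, 2.4]

# What the formula should compute: the average over all rows.
correct = sum(growth_rates) / len(growth_rates)

# What the stale formula actually computes: only the first six rows,
# because nobody updated the range when two rows were appended.
stale_range = growth_rates[:6]
stale = sum(stale_range) / len(stale_range)

# The two answers genuinely differ, and nothing flags the discrepancy.
assert abs(correct - stale) > 0.1
```

The edge case here is exactly the kind a disciplined programmer tests for and a casual formula author never sees, because the sheet happily shows a plausible number either way.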
I have a feeling that the cost of bad/inefficient/late software runs into at least the billions. The biggest risks are unavoidably attached to the most costly software projects, which are probably the ones most likely to be conducted in the most sophisticated and professional fashion, with the latest silver-bullet methodologies.
The Mythical Man Month is just over half a century old, yet still reads like it was written yesterday.
>Or will people realize they are programming and discipline up?
Or will there be coding across disciplines, and attendant theories of literacies in context?
What I like about the OP is the consonance with literate practices, which have gone through similar generations of "our children don't know how to [...]" alongside "our children will not need to [...] because of the machines."
Worse, they were doing functional programming just by chaining formulas without side effects, surpassing the skills of most self-proclaimed programmers out there.
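The formulas-as-functional-programming point can be made concrete: each cell is a pure function of its inputs, and chaining cells is function composition. A small Python sketch (the cell formulas are invented for illustration):

```python
# Spreadsheet cells as pure functions: each "cell" depends only on its
# inputs and has no side effects, so chaining them is function composition.
def net(gross, tax_rate):        # like =B1*(1-B2)
    return gross * (1 - tax_rate)

def monthly(annual):             # like =B3/12
    return annual / 12

def rounded(x):                  # like =ROUND(B4, 2)
    return round(x, 2)

# Re-evaluating the whole chain on every input change is exactly what a
# spreadsheet's recalculation engine does.
result = rounded(monthly(net(65000, 0.3)))
```

No shared state, no ordering bugs: change an input and everything downstream recomputes, which is the property the comment above is pointing at.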
I often think about how the modern world genuinely does run on Excel formulas, many written by amateurs, most without automated tests and with version control based on final_final_v2 suffixes.
Somehow civilization continues to function!
Makes me a bit less terrified that untested vibe coded slop will sink the economy. It's not that different from how things work already.
> non-programmers began programming without knowing they were
Using Excel in the traditional sense isn't the same as programming, unless they were doing some VBA or something like that, which the vast majority of Excel/spreadsheet users don't.
> spreadsheet formulae
Formulas. We aren't speaking Latin here.
> I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Programming isn't really about edge cases or errors.
I remember sitting in a senior seminar class in 1989 full of CS students. We were solemnly informed by a very earnest IBM employee that we would regret having majored in computer science because IBM's CASE tools were going to kill the job market. That aged like milk.
Will something come along some day that will actually drastically reduce the need for programmers/developers/software engineers? Maybe. Are we there yet? My LLM experience makes me seriously doubt it.
I remember sitting in my first year university classes, in 2003, and we were given quite the opposite outlook - don't limit ourselves to what we consider to be 'the industry' as it is right now, because most of the jobs we'll have in our careers don't yet exist.
LOL... I was in the same position. I graduated from high school in '88 and got my first job a couple of years later, working at a small insurance company running an IBM AS/400. I had just gotten my job as an operator with a dream of becoming a programmer, and here comes IBM with its CASE tool. I truly thought the world was going to end.
A couple of years later, Microsoft came out with Visual Basic, and I thought, OMG, I'm toast. Secretaries are going to be writing code. I was a developer by this time, writing code in FoxPro and getting into PowerBuilder.
All this to say, "I've been in IT for many years, and companies promise a lot but rarely deliver completely on their promises." Do programmers and others in the tech field need to adapt? Yes. Is AI going to be disruptive to some extent? Yes. Are all jobs going away? No.
A good LLM is a great tool for those who know what they are doing. They can follow some very tedious code paths (if thread 1 is doing this while thread 2 and thread 4 are doing that...). However, they also can write some really, really bad code. They sometimes propose bad solutions/architecture. You need someone knowledgeable to guide them and keep them on a good path.
Back in the 80's there were ads pitching 4GL tools as replacements for the "dinosaurs", who everyone then looked to when the 4GL language failed to solve the problem.
I attended a CASE tools conference in the 1990s, which of course included a vendor exhibition. The vendors all had demos of creating an application using their tool. At multiple vendor stands I asked to see the code generated by their CASE tool. Invariably, the salespeople would start waffling about how the code was no longer important (sound familiar?), how you didn't need to examine the engine of a car while driving it, and so on. It had a very "pay no attention to the man behind the curtain" feel to it. It convinced me that I didn't need to pay any attention to CASE tools, and history confirmed that.
I find it so fundamentally unhinged that people think things will get fully automated to the point that humans no longer matter. We are centuries into the deep automation of certain things, like looms, but people with deep understanding of those things are still needed to guide the automation and keep it working to meet human needs.
To ignore that pattern and say everything's going to be automated and humanity will be irrelevant seems to me to be... more of a death wish against human agency, than a prediction based on reality.
> We are centuries into the deep automation of certain things, like looms, but people with deep understanding of those things are still needed to guide the automation and keep it working to meet human needs.
The difference this time is that the thing they're trying to automate is intelligence. The goal is a machine that's as smart as a Nobel Prize winner or a good CEO, across all fields of human intellectual endeavor, and which works for dollars an hour. The goal is also for this machine to be infinitely copyable for the cost of some GPUs and hard drives.
The next goal after that will be to give that machine hands, so that it can do any physical labor or troubleshooting a human can do. And again, the goal is for the hands to be cheaper to produce and cheaper to automate than humans.
You may ask yourself, who would need humans in a future where all intellectual and physical tasks can be done better and cheaper by a machine? You may also ask yourself, who would control the machines? You may ask yourself, what leverage would ordinary humans have in a future that no longer needed them for anything? Or perhaps you would not ask those questions.
But this is the future investors are dreaming of, and the future that they're investing trillions of dollars to reach. That's the dream.
I think people feel that once the pool of humans required to do a thing diminishes to the point that their occupation is rare enough to be invisible, that is essentially the same as "fully automating" it.
I have certainly never met anyone who works in "loom engineering" in my entire life.
The thing being automated in this case is human intelligence. If you've been paying attention, more and more economically significant knowledge work is threatened by the advancement of AI capabilities. This is a credible threat. Deny this, and you are the one in denial of reality.
Fundamentally unhinged? How presumptive of you to declare with confidence AI will never become more capable than humans.
But I suppose it's fitting. If after all that has happened your priors still have not budged then I'm sorry to say you will probably never understand this.
> I should be extremely skeptical about excitable tech guys predicting big things in short time frames.
Edit: I read your other comment. I don't disagree with you here.
One other thing that is often ignored: Most of the business class, executive class, ... even working class, DON'T want to write code.
The reasons vary, but in general, just as some people don't want to touch maths (even if they might be good at it if they tried), some people loathe the very idea of being technical, either because they think it is beneath them, or they just don't see themselves that way.
And like the article explains, even when "programming" tools seem to become simpler to use, they still require technical specification, and once people feel like they are getting close to "programming" they check out.
Most people don’t avoid programming because it’s too hard. They avoid it because they don’t identify as technical. Just like with math, many people who could be decent at it still recoil from it. It’s not capability; it’s identity and preference. Humans aren’t fundamentally logical thinkers; we’re storytelling creatures who occasionally use logic to justify what we already feel.
Developers are “unwanted overhead” until the customer money threatens to walk out the door. They’re going to damage their future products and probably reduce their customer base (fewer consumers) and then sit there looking like gaffed fish when the budget ink turns red. “Who would have thought…”
Funny part is we've already had this exact thing happen with outsourcing. It sure looked like a bargain until you got to such pesky details as correctness and maintainability.
I remember being in my early 20s, learning C and Pascal, and having this one kid tell me I was learning dead languages, that he’d earn 3 times more than me by learning a 4GL, and that he was 3 times smarter than everyone else too.
The only reason I remember this encounter so clearly was because he got rather annoyed, to the point of being aggressive, when I pointed out that most of the computing landscape was built on C and this wasn’t going to change any time soon.
Multiple decades later, and C-derived languages still rule the world. I do sometimes wonder if his opinion mellowed with time.
LLMs seem quite successful when considered as something like a natural language interface, but expecting intelligence seems a step too far. For one, they do not learn, at least not online, and that is a somewhat important requirement for truly intelligent behaviour.
Arguably programming is as much learning as it is writing code. This is part of the reason some people copy an entire API and don't realise they're not so much building useful code as building an understanding.
In some sense, programming is about figuring out which algorithms are a fitting metaphor for business problems. By programming, you are building a model of the business problem and a model of its solution. Most of the non-programmers who are in positions of authority (managers, CEOs, even some CTOs), do not understand that this is what programmers do. From their point of view, the authorities come up with a "strategy", after dozens of meetings, and give the programmers vague instructions based on the strategy, and programmers turn those instructions into code that does something somewhere, usually after finding ways to avoid bad or unfeasible ideas, while still complying with the instructions.
To them, an LLM is indistinguishable from a programmer. From the point of view of authority, progress happens one meeting at a time. The reality is that there is a pyramid of experts beneath the authorities, that keep everything running smoothly, in spite of the best attempts of the authorities to demolish the foundation of the pyramid by "helping".
EDIT: to end on a positive note, it does not have to be this way. We just have to be willing to understand _how_ the organization we are a part of actually functions. And that means actually being curious instead of merely authoritative. I understand that curiosity is hard to maintain when you swim with sharks, so maybe don't swim with sharks.
If most programming is <em>ShitWork ®</em> and most programmers are performing ShitWork and LLMs are good at ShitWork, then most programmers are out of a job. If those programmers can pivot to another non-ShitWork or programming-adjacent function, they can remain employed.
There are beaucoup other things people could be doing besides coding YASW [Yet Another Stupid Website].
At the moment LLMs tend to work well when you constrain them, and you can craft the constraints with the help of the same LLM in a different session. Then you can verify whether the output code obeys the constraints in yet another session, and make it adjust the code to obey them. If one of the constraints was to yield highly functional code, you can start refining function by function as well. There is a pattern here.
If you are a good engineer you can dictate data structures to it too. It then performs even better.
I believe the writing is on the wall at this point. It does a very adequate job if I invest enough time in writing and refining the specs and give it the data structures (and/or database schemas) I want it to use. And there is no comparison between the number of hours I spend wrangling it and the number of hours it would take me to do the code myself.
This is the worst it's going to be and it's already quite good, it wasn't that good a mere three months ago.
The main pitfall is trying to get an LLM to read your mind, in doing so you are putting too much load on whatever passes for their intelligence quotient. That isn't how you get good results or get a good measure of their capabilities.
Windows in 1998: this is the worst it's ever going to be.
Uber in 2010: this is the worst it's ever going to be.
There's some triumphalism here. What happens when training data becomes scarcer because open source as a paradigm was killed? What happens when investor cash flows elsewhere and training and inference need to become profitable on their own?
- Software engineering is a cost center, they are middlemen between the C-level ideas and a finished product.
- Software engineering is about figuring out how to automate a problem, exploring the domain, defining context, tradeoffs, and unlocking new capabilities in the process
Quasi-relevant excerpt from an odd essay (footnotes and references omitted).
```
My own eyes spent countless nights observing, with curiosity and wonder and delight, the responses of a computer, as I commanded it with code, like a sorcerer casting spells. I could not have known, that this obedient machine, this silicon golem, was also, slowly and imperceptibly, enchanting me, and changing how my eyes would see.
At the time^21, I was a mere fifteen years old, young enough, so that the gravity of life was weak enough, and the mind nimble enough, to allow me to explore without any material justification.
The computer was the believed and I was the believer.
A consequence of becoming obsessed^22 with computer programming, is that one starts to see new metaphors, algorithmic metaphors, everywhere one looks. This new metaphorical lens, belongs entirely to the third eye. Without this lens, I would look at a traffic jam, and see a traffic jam. With the lens, I would look at a traffic jam, and wonder if, and to what extent, the latency-throughput trade-off^23 was true for highways. Without the lens, I would read about social theory, and simply see the words. With the lens, I would ask if society was, a tree^24 , a graph^25, a tree of graphs, or a graph of trees^26.
To generalize, the computer programmer looks at something, and asks, _is this thing an algorithm, and if so, what kind_? The entire _trade_ of computer programming, it revolves around this question, around the discovery of metaphors that fit^27 [13][14].
It is thus little surprise, when a computer programmer asks if (or sometimes asserts that) a certain kind of algorithm^28 is intelligence^29 , consciousness, or both.
The entire ritual of computer programming, is similar to the trade, in that it involves discovering metaphors, not as a means to an end, but as their own end. This ritual is difficult to explain to someone who has never practiced it. Imagine, instead of trying to find metaphors that bridge the real to the algorithmic, one tries to find metaphors that bridge the algorithmic to itself.
It is very similar to what mathematicians do, but it requires writing programs in a very principled and abstract way^30.
This ritual, unlike the ritual of writing, and unlike the ritual of mathematics, has a dominant material component (the computer) which can make your code, in addition to an _imaginary_ experience, a _material_ experience^31. This makes the computer a medium — an artificial oracle or artificial hallucinogen — that can safely imagine the unimaginable. And like the oracle, the computer exists to provide insight^32.
Without the ritual of programming, there would be no field of chaos theory, nor complex systems (very important for economics and environmental sciences), and _certainly_ no elaborate fractals. Pure mathematics could only scratch the surface, because the mathematical ideas, of the mid 20th century, that our imaginations could access, were insufficient for exploring these systems. Computers allow us, not unlike microscopes and telescopes, to magnify the informational dimension of nature [17].
Computers, and the arcane programming languages that make them obey, are magic machines, that created a new interaction between, two elements of the human psychic triad, the immaterial and material.
What is this triad, and what is its third element? The concept of the triad appears so frequently, in recorded human thought, and in the structure of language, that it is either some kind of adaptive ideal^33, or a consequence of language itself^34, if not both. Pythagoras called _three_ perfection itself. Plato divided the world into three parts. And, even today, our modern shamans and sages, use triads to discuss the universe.
Roger Penrose has a triad consisting of physical, platonic, and mind. Lacan has a triad consisting of real, symbolic, and imaginary. Plato has a triad of good, truth, and beauty. Of the three, Lacan’s naming is the most self-explanatory.
In this essay, the _material_ is the real, and the _immaterial_ is the other two.
The _trade_ of programming is driven by the _real_, while the _ritual_ of programming is driven by the _imaginary_. A trade is pursued because of real, material concerns (such as covering the cost of living), while a ritual is pursued because of imaginary concerns — concerns that can, more precisely, be called _aesthetic_.
```
The market however has done a pretty good job of it, especially when it's a developer bull market that suddenly shifts directions. Case in point: late 90s, the mad rush to put warm bodies in chairs for those who could even spell HTML. A few years later, many had left and gone back to selling cars or whatever they did before.
I generally agree that it's difficult and counterproductive to try to eliminate talented programmers who put together the core of systems and set up the patterns that things like LLMs can emulate.
But, the modal programmer at this point is some person who attended a front-end coding bootcamp for a few months and basically just knows how to chain together CSS selectors and React components. I do think these people are in big trouble.
So while the core, say, 10% of people should remain in the system, this 90% periphery of pretty bad programmers will probably need to move on to other jobs.
Oh :D I have a feeling that the bad programmers won't move anywhere. There is one reason for it: the code part is probably the smallest piece, while most of the work is in getting actual business requirements that are worth a lick.
During the 90’s economic crisis, all the drafters drawing building blueprints by hand disappeared from the Swedish construction industry. Engineers started using CAD instead.
Just one example of how this has happened again and again.
In every recession where there were mass lay-offs of programmers (not every recession hits programmers hard), there were many articles saying that whatever the latest thing was [see article], it was the cause, and that the industry was getting rid of programmers it would never need again.
In every case, of course, "it's the economy, stupid". The tools made little difference in the need for programmers. The tools that worked actually increased the need, because things you wouldn't even attempt without the tools were now worth hiring extra people to do.
Until a year ago I believed as the author did. Then LLMs got to the point where they sit in meetings like I do, make notes like I do, have a memory like I do, and their context window is expanding.
Only issue I saw after a month of building something complex from scratch with Opus 4.6 is poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.
It won’t be long before AI employees are going to join daily standup and deliver work alongside the team with other users in the org not even realizing or caring that it’s an AI “staff member”.
It won’t be much longer after that when they will start to tech lead those same teams.
The closer you get to releasing software, the less useful LLMs become. They tend to go into loops of 'Fixed it!' without having fixed anything.
In my opinion, attempting to hold the hand of the LLM via prompts in English for the 'last mile' to production ready code runs into the fundamental problem of ambiguity of natural languages.
From my experience, the developers who believe LLMs are good enough for production are either building systems that are not critical (e.g. 80% correct is enough) or lack the experience to detect how LLM-generated code would fail in production beyond the 'happy path'.
After 2 years of using all of these tools (Claude C, Gemini cli, opencode with all models available) I can tell you it is a huge enabler, but you have to provide these "expert guardrails" by monitoring every single deliverable.
For someone who is able to design an end to end system by themselves these tools offer a big time saving, but they come with dangers too.
Yesterday a mid-level dev in my team proudly presented a web tool he "wrote" in Python (to be run on localhost) that runs kubectl in the background and presents things like the versions of images running in various namespaces, etc. It looked very slick; I can already imagine the product managers asking for it to be put on the network.
So what's the problem? For one, no threading whatsoever, no auth, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That is the risk of a person who knows something, but not enough, building tools by themselves.
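For what it's worth, the two missing pieces named here (threading and auth) are cheap to add even in a stdlib-only sketch. Everything below is hypothetical: `echo` stands in for the real kubectl call, and the token handling is deliberately minimal.

```python
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

API_TOKEN = "change-me"  # hypothetical: in practice, load from env/secret store

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal bearer-token check -- the auth the tool above was missing.
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        # Stand-in for the kubectl invocation; `echo` keeps the sketch runnable.
        out = subprocess.run(["echo", "pod-listing"],
                             capture_output=True, text=True).stdout
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(out.encode())

    def log_message(self, *args):  # silence default per-request logging
        pass

# ThreadingHTTPServer serves each request in its own thread, so one slow
# kubectl call no longer blocks every other user -- unlike a single-threaded
# loop. Port 0 lets the OS pick a free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
```

This is a sketch of the two fixes, not a hardened design; a real deployment would still want TLS, real credential management, and rate limiting.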
I can take a verbal description from a meeting with five to ten people and put together something they can interact with in two weeks. That is a lot slower than Claude Code! Yet everywhere I’ve worked, this is more than fast enough.
Over two more weeks I can work with those same five to ten people (who often disagree or have different goals) and get a first draft of a feature or small, targeted product together. In those latter two weeks, writing code isn’t what takes time; working through what people think they mean versus what they are actually saying, and mediating between one group and another when they disagree (or mostly agree), is the work. And then, after that, we introduce a customer. Along the way I learn to become something of an expert in whatever the thing is and continue to grow the product, handing chunks of responsibility to other developers, at which point it turns into a real thing.
I work with AI tooling and leverage AI as part of products, where it makes sense. There are parts of this cycle where it is helpful and time saving, but it certainly can’t replace me. It can speed up coding in the first version but, today, I end up going back and rewriting chunks and, so far, that eats up the wins. The middle bit it clearly can’t do, and even at the end when changes are more directed it tends toward weirdly complicated solutions that aren’t really practical.
If you have been in the industry for a few decades, you will be able to think of several hundred "silver bullets" that made great promises. Some even turned out to be great ideas, but none were the 10x revolution that they promised.
The article is a good summary of major movements through the decades, without so much detail that the whole point is lost. I would have put in a slightly different set of things if I had wanted to write that article, but the point would still stand, and I would leave out many things that could be put in but would be too much noise.
I'm not familiar with Software Reuse, but if it's about re-using software itself, one advantage of a live codebase is that it's understood in the head of a human being. That means when an issue is opened, a person remembers whether it's a new issue or not. It's not "just" semantic search: that person knows not only whether it's genuinely new (and thus can be closed), but why it exists in the first place. Is it the result of the current architecture, a dependency choice, etc., or rather simply a "shallow" bug that can be resolved by fixing a single function?
The potentially cool thing about LLMs is bootstrapping. No matter how much COBOL you wrote, COBOL didn't get better. LLMs can be used to make LLMs (and other software stuff) better. LLMs could be used to create their successor(s).
Of course, in the end, it won't do us humans any good, because when the Singularity AKA Rapture comes, we'll all be converted to Computronium. :-)
I am convinced that LLMs can't truly create really novel knowledge. They may even surface it in a way that looks novel, but not really create any new knowledge.
> "The name derived from the idea that The Last One was the last program that would ever need writing, as it could be used to generate all subsequent software."
That was released in 1981. Spoiler alert: it was not, in fact, the last one.
All the other attempts failed because they were just mindless conversions of formal languages to formal languages. Basically glorified compilers. Either the formal language wasn't capable enough to express all situations, or it was capable and thus it was as complex as the one thing it was designed to replace.
AI is different. You tell it in natural language, which can be ambiguous and not cover all the bases. And people are familiar with natural language. And it can fill in the missing details and disambiguate the others.
This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it. Now the AI takes the place of the engineer.
Also, I personally never believed, before AI, that programming would disappear, so the argument that "this has been hyped before" doesn't touch my soul.
I have no idea why this is so hard to understand. I'd like people to reply to me in addition to downvoting.
Programmers have enjoyed an occupation with solid stability and growing opportunities. AI challenging this virtually overnight is a tough pill to swallow. Naturally, many subscribe to the hope that it will fail.
How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.
A manager is not going to handle all the nitty-gritty details that an engineer knows. Fine, say they can ask an LLM to make a web portal.
Does he know about SQL injection? XSS?
Maybe he knows a little about security and asks the LLM to make a secure site with all the protection needed. But how does the manager know it works at all? If you only discover an issue in a critical part of the software after your users' data has been stolen, how bad is the fallout going to be?
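To make the SQL-injection point concrete, here is a minimal sketch (not from the thread; the table and function names are made up for illustration) of the same lookup written unsafely and safely with Python's stdlib `sqlite3`:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: user input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Safe: the driver passes `name` as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"                      # classic injection string
print(len(find_user_unsafe(conn, payload)))  # 2: the condition matches every row
print(len(find_user_safe(conn, payload)))    # 0: no user literally has that name
```

The code an LLM emits can take either shape, and only someone who knows the difference will notice which one they got.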
How good a tool is also depends on who's using it. Managers are obviously not engineers, unless they were engineers before becoming managers. But you are saying engineers are not needed, so where is this engineer-manager going to come from? I'm sure we're not growing them on engineering trees.
> And it can fill in the missing details and disambiguate the others.
Are you suggesting “And Claude, make no mistakes” works?
Because otherwise you need an expert operating the thing. Yes, it can answer questions, but you need to know what exactly to ask.
> This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it
I have yet to see vibe coding work like this. Even expert devs with LLMs get incorrect output. Any time you have to correct your prompt, that's where your argument fails.
> All the other attempts failed because they were just mindless conversions of formal languages to formal languages.
This is just categorically false.
No-code tools didn't fail because they were "mindless conversions of formal languages to formal languages". They failed because the people who were supposed to benefit the most (non-developers) neither had the time nor desire to build stuff in the first place.
The thing about talking to computers is less the formality and more the specificity. People don't know what they want. To use an LLM effectively, you need to think about what you want with enough clarity to ask for it and check that you're getting it. That LLMs accept your wishes in the form of natural language instead of something with a LALR(1) grammar doesn't magically obviate the need for specificity and clarity in communication.
I spent the last two weeks at work building a whole system to deploy automated Claude Code agents in response to events. Even before I finished it, it was already doing useful work, and now it is automatically handling Jira tickets and making PRs.
> The Eternal Promise: A History of Attempts at Manned Flight
Anyone banking on this technology not decreasing the demand for programmers, or on it offsetting the lost jobs with new, related ones ("prompt engineer"), is kidding themselves. And frankly, it's probably a good thing, at some level, in that there are far too many people in tech with no business being there. How many "developers" are effectively just gluing bits of JavaScript together and juggling NPM dependencies all day? And how many of them can't even accomplish this feat without a steady supply of Adderall? Ironically, these people seem to be some of the most enthusiastic AI adopters, so it's in some ways fitting that they'll likely be the first to be made redundant.
> There is every reason to believe that those who invest in deep understanding will continue to be valuable, regardless of what tools emerge.
I don't take issue with this, except that it's false comfort when you consider that demand will naturally ebb and individual workload will naturally escalate. In that light, I find it downright dishonest, because the rewards for attaining deep knowledge will continue to evaporate, necessitating AI assistance.
The reason it is different this time around is that the capabilities of LLMs have incentivized the professional class to betray the institutions that enabled their specializations. I am talking about the amazing minds at Adobe, Figma, and the FAANGs who are bridging agentic reasoners and diffusion models with the domain-specific needs of their respective professional users.
Humans are a class of beings, and the humans accelerating the advance of AI in creative tools are the reason that things are different this time. We have class traitors among us this time, and they're "just doing their jobs". For most, willful disbelief isn't even a factor. They think they're helping while each PR just brings them closer to unemployment.
Most of these "class traitors" live in high cost of living areas, and for them, the choice is "become unemployed within two weeks for not complying", or "become unemployed within a few years for complying". They are being betrayed by the shareholder class, and they in turn are betraying their customers and their species.
The only thing that we can do is to not make it worth their time in the long run. Don't let greed and fear slide. Don't hate someone for choosing their family and comfort over your own, hate the system that forces them to make that choice. Hold them accountable, but attack the system, instead of its hostages and victims.
Bridging software with domain-specific needs of its professional users is nothing new: that is how domain-specific professional software gets built. What is new is that the people doing this are being referred to hysterically as "class traitors", when the improvements they're working on will bring massive and widely available benefits to professionals the world over.
We have yet to invent ground-breaking tech that transcends either human nature or the banal depravity that stems from the profit motive at scale. Prior history of major tech innovations therefore may have some insight to offer regarding the expected outcomes of the current hype wave around AI. The notion that technology so cleanly breaks from underlying social paradigms as to be wholly unpredictable is one of the tech industry's most persistently naive and destructive mythologies.
simonw|1 day ago
Which includes this excellent line:
> Unfortunately, the winds of change are sometimes irreversible. The continuing drop in cost of computers has now passed the point at which computers have become cheaper than people. The number of programmers available per computer is shrinking so fast that most computers in the future will have to work at least in part without programmers.
YeGoblynQueenne|1 day ago
wincy|1 day ago
chihuahua|1 day ago
danwills|16 hours ago
entrustai|9 hours ago
The relocation is genuinely valuable — each move makes the simple cases dramatically simpler. But the complexity doesn't disappear. It accumulates at the new boundary, which is why each wave creates a new class of specialists rather than eliminating specialists.
What's underappreciated about the current wave is where the complexity is relocating to. With LLMs generating code, the hard problem is no longer writing correct syntax — it's verifying that the generated output is correct, secure, and maintainable. That verification problem is arguably harder than the original coding problem, because you're now auditing code you didn't write, in a codebase shaped by decisions you didn't make, produced by a system whose reasoning you can't inspect.
The irony is that LLMs may be creating demand for a skill that programming culture has historically undervalued: careful, systematic verification of code you didn't write. That's closer to auditing than engineering. And it turns out auditing is hard.
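One toy illustration of that auditing stance (my own sketch, not from the thread): rather than reading generated code line by line, pin down its behavior with property checks. Here `slugify` stands in for some hypothetical LLM-generated function we didn't write:

```python
import re

def slugify(title):
    # Pretend this function arrived from a code generator; we audit it,
    # we didn't author it.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Properties we require, regardless of how the function is implemented:
for title in ["Hello, World!", "  spaced  out  ", "ALL CAPS"]:
    slug = slugify(title)
    assert slug == slugify(slug), "must be idempotent"
    assert re.fullmatch(r"[a-z0-9-]*", slug), "only url-safe characters"
    assert not slug.startswith("-") and not slug.endswith("-")
```

The checks survive a regeneration of the function; a line-by-line reading does not.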
NBJack|4 hours ago
After wading through an impressive number of compiler errors (again, it was technically compatible) and attempts to fix them, I eventually surrendered and acknowledged that at the very least, this was beyond my abilities.
I probably would have gotten much further just rewriting it from scratch.
cjfd|1 day ago
tkel|1 day ago
Democracy is about governance, not access.
A "democratized" LLM would be one in which its users collectively made decisions about how it was managed. Or one where the companies that owned LLMs were run democratically.
Havoc|1 day ago
What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to do open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.
xg15|1 day ago
YeGoblynQueenne|1 day ago
heliumtera|1 day ago
PeterWhittaker|1 day ago
I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Will the AIs get good enough so they/we won't have to? Or will people realize they are programming and discipline up?
analog31|1 day ago
The Mythical Man Month is just over half a century old, yet still reads like it was written yesterday.
rsynnott|10 hours ago
And then of course there's this: https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...
> Or will people realize they are programming and discipline up?
Well, they apparently haven't with spreadsheets, 50 years on, so I wouldn't be optimistic.
mold_aid|9 hours ago
Or will there be coding across disciplines, and attendant theories of literacies in context?
What I like about the OP is the consonance with literate practices, which has gone through similar generations of "our children don't know how to [...]" alongside of "our children will not need to [...] because of the machines."
nurettin|1 day ago
Worse, they were doing functional programming just by chaining formulas without side effects, surpassing the skills of most self-proclaimed programmers out there.
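"Chaining formulas without side effects" can be sketched outside a spreadsheet too. In this toy example (my own, with made-up cell names), each "cell" is a pure function of upstream cells, exactly like `=B1*C1` referencing other cells:

```python
# Each lambda plays the role of a spreadsheet cell formula: pure,
# side-effect free, and defined only in terms of its inputs.
subtotal = lambda price, qty: price * qty  # like =B1*C1
tax      = lambda sub: sub * 0.08          # like =D1*0.08
total    = lambda sub, t: sub + t          # like =D1+E1

s = subtotal(19.99, 3)
print(round(total(s, tax(s)), 2))  # 64.77
```

Recalculation order falls out of the dependency graph, which is the same property spreadsheet engines exploit.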
simonw|1 day ago
Somehow civilization continues to function!
Makes me a bit less terrified that untested vibe coded slop will sink the economy. It's not that different from how things work already.
hearsathought|1 day ago
Using excel in the traditional sense isn't the same as programming. Unless they were doing some VBA or something like that which the vast majority of excel/spreadsheet users don't.
> spreadsheet formulae
Formulas. We aren't speaking Latin here.
> I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Programming isn't really about edge cases or errors.
manithree|1 day ago
Will something come along some day that will actually drastically reduce the need for programmers/developers/software engineers? Maybe. Are we there yet? My LLM experience makes me seriously doubt it.
sevenseacat|9 hours ago
aNoob7000|1 day ago
A couple of years later, Microsoft came out with Visual Basic, and I thought, OMG, I'm toast. Secretaries are going to be writing code. I was a developer by this time, writing code in FoxPro and getting into PowerBuilder.
All this to say, "I've been in IT for many years, and companies promise a lot but rarely deliver completely on their promises." Do programmers and others in the tech field need to adapt? Yes. Is AI going to be disruptive to some extent? Yes. Are all jobs going away? No.
bluGill|1 day ago
Back in the '80s there were ads for tools to replace the "dinosaurs", the same dinosaurs everyone looked to when their 4GL language failed to solve the problem.
antonvs|1 day ago
getnormality|1 day ago
To ignore that pattern and say everything's going to be automated and humanity will be irrelevant seems to me to be... more of a death wish against human agency, than a prediction based on reality.
ekidd|1 day ago
The difference this time is that the thing they're trying to automate is intelligence. The goal is a machine that's as smart as a Nobel Prize winner or a good CEO, across all fields of human intellectual endeavor, and which works for dollars an hour. The goal is also for this machine to be infinitely copyable for the cost of some GPUs and hard drives.
The next goal after that will be to give that machine hands, so that it can do any physical labor or troubleshooting a human can do. And again, the goal is for the hands to be cheaper to produce and cheaper to automate than humans.
You may ask yourself, who would need humans in a future where all intellectual and physical tasks can be done better and cheaper by a machine? You may also ask yourself, who would control the machines? You may ask yourself, what leverage would ordinary humans have in a future that no longer needed them for anything? Or perhaps you would not ask those questions.
But this is the future investors are dreaming of, and the future that they're investing trillions of dollars to reach. That's the dream.
debo_|1 day ago
I have certainly never met anyone who works in "loom engineering" in my entire life.
stevenhuang|14 hours ago
Fundamentally unhinged? How presumptive of you to declare with confidence AI will never become more capable than humans.
But I suppose it's fitting. If after all that has happened your priors still have not budged then I'm sorry to say you will probably never understand this.
> I should be extremely skeptical about excitable tech guys predicting big things in short time frames.
Edit: I read your other comment. I don't disagree with you here.
prmph|11 hours ago
The reasons vary, but in general, just as some people don't want to touch maths (even if they might be good at it if they tried), some people loathe the very idea of being technical, either because they think it is beneath them, or they just don't see themselves that way.
And like the article explains, even when "programming" tools seem to become simpler to use, they still require technical specification, and once people feel like they are getting close to "programming" they check out.
ithora|11 hours ago
jleyank|1 day ago
Don’t facilitate losing your job.
marginalia_nu|1 day ago
hnlmorg|1 day ago
The only reason I remember this encounter so clearly was because he got rather annoyed, to the point of being aggressive, when I pointed out that most of the computing landscape was built on C and this wasn’t going to change any time soon.
Multiple decades later, and C-derived languages still rule the world. I do sometimes wonder if his opinion mellowed with time.
shiandow|1 day ago
Arguably programming is as much learning as it is writing code. This is part of the reason some people copy an entire API and don't realise they're not so much building useful code as building an understanding.
nz|1 day ago
To them, an LLM is indistinguishable from a programmer. From the point of view of authority, progress happens one meeting at a time. The reality is that there is a pyramid of experts beneath the authorities, that keep everything running smoothly, in spite of the best attempts of the authorities to demolish the foundation of the pyramid by "helping".
EDIT: to end on a positive note, it does not have to be this way. We just have to be willing to understand _how_ the organization we are a part of actually functions. And that means actually being curious instead of merely authoritative. I understand that curiosity is hard to maintain when you swim with sharks, so maybe don't swim with sharks.
BobBagwill|23 hours ago
There are bookoo other things people could be doing besides coding YASW [Yet Another Stupid Website].
snackbroken|23 hours ago
kopirgan|1 day ago
I recall PowerBuilder in particular; it was all the rage.
coppsilgold|14 hours ago
If you are a good engineer you can dictate data structures to it too. It then performs even better.
I believe the writing is on the wall at this point. It does a very adequate job if I invest enough time in writing and refining the specs and give it the data structures (&/| database schemas) I want it to use. And there is no comparison between the number of hours I spend wrangling it and the number of hours it would take me to do the code myself.
This is the worst it's going to be and it's already quite good, it wasn't that good a mere three months ago.
The main pitfall is trying to get an LLM to read your mind; in doing so you are putting too much load on whatever passes for their intelligence quotient. That isn't how you get good results or a good measure of their capabilities.
0xcafefood|2 hours ago
Uber in 2010: this is the worst it's ever going to be.
There's some triumphalism here. What happens when training data becomes scarcer because open source as a paradigm was killed? What happens when investor cash flows elsewhere and training and inference need to become profitable on their own?
Wobbles42|4 hours ago
Meanwhile, search was better in the past and is at this point the best it's going to be.
Enshittification comes for all things.
manoDev|1 day ago
- Software engineering is a cost center, they are middlemen between the C-level ideas and a finished product.
- Software engineering is about figuring out how to automate a problem, exploring the domain, defining context, tradeoffs, and unlocking new capabilities in the process
nz|1 day ago
``` My own eyes spent countless nights observing, with curiosity and wonder and delight, the responses of a computer, as I commanded it with code, like a sorcerer casting spells. I could not have known, that this obedient machine, this silicon golem, was also, slowly and imperceptibly, enchanting me, and changing how my eyes would see.
At the time^21 , I was a mere fifteen years old, young enough, so that the gravity of life was weak enough, and the mind nimble enough, to allow me to explore without any material justification.
The computer was the believed and I was the believer.
A consequence of becoming obsessed^22 with computer programming, is that one starts to see new metaphors, algorithmic metaphors, everywhere one looks. This new metaphorical lense, belongs entirely to the third eye. Without this lense, I would look at a traffic jam, and see a traffic jam. With the lense, I would look at a traffic jam, and wonder if, and to what extent, the latency-throughput trade-off^23 was true for highways. Without the lense, I would read about social theory, and simply see the words. With the lense, I would ask if society was, a tree^24 , a graph^25, a tree of graphs, or a graph of trees^26.
To generalize, the computer programmer looks at something, and asks, _is this thing an algorithm, and if so, what kind_? The entire _trade_ of computer programming, it revolves around this question, around the discovery of metaphors that fit^27[13][14].
It is thus little surprise, when a computer programmer asks if (or sometimes asserts that) a certain kind of algorithm^28 is intelligence^29 , consciousness, or both.
The entire ritual of computer programming, is similar to the trade, in that it involves discovering metaphors, not as a means to an end, but as their own end. This ritual is difficult to explain to someone who has never practiced it. Imagine, instead of trying to find metaphors that bridge the real to the algorithmic, one tries to find metaphors that bridge the algorithmic to itself.
It is very similar to what mathematicians do, but it requires writing programs in a very principled and abstract way^30 .
This ritual, unlike the ritual of writing, and unlike the ritual of mathematics, has a dominant material component (the computer) which can make your code, in addition to an _imaginary_ experience, a _material_ experience^31. This makes the computer a medium — an artificial oracle or artificial hallucinogen — that can safely imagine the unimaginable. And like the oracle, the computer exists to provide insight^32.
Without the ritual of programming, there would be no field of chaos theory, nor complex systems (very important for economics and environmental sciences), and _certainly_ no elaborate fractals. Pure mathematics could only scratch the surface, because the mathematical ideas, of the mid 20th century, that our imaginations could access, were insufficient for exploring these systems. Computers allow us, not unlike microscopes and telescopes, to magnify the informational dimension of nature [17].
Computers, and the arcane programming languages that make them obey, are magic machines, that created a new interaction between, two elements of the human psychic triad, the immaterial and material.
What is this triad, and what is its third element? The concept of the triad appears so frequently, in recorded human thought, and in the structure of language, that it is either some kind of adaptive ideal^33 , or a consequence of language itself^34, if not both. Pythagoras called _three_ perfection itself. Plato divided the world into three parts. And, even today, our modern shamans and sages, use triads to discuss the universe.
Roger Penrose has a triad consisting of physical, platonic, and mind. Lacan has a triad consisting of real, symbolic, and imaginary. Plato has a triad of good, truth, and beauty. Of the three, Lacan’s naming is the most self-explanatory.
In this essay, the _material_ is the real, and the _immaterial_ is the other two.
The _trade_ of programming is driven by the _real_, while the _ritual_ of programming is driven by the _imaginary_. A trade is pursued because of real, material concerns (such as covering the cost of living), while a ritual is pursued because of imaginary concerns — concerns that can, more precisely, be called _aesthetic_. ```
bdcravens|1 day ago
megiddo|17 hours ago
LLMs seem to be a significant step forward in converting language to code, but they don't seem to engage self-directed abstraction.
sfblah|1 day ago
But, the modal programmer at this point is some person who attended a front-end coding bootcamp for a few months and basically just knows how to chain together CSS selectors and React components. I do think these people are in big trouble.
So while the core, say, 10% of people should, I think, remain in the system, this 90% periphery of pretty bad programmers will probably need to move on to other jobs.
iugtmkbdfil834|1 day ago
designerarvid|1 day ago
Just one example of how this has happened again and again.
bluGill|1 day ago
In every recession with mass layoffs of programmers (not every recession hits programmers hard), there were many articles claiming that whatever the latest thing was [see article] caused it, and that the industry was getting rid of programmers it would never need again.
In every case, of course, "it's the economy, stupid". The tools made little difference in the need for programmers. The tools that worked actually increased the need, because things you wouldn't even attempt without the tools were now worth hiring extra people to do.
ryanjshaw|1 day ago
Only issue I saw after a month of building something complex from scratch with Opus 4.6 is poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.
It won’t be long before AI employees are going to join daily standup and deliver work alongside the team with other users in the org not even realizing or caring that it’s an AI “staff member”.
It won’t be much longer after that when they will start to tech lead those same teams.
symfrog|1 day ago
In my opinion, attempting to hold the hand of the LLM via prompts in English for the 'last mile' to production ready code runs into the fundamental problem of ambiguity of natural languages.
From my experience, those developers that believe LLMs are good enough for production are either building systems that are not critical (e.g. 80% is correct enough), or they do not have the experience to be able to detect how LLM generated code would fail in production beyond the 'happy path'.
Roark66|1 day ago
For someone who is able to design an end to end system by themselves these tools offer a big time saving, but they come with dangers too.
Yesterday a mid-level dev on my team proudly presented a web tool he "wrote" in Python (to be run on localhost) that runs kubectl in the background and presents things like the versions of images running in various namespaces, etc. It looked very slick; I can already imagine the product managers asking for it to be put on the network.
So what's the problem? For one, no threading whatsoever, no auth, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That is the risk of a person who knows something, but not enough, building tools by themselves.
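The single-thread problem described above can be sketched in a few lines. This is my own illustration, not the dev's actual tool: `slow_query` stands in for a kubectl round trip (something like `subprocess.run(["kubectl", "get", "pods", "-n", ns])` in the real thing), simulated with a sleep so the example runs without a cluster:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_query(ns):
    time.sleep(0.1)  # stand-in for one kubectl round trip
    return f"result for {ns}"

namespaces = ["dev", "staging", "prod", "ci"]

# The naive tool: every query runs serially on one thread (~0.4 s here).
start = time.monotonic()
serial = [slow_query(ns) for ns in namespaces]
serial_time = time.monotonic() - start

# Same queries fanned out across a small thread pool (~0.1 s here).
start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(slow_query, namespaces))
parallel_time = time.monotonic() - start

print(parallel == serial, parallel_time < serial_time)
```

Same results, a fraction of the wall-clock time, and the fix is a few lines; the point is that someone has to know the serial version is a problem before it ships.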
cmiles74|1 day ago
Over two more weeks I can work with those same five to ten people (who often disagree or have different goals) and get a first draft of a feature or small, targeted product together. In those latter two weeks, writing code isn’t what takes time; working through what people think they mean versus what they are actually saying, and mediating between groups when they disagree (or mostly agree), is the work. And then, after that, we introduce a customer. Along the way I learn to become something of an expert in whatever the thing is and continue to grow the product, handing chunks of responsibility to other developers, at which point it turns into a real thing.
I work with AI tooling and leverage AI as part of products, where it makes sense. There are parts of this cycle where it is helpful and time saving, but it certainly can’t replace me. It can speed up coding in the first version but, today, I end up going back and rewriting chunks and, so far, that eats up the wins. The middle bit it clearly can’t do, and even at the end when changes are more directed it tends toward weirdly complicated solutions that aren’t really practical.
geraneum|1 day ago
That’s a bit… handwavy…!
bakugo|1 day ago
helsinkiandrew|1 day ago
You could argue that coding with LLMs is a form of software reuse, one that removes some of its disadvantages.
bluGill|1 day ago
The article is a good summary of major movements through the decades, without so much detail that the whole point is lost. I would have put in a slightly different set of things if I wanted to write that article, but the point would still stand, and I would leave out many things that could be included but would be too much noise.
utopiah|1 day ago
zozbot234|1 day ago
zahlman|10 hours ago
Seems like that little kerfuffle with all the 2-digit years in legacy COBOL code was a well-timed distraction.
BobBagwill|1 day ago
elzbardico|21 hours ago
debo_|1 day ago
faragon|6 hours ago
antonvs|1 day ago
bananaflag|1 day ago
danhau|1 day ago
t_mahmood|1 day ago
ajshahH|1 day ago
mexicocitinluez|1 day ago
quotemstr|1 day ago
rsynnott|10 hours ago
empath75|1 day ago
miljanm|1 day ago
anonnon|16 hours ago
pixelsort|1 day ago
nz|1 day ago
zozbot234|1 day ago
prsheetraj|1 day ago
Havoc|1 day ago
elcapitan|1 day ago
g947o|1 day ago
forgetfreeman|1 day ago