I'm almost 50, and have been writing code professionally since the late 90s. I can pretty much see projects in my head, and know exactly what to build. I also get paid pretty well for what I do. You'd think I'd be the prototype for anti-AI.
I'm not.
I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.
I liken AI development to working with a developer somewhere between junior and mid-level, someone I can give a paragraph or two of thought-out instructions and have them bang out an hour of work. (The potential for then stunting the growth of actual juniors into tomorrow's senior developers is a serious concern, but a separate problem to solve.)
In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.
Companies need to realise that using AI to go faster is great, but there's still a cognitive impact on the people. A little respite from the hardcore stuff is genuinely useful sometimes. Taking all of that away will be bad for people.
That said, some devs hate the boring easy bits and will thrive. As with everything, individuals need to be managed as individuals.
I'm just slightly younger than you, but have the exact same sentiment. Hell, even more so maybe, because what I realized is that "writing code to implement interesting ideas" is not really what I enjoy; it's coming up with the interesting ideas and experimenting with them. I couldn't care less about writing the code, and I only did it because I had to if I wanted to see my idea come to life.
AI has also been a really good brainstorming partner - especially if you prompt it to disable sycophancy. It will tell you straight up when you are over-engineering something.
It's also wonderful at debugging.
So I just talk to my computer, brainstorm architectures and approaches, create a spec, then let it implement it. If it was a bad idea, we iterate. The iteration loop is so fast that it doesn't matter.
Ever regret a design choice you'd normally live with because so much code would have to be changed? Not with agentic coding tools; they are great at implementing changes throughout the entire codebase.
And it's so easy to branch out to technologies you're not an expert in and still be really effective as you gain that expertise.
I honestly couldn't be happier than I am right now. And the tools get better every week, sometimes a couple times a week.
> I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.
I'm in the same boat (granted, with 10 years less) but can't really relate to this. By the time any part becomes boring, I start to automate/generalize it, which is very challenging to do well. That leaves me so little boring work that I speed-run through it faster by typing it myself than I could by prompting.
The parts in the middle – non-trivial but not big picture – in my experience are the parts where writing the code myself constantly uncovers better ways to improve both the big picture and the automation/generalization. Because of that, there are almost no lines of code that I write that I feel I want to offload. Almost every line of code either improves the future of the software or my skills as a developer.
But perhaps I've been lucky enough to work in the same place for long. If I couldn't bring my code with me and had to constantly start from scratch, I might have a different opinion.
I have a couple of niche areas of non-coding interest where I'm using AI to code. It is so amazing to write Rust and just add `todo!(...)` throughout the boilerplate. The AI is miserable at implementing domain knowledge in those niche areas, but now I can focus on describing the domain knowledge (in real Rust code, because I can't describe it precisely enough in English + pseudocode), and then say "fill in the todos, write some tests, make sure it compiles and passes linting", verify the tests check things properly, and I'm done.
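To make that workflow concrete, here's a toy sketch (my illustration; the function names and the domain formula are entirely invented, not from the original comment). The domain logic is written precisely by hand, while the plumbing is left as `todo!` stubs for the AI to fill in:

```rust
/// Domain knowledge written by hand, in real Rust, because English
/// pseudocode isn't precise enough. (Hypothetical: a made-up
/// normalized-deviation score, clamped to [0, 1].)
fn score(reading: f64, baseline: f64) -> f64 {
    ((reading - baseline).abs() / baseline.max(1e-9)).min(1.0)
}

/// Boilerplate left for the AI: "fill in the todos, write some tests,
/// make sure it compiles and passes linting."
fn load_readings(path: &str) -> Vec<f64> {
    todo!("parse readings from {path}")
}

fn report(scores: &[f64]) -> String {
    todo!("format a summary of {} scores", scores.len())
}

fn main() {
    // Only the hand-written domain logic runs here; the todo!() stubs
    // compile fine but would panic if called before being filled in.
    println!("score = {}", score(120.0, 100.0));
}
```

The nice property is that the whole file type-checks from day one, so the AI's job is reduced to replacing panicking stubs under an already-fixed interface.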
I've struggled heavily trying to figure out how to get it to write the exactly correct 10 lines of code that I need for a particularly niche problem, and so I've kind of given up on that, but getting it to write the 100 lines of code around those magic 10 lines saves me so much trouble, and opens me up to so many more projects.
I find it best as a "personal assistant" that I can use to give me information (sometimes highly focused) at a moment's notice.
> The potential for then stunting the growth of actual juniors into tomorrow's senior developers is a serious concern
I think it's a very real problem. I am watching young folks being frozen out of the industry, at the very beginning of their careers. It is pretty awful.
I suspect that the executives know that AI isn't yet ready to replace senior-levels, but they are confident that it will, soon, so they aren't concerned that there aren't any more seniors being crafted from youngsters.
Exactly. I tend to like Hotz, but by his description, every developer is also "a compiler", so it's a useless argument.
My life quality (as a startup cofounder wearing many different hats across the whole stack) would drop significantly if Cursor-like tools [1] were taken away from me, because it takes me a lot of mental effort to push myself to do the boring task, which leads to procrastination, which leads to delays, which leads to frustration. Being able to offload such tasks to AI is incredibly valuable, and since I've been in this space from "day 1", I think I have a very good grasp on what type of task I can trust it to do correctly. Here are some examples:
- Add logging throughout some code
- Turn a set of function calls that have gotten too deep into a nice class with clean interfaces
- Build a Streamlit dashboard that shows some basic stats from some table in the database
- Rewrite this LLM prompt to fix any typos and inconsistencies - yeah, "compiling" English instructions into English code also works great!
- Write all the "create index" lines for this SQL table, so that <insert a bunch of search use cases> perform well.
[1] I'm actually currently back to Copilot Chat, but it doesn't really matter that much.
Exactly. If you know how the whole thing works end to end, AI makes you incredibly dangerous. Anyone who specializes or never really learned how everything works is at a huge disadvantage.
However. There's also good news. AI is also an amazing tool for learning.
So what I see AI doing is simply separating the people who want to put forth effort from those who don't.
> I can pretty much see projects in my head, and know exactly what to build.
This is where AI actually helps - you have a very precise vision of what you want, but perhaps you've forgotten about the specific names of certain API methods, etc. Maybe you don't want to implement all the cases by hand. Often validating the output can take just seconds when you know what it is you're looking for.
The other part of making the output do what you want is the ability to write a prompt that captures the most essential constraints of your vision. I've noticed that the ability to write and articulate ideas well in natural language is the actual bottleneck for most developers. Communicating your ideas takes just as much practice to get good at as anything else.
Yes, unfortunately the boring parts are what junior devs used to do so the senior devs could work on the good stuff. Now that AI is doing the boring stuff nobody has to hire those pesky jr developers anymore. Yay?
The problem is that junior developers are what we make senior developers with, so in 15 years this is going to be yet another thing that the US used to be really good at but is no longer capable of doing, just like many important trades in manufacturing. The manufacturers were all concerned only with their own immediate profit, and made the basic sustainability of their workforce, let alone the health of the trades that supported their industries, a problem for everyone else to take care of. Well, everyone else did the same thing.
This resonates with me. I'm also around the same age and have the same amount of experience.
I love AI and use it for both personal and work tasks for two reasons:
1. It's a way to bounce around ideas without (as much) bias as a human. This is indispensable because it gives you a fast feedback mechanism and validates a path.
2. It saves me typing and time. I give it one-shot "basic work" to do, and it's able to accomplish at least 80% of what I'd call complete. Although it may not be 100%, it's still a net positive given the amount of time it saves me.
It's not lost on me that I'm effectively being trained to always add guardrails, be very specific about the instructions, and always check the work of AI.
100% agree. I am interested in seeing how this will change how I work. I'm finding that I'm now more concerned with how I can keep the AI busy and how I can keep the quality of outputs high. I believe it has a lot to do with how my projects are structured and documented. There are also some menial issues (e.g. structuring projects to avoid merge conflicts becoming bottlenecks).
I expect that in a year my relationship with AI will be more like a TL working mostly at the requirements and task definition layer managing the work of several agents across parallel workstreams. I expect new development toolchains to start reflecting this too with less emphasis on IDEs and more emphasis on efficient task and project management.
I think the "missed growth" of junior devs is overblown, though. Did the widespread adoption of higher-level languages really hurt the careers of developers who missed out on the days when we had to do explicit memory management? We're just shifting the skillset and removing unnecessary overhead. We could argue endlessly about technical depth being important, but in my experience this has never been truly necessary to succeed in your career. We'll mitigate these issues the same way we do with higher-level languages: by first focusing on the properties and invariants of the solutions, outside-in.
>> developer somewhere between junior and mid-level,
Analogies to humans don't work that well. AI is super-human in some respects while also lacking the ability to continually work toward a goal over long periods of time. AI can do very little on its own - just short / scoped / supervised tasks.
However, sometimes the situation is reversed, AI is the teacher who provides some examples on how to do things or provides hints on how to explore a new area and knows how others have approached similar things. Then, sometimes, AI is an astute code reviewer, typically providing valuable feedback.
Anyway, I've stopped trying to anthropomorphize AI and simply try to reason about it based on working with it. That means combinations of direct ChatGPT usage with copy/paste/amend workflows, async-style / full-PR-style usage, one-shot "hail Mary" throwaway PRs just to establish an initial direction, as well as PR reviews of my own code. I'm using AI all the time, but never anything like how I would work with another human.
> I love AI for speed running through all the boring stuff and getting to the good parts.
But the issue is that some of that speedrunning takes so much time it becomes inefficient. It's slowly improving (GPT-5 is incredible), but sometimes it gets stuck on a really mundane issue and regresses endlessly unless I intervene. And I am talking about straightforward functional code.
> I can pretty much see projects in my head, and know exactly what to build.
I think you’re the best case support for AI coding. You know clearly what you want, so you know clearly what you don’t want. So if you had decent verbal dexterity you could prompt the AI model and manage to accomplish what you intended.
A lot of programming problems / programmer contexts don’t match that situation. Which is the problem with universalizing the potency of AI / benefits of AI coding.
same. 50s, coding since the 90s, make more $$ than I can spend in 3 lifetimes. one constant thing in my 3-decade career has been that the absolute best people I worked with all had one common thread: absolute laziness. sounds strange, but every other trait of a great SWE has not been universal except laziness.
the laziness manifests itself as productivity, as crazy as this sounds. how? lazy people find a way to automate repetitive tasks. what I have learned from them over the years is that anything you do twice has to be automated, as the third time is around the corner :)
what does this have to do with AI? AI has taken automation to another level, allowing us to automate so much of our work that was not previously possible. I have found myriad ways to use AI, and several of my best (lazy) co-workers have as well. I cannot imagine doing my work without it anymore, not because of any "magic" but because my lazy ass no longer has to do all the things that I have automated away
Yes! Once I've figured out that this problem is best solved using parser combinators, and I have a good idea of how to model the transformation, I'm so glad I can delegate the work to the LLM codegen and focus on the specification, test cases, corner cases, etc.
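For readers who haven't used the technique: a minimal parser-combinator sketch (my illustration, not the commenter's code) shows the shape of what gets delegated once the modeling decision is made. A parser is just a function from input to an optional (value, rest-of-input) pair, and combinators build bigger parsers from smaller ones:

```rust
// Primitive parser: consume one ASCII digit, return it plus the rest.
fn digit(input: &str) -> Option<(char, &str)> {
    let mut chars = input.chars();
    match chars.next() {
        Some(c) if c.is_ascii_digit() => Some((c, chars.as_str())),
        _ => None,
    }
}

// Combinator: apply a parser one or more times, collecting the results.
fn many1<T>(
    p: impl Fn(&str) -> Option<(T, &str)>,
) -> impl Fn(&str) -> Option<(Vec<T>, &str)> {
    move |mut input: &str| {
        let mut out = Vec::new();
        while let Some((v, rest)) = p(input) {
            out.push(v);
            input = rest;
        }
        if out.is_empty() { None } else { Some((out, input)) }
    }
}

// A derived parser built from the pieces above.
fn number(input: &str) -> Option<(u64, &str)> {
    let (digits, rest) = many1(digit)(input)?;
    let n: u64 = digits.into_iter().collect::<String>().parse().ok()?;
    Some((n, rest))
}

fn main() {
    assert_eq!(number("42abc"), Some((42, "abc")));
    assert_eq!(number("abc"), None);
    println!("parser combinator sketch works");
}
```

Once this structure is in place, the human work really is the specification (what `number`, `many1`, etc. must accept and reject), and filling in more combinators is the mechanical part that's easy to hand off.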
This is a more extreme example of the general Hacker News groupthink about AI.
Geohot is easily a 99.999 percentile developer, and yet he can’t seem to reconcile that the other 99.999 percent are doing something much more basic than he can ever comprehend.
It’s some kind of expert paradox, if everyone was as smart and capable as the experts, then they wouldn’t be experts.
I have come across many developers that behave like the AI. Can’t explain codebases they’ve built, can’t maintain consistency.
It's like an aerospace engineer not believing that the person who designs the toys in a Kinder egg doesn't know how fluid sims work.
First, the assertion that the best model of "AI coding" is a compiler. Compilers deterministically map one formal language to another under a spec. LLM coding tools are search-based program synthesizers that retrieve, generate, and iteratively edit code under constraints (tests/types/linters/CI). That's why they can fix issues end-to-end on real repos (e.g., SWE-bench Verified), something a compiler doesn't do. Benchmarks now show top agents/models resolving large fractions of real GitHub issues, which is evidence of synthesis + tool use, not compilation.
Second, that the "programming language is English". Serious workflows aren’t "just English." They use repo context, unit tests, typed APIs, JSON/function-calling schemas, diffs, and editor tools. The "prompt" is often code + tests + spec, with English as glue. The author attacks the weakest interface, not how people actually ship with these tools.
Third, non-determinism isn't disqualifying. Plenty of effective engineering tools are stochastic (fuzzers, search/optimization, SAT/SMT with heuristics). Determinism comes from external specs: unit/integration tests, type systems, property-based tests, CI gates.
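To sketch that last point (my example, not from the comment): even when the code generator is stochastic, a seeded property-based check is fully deterministic and acts as the gate. Here `sort_under_test` stands in for an AI-generated function, and a tiny hand-rolled LCG stands in for a fuzzer's input source:

```rust
// Stand-in for an AI-generated function whose body we didn't write.
fn sort_under_test(mut v: Vec<u32>) -> Vec<u32> {
    v.sort();
    v
}

fn main() {
    // Fixed seed: the *test* is deterministic even if generation wasn't.
    let mut state: u64 = 42;
    let mut next = move || {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (state >> 33) as u32
    };
    for _ in 0..100 {
        let input: Vec<u32> = (0..16).map(|_| next()).collect();
        let out = sort_under_test(input.clone());
        // Properties, not examples: output is ordered and same length.
        assert!(out.windows(2).all(|w| w[0] <= w[1]));
        assert_eq!(out.len(), input.len());
    }
    println!("all properties hold");
}
```

The point is that correctness lives in the externally specified properties; the implementation can come from anywhere, stochastic or not, as long as it passes the gate.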
False dichotomy: "LLMs are popular only because languages/libraries are bad."
Languages are improving (e.g. Rust, TypeScript), yet LLMs still help, because the real bottlenecks are API lookup, cross-repo reading, boilerplate, migrations, test writing, and refactors: the areas where retrieval and synthesis shine. These are complementary forces, not substitutes.
Finally, no constructive alternatives are offered. "Build better compilers/languages" is fine but modern teams already get value by pairing those with AI: spec-first prompts, test-gated edits, typed SDK scaffolds, auto-generated tests, CI-verified refactors, and repo-aware agents.
A much better way to think about AI coding and LLMs is that they aren’t compilers. They’re probabilistic code synthesizers guided by your constraints (types, tests, CI). Treat them like a junior pair-programmer wired into your repo, search, and toolchain. But not like a magical English compiler.
Author here. I agree with this comment, but if I wrote more like this my blog post would get less traction.
"LLM coding tools are search-based program synthesizers," in my mind this is what compilers are. I think most compilers do far too little search and opt for heuristics instead, often because they don't have an integrated runtime environment, but it's the same idea.
"Plenty of effective engineering tools are stochastic," sure but while a SAT solver might use randomness and that might adjust your time to solve, it doesn't change the correctness of the result. And for something like a fuzzer, that's a test, which are always more of a best effort thing. I haven't seen a fuzzer deployed in prod.
"Determinism comes from external specs and tests," my dream is a language where I can specify what it does instead of how it does it. Like the concept of Halide's schedule but more generic. The computer can spend its time figuring out the how. And I think this is the kind of tools AI will deliver. Maybe it'll be with LLMs, maybe it'll be something else, but the key is that you need a fairly rigorous spec and that spec itself is the programming. The spec can even be constraint based instead of needing to specify all behavior.
I'm not at all against AI, and if you are using it at a level described in this post, like a tool, aware of its strengths and limitations, I think it can be a great addition to a workflow. I'm against the idea that it's a magical English compiler, which is what I see in public discourse.
People knock "English as a programming language", but in my opinion this is the whole value of AI programming: by the time you've expressed your design and constraints well enough that an LLM can understand it, then anyone can understand it, and you end up with a codebase that's way more maintainable than what we're used to.
The problem of course is when people throw away the prompt and keep the code, like the code is somehow valuable. This would be like everyone checking in their binaries and throwing away their source code every time, while arguments rage on HN about whether compilers are useful. (Meanwhile, compiler vendors compete on their ability to disassemble and alter binaries in response to partial code snippets.)
The right way to do AI programming is: English defines the program, generated code is exactly as valuable as compiler output is, i.e. it's the actual artifact that does the thing, so in one sense it's the whole point, but iterating on it or studying it in detail is a waste of time, except occasionally when debugging. It's going to take a while, but eventually this will be the only way anybody writes code. (Note: I may be biased, as I've built an AI programming tool.)
If you can explain what needs to be done to a junior programmer in less time than it takes to do it yourself, you can benefit from AI. But, it does require totally rethinking the programming workflow and tooling.
I think this gets to a fundamental problem with the way the AI labs have been selling and hyping AI. People keep on saying that the AI is actually thinking and it's not just pattern matching. Well, as someone that uses AI tools and develops AI tools, my tools are much more useful when I treat the AI as a pattern matching next-token predictor than an actual intelligence. If I accidentally slip too many details into the context, all of a sudden the AI fails to generalize. That sounds like pattern matching and next token prediction to me.
> This isn’t to say “AI” technology won’t lead to some extremely good tools. But I argue this comes from increased amounts of search and optimization and patterns to crib from, not from any magic “the AI is doing the coding”
* I can tell Claude Code to crank out some basic CRUD API and it will crank it out in a minute, saving me an hour or so.
* I need an implementation of an algorithm that has been coded a million times on GitHub; I ask the AI to do it and it cranks out a correct working implementation.
If I only use the AI in its wheelhouse it works very well, otherwise it sucks.
I think this comes down to levels of intelligence. Not knowledge, I mean intelligence. We often underestimate the amount of thinking/reasoning that goes into a certain task. Sometimes the AI can surprise you and do something very thoughtful, this often feels like magic.
Both CRUD and boilerplate are arguably a tooling issue. But there are also a bunch of things only AI will let you do.
My tests with full trace level logging enabled can get very verbose. It takes serious time for a human to parse where in the 100 lines of text the relevant part is.
Just telling an AI: "Run the tests and identify the root cause" works well enough, that nowadays it is always my first step.
Of course, there is some truth in what you say. But business is desperate for new tech where it can redefine the order (who is big and who is small). There are floating billions chasing short-term returns. Fund managers will be fired if they don't jump on the new fad in town. CIOs and CEOs will be fired if they don't jump on AI. It's just a nuclear arms race. It's good for no one, but the other guy is in it, so you need to be too.
Think about this: before there were cars on roads, people were just as happy. Cars came, cities were redesigned for cars with buildings miles apart, and commuting miles became the new norm. You can no longer say cars are useless, because the context around them has changed to make cars a basic need.
AI does the same thing. It changes the context in which we work. Everyone expects you to use AI (and cars). It becomes a basic need, though a forced one.
To go further, hardly anything produced by science or technology is a basic need for humans. The context got twisted, making them basic needs. Tech solutions create the problems they claim to solve; the problem did not exist before the solution came around. That's the core driving force of business.
That METR study gets a lot of traction for its headline, and I doubt many people read the whole thing (it was long), but the data showed a 50% speedup for the one dev with the most experience with Cursor/AI, suggesting both a learning curve and wild statistical variation in a small sample. A later erratum suggested that another dev, one who did not see a speedup, had not represented their experience correctly, which further draws the significance of the findings into question.
The specific time sucks measured in the study are addressable with improved technology like faster LLMs and improved methodology like running parallel agents—the study was done in March running Claude 3.7 and before Claude Code.
We should also value the perception of having worked 20% less, even if you actually spent more time. Time flies when you're having fun!
AI coding is the one thing that got my back to programming. I got to the point in life, when my ability to focus is reducing, and I prefer to send the remaining energy elsewhere. I kind of gave up on programming, just doing architecture and occasionally doing very small programming tasks. It all changed when I discovered Claude Code and saw that the way it works, is kind of how I work. I also use a lot of grep to find my way through a new codebase, I also debug stuff by adding logs to see the context, I also rely on automated tests to tell me something is broken. I'm still very good at reading code, I'm good at architecture, and with these tools, I feel I can safely delegate the boring bits of writing code and debugging trivial things to AI. Yes, it's slower than if I focused on the task myself, but the point is that I'd not be able to focus on the task myself.
I feel like many opinion pieces on AI coding are written from the perspective of highly experienced software engineers, often from a kind of ivory tower. (even that study they cite was based on “16 experienced open-source developers.”)
But for people who aren’t seasoned coders, these tools are incredibly valuable. I have some coding experience myself, but it’s never been my profession (I’m a visual artist). Now, I can accomplish in an afternoon what would otherwise take me days. Two months ago I left my job to work solo on my video game, and even though my budget is limited, I still make sure to keep Claude and ChatGPT. Also, being able to write something on my phone at 1 a.m. when I can’t sleep, send it to Codex, and then test it first thing in the morning at my computer feels pretty magical. It also takes away the worry of “what if this isn’t the best way to do it?” since refactoring parts of my codebase is now so easy. It helps not just with writing code, but also with removing the fear that might otherwise keep me from sitting down and getting the work done.
That said, I understand that much of the criticism is really aimed at the marketing hype around these tools and the broader “this will replace the engineers at your company” narrative.
I have a boring opinion. A cold take, served straight from the freezer.
He is right; however, AI is still darn useful. He hints at why: patterns.
Writing a test suite for a new class when an existing one is in place is a breeze. It can even come up with tests you wouldn't have thought of or would have been too time-pressed to check.
It also applies to non-test code too. If you have the structure it can knock a new one out.
You could have some Lisp contraption that DRYs all the WETs so there is zero boilerplate. But in reality we are not crafting these perfect codebases; we make readable, low-magic, boilerplatey code on the whole in our jobs.
> AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this?
Adderall is similar. It makes people feel a lot more productive, but research on its effectiveness[0] seems to show that, at best, we get only a mild improvement in productivity, and marked deterioration of cognitive abilities.
Kind of surprised by this take - I use openpilot often and also use claude code.
I kind of consider them the same thing. openpilot can drive really well on highways for hours on end when nothing interesting is happening. Claude Code can do straightforward refactors, write boilerplate, do scaffolding, and run automated git bisects with no input from me.
Neither one is a substitute for the driver. Claude Code is like the Level 2 self-driving of programming.
I'm a 100% vibe-coder. AI/CS is not my field. I've made plenty of neat apps that are useful to me. Don't ask me how they work; they just do.
Sure the engineering may be abysmal, but it's good enough to work.
It only takes basic English to produce these results, plus complaining to the AI agent that "The GUI is ugly and overcrowded. Make it look better, and add dark mode."
Want specs? "include a specs.md"
This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.
This is all possible because AI was trained on the outstanding work of CS engineers like y'all.
But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers. In reality, every human is a scientist and a hacker. The guy on a street corner in India came up with novel ways to make and sell his product but never wrote a research paper on it. The guy on his fourth marriage noted a statistical correlation in the outcome when meeting women at a bar vs. at a church. The plant that grew in the crevice of a rock noted that sunlight absorption was optimal at an angle of 78.3 degrees and grew in that direction.
> I'm a 100% vibe-coder. AI/CS is not my field. I've made plenty of neat apps that are useful to me.
This describes me pretty well too, though I do have a tiny bit of programming experience. I wrote maybe 5000 lines of code unassisted between 1995-2024. I didn't enjoy it for the most part, nor did I ever feel I was particularly good at it. On the more complex stuff I made, it might take several weeks of effort to produce a couple hundred lines of working code.
Flash forward to 2025, and I co-wrote (with LLMs) a genuinely useful piece of back-office code to automate a logistics problem I was previously solving via a manual process in a spreadsheet. It would hardly be difficult for most people here to write this program; it's just making some API calls, doing basic arithmetic, and displaying the results in a TUI. But I took a crack at it several times on my own, and unfortunately, between the API documentation being crap and my own lack of experience, I never got to the point where I could even make a single API call. LLMs got me over that hump and greatly assisted with writing the rest of the codebase, though I did write some of it by hand and worked through some debugging to solve issues in edge cases. Unlike OP, I do think I understand reasonably well what >90% of the code is doing.
> This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.
So yeah, to the people here saying the above sentiment is BS: it's not. For people who have never worked in programming or even in tech, these tools can be immensely useful.
I think it's like CMSes and page builders enabling people to build their own websites without HTML or server knowledge. They're not making web developers disappear; instead there are more web developers now, because some of those people eventually outgrow their page builders and need to hire web developers.
> Don't ask me how they work; they just do. Sure the engineering may be abysmal, but it's good enough to work.
I've worked on several projects from a few different engineering disciplines. Let me tell you from that experience alone: this is a statement that most of us dread to hear. We had nothing but pain whenever someone said something similar. We live by the code that nothing good is an accident; it is always the result of deliberate care and effort. Be it quality, reliability, user experience, fault tolerance, etc. How can you be deliberate and ensure any of those if you don't understand even the abstractions that you're building? (My first job was this principle applied to the extreme. The mission demanded it. Just documenting and recording designs, tests, versioning, failures, corrections, and even meetings and decisions was a career in itself.)
Am I wrong about this when it comes to AI? I could be. I concede that I can't keep up with the new trends well enough to assess all of them, and it would be foolish to say that I'm always right. But my experience with AI tools hasn't been great so far; it's far easier to delegate the work to sufficiently mentored junior staff. Perhaps I'm doing something wrong. I don't know. But that guiding principle, that nothing good is an accident, is fundamental to our professional lives, and I find it hard to just drop it like that.
> But the article is highly opinionated. It's like saying only phD's can be called scientists, or only programmers can be computer hackers.
Almost every single quality professional in my generation, especially the legends, started those pursuits in their childhood under self-motivation (not even as part of a school curriculum). You learn these things by pushing your boundary a little bit every day. You are a novice one day; you are the master on another. Are you absolutely pathetic at dancing? Try ten minutes a day and see what happens in ten years. Meanwhile, kids don't even care about others' opinions while learning. Nobody is gatekeeping you on account of your qualifications.
What they're challenging are the assumptions that vibe/AI coders seem to hold, but don't agree with their intuition. They are old fashioned developers. But their intuitions are honed over decades and they tend be surprisingly accurate for reputed developers like Geohotz. (There are numerous hyped up engineering projects out there that made me regret ignoring my own intuition!) It's even more valid if they can articulate their intuition into reasons. This is a very formal activity, even if they express them as blog posts. Geohotz clearly articulates why he thinks that AI copilots are nothing more than glorified compilers with a very leaky specification language. It means that you need to be very careful with your prompts, on top of tracking the interfaces, abstractions and interactions that the AI currently doesn't do at all for you. Perhaps it works for you at the scale you're trying. But lessons like the Therac-25 horror story [1] always remind us how bad things can go wrong. I just don't want to put that extra effort and waste my time reviewing AI generated code. I want to review code from a person whom I can ask for clarifications and provide critiques and feedback that they can follow later.
I'm 72, a dev for 40 years. I've lost a step or two. It's harder to buckle down and focus, but using AI tools have enabled me to keep building stuff. I can spec a project, have an agent build it then make sure it works. I just code for fun anyway.
There's a lot of complaining about current compilers / languages / codebase in similar posts, but barely any ideas for how to make them better. It doesn't seem surprising that people go for the easier problem (make the current process simpler with LLMs) than for the harder one (change the whole programming landscape to something new and actually make it better).
Even though I don’t buy that LLMs are going to replace developers and quite agree with what is said, this is more of a critique of LLMs as English-to-code translators. LLMs are very useful for many other things.
Researching concepts, for one, has become so much easier, especially for things where you don’t know anything yet and would have a hard time to even formulate a search engine query.
These articles are beyond the point of exhausting. Guys, just go use the tools and see if you like them and feel more capable with them. If you do, great, if you don’t, then stop.
I do agree with many points in the article, but not about the last part, namely that coding with AI assist makes you slower.
Personal experience (data points count = 1), as a somewhat seasoned dev (>30yrs of coding), it makes me WAY faster. I confess to not read the code produced at each iteration other than skimming through it for obvious architectural code smell, but I do read the final version line by line and make a few changes until I'm happy.
Long story short: things that would take me a week to put together now take a couple of hours. The vast bulk of the time saved is not having to identify the libraries I need, and not to have to rummage through API documentation.
By the author's implied definition of compiler, a human is also a compiler. (Coffee in, code out, so the saying goes.)
But code is distinct from design, and unlike compilers, humans are synthesizers of design. LLMs let you spend more time as system architect instead of code monkey.
Great short read. But this “ It’s why the world wasted $10B+ on self driving car companies that obviously made no sense.”
Not everything should make sense. Playing , trying and failing is crucial to make our world nicer. Not overthinking is key, see later what works and why.
bdcravens|6 months ago
I'm almost 50, and have been writing code professionally since the late 90s. I can pretty much see projects in my head, and know exactly what to build. I also get paid pretty well for what I do. You'd think I'd be the prototype for anti-AI.
I'm not.
I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.
I liken AI development to a developer somewhere between junior and mid-level, someone I can give a paragraph or two of thought-out instructions and have them bang out an hour of work. (The potential for then stunting the growth of actual juniors into tomorrow's senior developers is a serious concern, but a separate problem to solve.)
onion2k|6 months ago
In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.
Companies need to realise that AI to go faster is great, but there's still a cognitive impact on the people. A little respite from the hardcore stuff is genuinely useful sometimes. Taking all of that away will be bad for people.
That said, some devs hate the boring easy bits and will thrive. As with everything, individuals need to be managed as individuals.
bbatchelder|6 months ago
AI has also been a really good brainstorming partner - especially if you prompt it to disable sycophancy. It will tell you straight up when you are over-engineering something.
It's also wonderful at debugging.
So I just talk to my computer, brainstorm architectures and approaches, create a spec, then let it implement it. If it was a bad idea, we iterate. The iteration loop is so fast that it doesn't matter.
Did you end up regretting a design choice that you'd normally live with because so much code would have to be changed? Not with agentic coding tools - they are great at implementing changes throughout the entire codebase.
And it's so easy to branch out to technologies you're not an expert in, and still be really effective as you gain that expertise.
I honestly couldn't be happier than I am right now. And the tools get better every week, sometimes a couple times a week.
ttiurani|6 months ago
I'm in the same boat (granted, 10 years less) but can't really relate with this. By the time any part becomes boring, I start to automate/generalize it, which is very challenging to do well. That leaves me so little boring work that I speed run through it faster by typing it myself than I could prompt it.
The parts in the middle – non-trivial but not big picture – in my experience are the parts where writing the code myself constantly uncovers better ways to improve both the big picture and the automation/generalization. Because of that, there are almost no lines of code that I write that I feel I want to offload. Almost every line of code either improves the future of the software or my skills as a developer.
But perhaps I've been lucky enough to work in the same place for long. If I couldn't bring my code with me and had to constantly start from scratch, I might have a different opinion.
timeinput|6 months ago
I've struggled heavily trying to figure out how to get it to write the exactly correct 10 lines of code that I need for a particularly niche problem, and so I've kind of given up on that, but getting it to write the 100 lines of code around those magic 10 lines saves me so much trouble, and opens me up to so many more projects.
ChrisMarshallNY|6 months ago
I find it best as a "personal assistant" that I can use to give me information (sometimes highly focused) at a moment's notice.
> The potential for then stunting the growth of actual juniors into tomorrow's senior developers is a serious concern
I think it's a very real problem. I am watching young folks being frozen out of the industry, at the very beginning of their careers. It is pretty awful.
I suspect that the executives know that AI isn't yet ready to replace senior-levels, but they are confident that it will, soon, so they aren't concerned that there aren't any more seniors being crafted from youngsters.
Would suck, if they bet wrong, though…
curl-up|6 months ago
My life quality (as a startup cofounder wearing many different hats across the whole stack) would drop significantly if Cursor-like tools [1] were taken away from me, because it takes me a lot of mental effort to push myself to do the boring task, which leads to procrastination, which leads to delays, which leads to frustration. Being able to offload such tasks to AI is incredibly valuable, and since I've been in this space from "day 1", I think I have a very good grasp on what type of task I can trust it to do correctly. Here are some examples:
- Add logging throughout some code
- Turn a set of function calls that have gotten too deep into a nice class with clean interfaces
- Build a Streamlit dashboard that shows some basic stats from some table in the database
- Rewrite this LLM prompt to fix any typos and inconsistencies - yeah, "compiling" English instructions into English code also works great!
- Write all the "create index" lines for this SQL table, so that <insert a bunch of search usecases> perform well.
[1] I'm actually currently back to Copilot Chat, but it doesn't really matter that much.
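The last item in that list is also easy to sanity-check mechanically: ask the query planner whether the generated DDL actually serves the search use case. A minimal sketch using SQLite (the table and index names here are made up for illustration, not from any real schema):

```python
import sqlite3

# Hypothetical table plus the kind of "create index" DDL one might ask an
# AI assistant to generate for an email-lookup use case.
DDL = """
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, created_at TEXT);
CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_users_created ON users (created_at);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)

# EXPLAIN QUERY PLAN reports whether a query is a full scan or an index search.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
detail = " ".join(row[-1] for row in plan)
assert "idx_users_email" in detail  # the lookup hits the intended index
```

The same trick works as a cheap review gate: if the plan still reports a full scan after the assistant adds its indexes, the generated DDL didn't do its job.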
tom_m|6 months ago
However, there's also good news: AI is an amazing tool for learning.
So what I see AI doing is simply separating people who want to put effort forth and those who don't.
bob1029|6 months ago
This is where AI actually helps - you have a very precise vision of what you want, but perhaps you've forgotten about the specific names of certain API methods, etc. Maybe you don't want to implement all the cases by hand. Often validating the output can take just seconds when you know what it is you're looking for.
The other part of making the output do what you want is the ability to write a prompt that captures the most essential constraints of your vision. I've noticed that the ability to write and articulate ideas well in natural language is the actual bottleneck for most developers. Communicating your ideas takes just as much practice as anything else to get good at.
DrewADesign|6 months ago
The problem is that junior developers are what we make senior developers with— so in 15 years, this is going to be yet another thing that the US used to be really good at, but is no longer capable of doing, just like many important trades in manufacturing. The manufacturers were all only concerned with their own immediate profit and made the basic sustainability of their workforce, let alone the health of the trades that supported their industries, a problem for everyone else to take care of. Well, everyone else did the same thing.
thesurlydev|6 months ago
I love AI and use it for both personal and work tasks for two reasons:
1. It's a way to bounce around ideas without (as much) bias as a human. This is indispensable because it gives you a fast feedback mechanism and validates a path.
2. It saves me typing and time. I give it one-shot, "basic work" to do and it's able to accomplish at least 80% of what I'd say is complete. Although it may not be 100%, it's still a net positive given the amount of time it saves me.
It's not lost on me that I'm effectively being trained to always add guardrails, be very specific about the instructions, and always check the work of AI.
haute_cuisine|6 months ago
jb3689|6 months ago
I expect that in a year my relationship with AI will be more like a TL working mostly at the requirements and task definition layer managing the work of several agents across parallel workstreams. I expect new development toolchains to start reflecting this too with less emphasis on IDEs and more emphasis on efficient task and project management.
I think the "missed growth" of junior devs is overblown though. Did the widespread adoption of higher-level languages really hurt the careers of developers missing out on the days when we had to do explicit memory management? We're just shifting the skillset and removing the unnecessary overhead. We could argue endlessly about technical depth being important, but in my experience this hasn't ever been truly necessary to succeed in your career. We'll mitigate these issues the same way we do with higher-level languages - by first focusing on the properties and invariants of the solutions outside-in.
osigurdson|6 months ago
Analogies to humans don't work that well. AI is super-human in some respects while also lacking the ability to continually work toward a goal over long periods of time. AI can do very little on its own - just short / scoped / supervised tasks.
However, sometimes the situation is reversed, AI is the teacher who provides some examples on how to do things or provides hints on how to explore a new area and knows how others have approached similar things. Then, sometimes, AI is an astute code reviewer, typically providing valuable feedback.
Anyway, I've stopped trying to anthropomorphize AI and simply try to reason about it based on working with it. That means combinations of direct ChatGPT usage with copy / paste / amend type workflows, async / full PR style usage, one-shot "hail Mary" type throwaway PRs just to establish an initial direction, as well as PR reviews of my own code. I'm using AI all the time, but never anything like how I would work with another human.
3abiton|6 months ago
But the issue is that some of that speedrunning takes so much time it becomes inefficient. It's slowly improving (GPT-5 is incredible), but sometimes it gets stuck on a really mundane issue and regresses endlessly unless I intervene. And I am talking about straightforward functional code.
wwweston|6 months ago
ssivark|6 months ago
I think you're the best-case support for AI coding. You know clearly what you want, so you know clearly what you don't want. So if you have decent verbal dexterity, you can prompt the AI model and manage to accomplish what you intended.
A lot of programming problems / programmer contexts don’t match that situation. Which is the problem with universalizing the potency of AI / benefits of AI coding.
bdangubic|6 months ago
the laziness manifests itself as productivity, as crazy as this sounds. how? lazy people find a way to automate repetitive tasks. what I have learned from these people over the years is that anything you do twice has to find a way to be automated, as the third time is around the corner :)
what does this have to do with AI? AI has taken automation to another level, allowing us to automate so much of our work that was not previously possible. I have found a myriad of ways to use AI, and several of my best (lazy) co-workers have as well. I cannot imagine doing my work anymore without it, not because of any "magic" but because my lazy ass would otherwise have to do all the things that I have automated away
vijucat|6 months ago
matt3D|6 months ago
Geohot is easily a 99.999 percentile developer, and yet he can’t seem to reconcile that the other 99.999 percent are doing something much more basic than he can ever comprehend.
It’s some kind of expert paradox, if everyone was as smart and capable as the experts, then they wouldn’t be experts.
I have come across many developers that behave like the AI. Can’t explain codebases they’ve built, can’t maintain consistency.
It's like an aerospace engineer refusing to believe that the person who designs the toys in a Kinder egg doesn't know how fluid sims work.
dsiegel2275|6 months ago
First, take the assertion that the best model of "AI coding" is a compiler. Compilers deterministically map a formal language to another under a spec. LLM coding tools are search-based program synthesizers that retrieve, generate, and iteratively edit code under constraints (tests/types/linters/CI). That’s why they can fix issues end-to-end on real repos (e.g., SWE-bench Verified), something a compiler doesn’t do. Benchmarks now show top agents/models resolving large fractions of real GitHub issues, which is evidence of synthesis + tool use, not compilation.
Second, that the "programming language is English". Serious workflows aren’t "just English." They use repo context, unit tests, typed APIs, JSON/function-calling schemas, diffs, and editor tools. The "prompt" is often code + tests + spec, with English as glue. The author attacks the weakest interface, not how people actually ship with these tools.
Third, non-determinism isn't disqualifying. Plenty of effective engineering tools are stochastic (fuzzers, search/optimization, SAT/SMT with heuristics). Determinism comes from external specs: unit/integration tests, type systems, property-based tests, CI gates.
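That point can be made concrete: a randomized checker is stochastic in how it explores inputs, but the pass/fail verdict comes from a fixed spec. A minimal stdlib-only sketch, where `merge_sorted` is just a stand-in for code an AI might have produced (all names here are illustrative):

```python
import random

def merge_sorted(a, b):
    """Stand-in for an AI-generated function under review:
    merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]

def holds_property(fn, trials=500, seed=0):
    """Stochastic input generation, deterministic verdict: the output must
    equal the sorted multiset union of the inputs, for every sampled case."""
    rng = random.Random(seed)
    for _ in range(trials):
        a = sorted(rng.randint(-50, 50) for _ in range(rng.randint(0, 20)))
        b = sorted(rng.randint(-50, 50) for _ in range(rng.randint(0, 20)))
        if fn(a, b) != sorted(a + b):
            return False
    return True

assert holds_property(merge_sorted)
```

The randomness only affects which counterexamples get found and how fast; the property itself is the deterministic gate, which is exactly the role unit tests, types, and CI play for LLM output.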
Fourth, there's a false dichotomy: "LLMs are popular only because languages/libraries are bad." Languages are improving (e.g. Rust, TypeScript), yet LLMs still help because the real bottlenecks are API lookup, cross-repo reading, boilerplate, migrations, test writing, and refactors, the areas where retrieval and synthesis shine. These are complementary forces, not substitutes.
Finally, no constructive alternatives are offered. "Build better compilers/languages" is fine but modern teams already get value by pairing those with AI: spec-first prompts, test-gated edits, typed SDK scaffolds, auto-generated tests, CI-verified refactors, and repo-aware agents.
A much better way to think about AI coding and LLMs is that they aren’t compilers. They’re probabilistic code synthesizers guided by your constraints (types, tests, CI). Treat them like a junior pair-programmer wired into your repo, search, and toolchain. But not like a magical English compiler.
georgehotz|6 months ago
"LLM coding tools are search-based program synthesizers," in my mind this is what compilers are. I think most compilers do far too little search and opt for heuristics instead, often because they don't have an integrated runtime environment, but it's the same idea.
"Plenty of effective engineering tools are stochastic" - sure, but while a SAT solver might use randomness, and that might affect your time to solve, it doesn't change the correctness of the result. And something like a fuzzer is a test, and tests are always more of a best-effort thing. I haven't seen a fuzzer deployed in prod.
"Determinism comes from external specs and tests," my dream is a language where I can specify what it does instead of how it does it. Like the concept of Halide's schedule but more generic. The computer can spend its time figuring out the how. And I think this is the kind of tools AI will deliver. Maybe it'll be with LLMs, maybe it'll be something else, but the key is that you need a fairly rigorous spec and that spec itself is the programming. The spec can even be constraint based instead of needing to specify all behavior.
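A toy version of that "specify the what, search for the how" idea, in the spirit of enumerative program synthesis; everything below is illustrative, not how any real tool works:

```python
import itertools

# The spec: input/output examples only. The target behaviour happens to be
# f(x) = 2*x + 1, but the "programmer" never writes that expression down.
SPEC = [(0, 1), (1, 3), (2, 5), (10, 21)]

def synthesize(spec, bound=4):
    """Search the space of candidate programs a*x + b (small integer
    coefficients) for one satisfying every example in the spec."""
    consts = range(-bound, bound + 1)
    for a, b in itertools.product(consts, consts):
        if all(a * x + b == y for x, y in spec):
            return a, b  # the discovered "how"
    return None

assert synthesize(SPEC) == (2, 1)
```

Real systems search vastly larger program spaces with far smarter pruning, but the shape is the same: the spec is the programming, and the computer spends its time on the search.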
I'm not at all against AI, and if you are using it at a level described in this post, like a tool, aware of its strengths and limitations, I think it can be a great addition to a workflow. I'm against the idea that it's a magical English compiler, which is what I see in public discourse.
inimino|6 months ago
The problem of course is when people throw away the prompt and keep the code, like the code is somehow valuable. This would be like everyone checking in their binaries and throwing away their source code every time, while arguments rage on HN about whether compilers are useful. (Meanwhile, compiler vendors compete on their ability to disassemble and alter binaries in response to partial code snippets.)
The right way to do AI programming is: English defines the program, generated code is exactly as valuable as compiler output is, i.e. it's the actual artifact that does the thing, so in one sense it's the whole point, but iterating on it or studying it in detail is a waste of time, except occasionally when debugging. It's going to take a while, but eventually this will be the only way anybody writes code. (Note: I may be biased, as I've built an AI programming tool.)
If you can explain what needs to be done to a junior programmer in less time than it takes to do it yourself, you can benefit from AI. But, it does require totally rethinking the programming workflow and tooling.
mccoyb|6 months ago
intothemild|6 months ago
vmg12|6 months ago
> This isn’t to say “AI” technology won’t lead to some extremely good tools. But I argue this comes from increased amounts of search and optimization and patterns to crib from, not from any magic “the AI is doing the coding”
* I can tell claude code to crank out some basic crud api and it will crank it out in a minute saving me an hour or so.
* I need an implementation of an algorithm that has been coded a million times on github, I ask the AI to do it and it cranks out a correct working implementation.
If I only use the AI in its wheelhouse it works very well, otherwise it sucks.
KoolKat23|6 months ago
athrowaway3z|6 months ago
My tests with full trace level logging enabled can get very verbose. It takes serious time for a human to parse where in the 100 lines of text the relevant part is.
Just telling an AI: "Run the tests and identify the root cause" works well enough, that nowadays it is always my first step.
zkmon|6 months ago
Think about this. Before there were cars on roads, people were just as happy. Cars came, cities were redesigned for cars with buildings miles apart, and commuting miles became the new norm. You can no longer say cars are useless, because the context around them has changed to make cars a basic need.
AI does the same thing. It changes the context in which we work. Everyone expects you to use AI (and cars). It becomes a basic need, though a forced one.
To go further, hardly anything produced by science or technology is a basic need for humans. The context got twisted, making them basic needs. Tech solutions create the problems which they claim to solve. The problem did not exist before the solution came around. That's the core driving force of business.
amirhirsch|6 months ago
The specific time sucks measured in the study are addressable with improved technology like faster LLMs and improved methodology like running parallel agents—the study was done in March running Claude 3.7 and before Claude Code.
We should also value the perception of having worked 20% less, even if we actually spent more time. Time flies when you’re having fun!
lukaslalinsky|6 months ago
richardfulop|6 months ago
But for people who aren’t seasoned coders, these tools are incredibly valuable. I have some coding experience myself, but it’s never been my profession (I’m a visual artist). Now, I can accomplish in an afternoon what would otherwise take me days. Two months ago I left my job to work solo on my video game, and even though my budget is limited, I still make sure to keep Claude and ChatGPT. Also, being able to write something on my phone at 1 a.m. when I can’t sleep, send it to Codex, and then test it first thing in the morning at my computer feels pretty magical. It also takes away the worry of “what if this isn’t the best way to do it?” since refactoring parts of my codebase is now so easy. It helps not just with writing code, but also with removing the fear that might otherwise keep me from sitting down and getting the work done.
That said, I understand that much of the criticism is really aimed at the marketing hype around these tools and the broader “this will replace the engineers at your company” narrative.
giveita|6 months ago
He is right, however AI is still darn useful. He hints at why: patterns.
Writing a test suite for a new class when an existing one is in place is a breeze. It can even come up with tests you wouldn't have thought of, or would have been too time-pressed to check.
It also applies to non-test code too. If you have the structure it can knock a new one out.
You could have some lisp contraption that DRYs all the WETs so there is zero boilerplate. But in reality we are not crafting these perfect codebases; we make readable, low-magic, boilerplatey code on the whole in our jobs.
ChrisMarshallNY|6 months ago
Adderall is similar. It makes people feel a lot more productive, but research on its effectiveness[0] seems to show that, at best, we get only a mild improvement in productivity, and marked deterioration of cognitive abilities.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC6165228/
andrewchambers|6 months ago
I kind of consider them the same thing. Openpilot can drive really well on highways for hours on end when nothing interesting is happening. Claude code can do straightforward refactors, write boilerplate, do scaffolding, do automated git bisects with no input from me.
Neither one is a substitute for the 'driver'. Claude code is like the level 2 self driving of programming.
jrm4|6 months ago
It's just like "Are robots GOOD or BAD at building things?"
WHAT THINGS?
hereme888|6 months ago
Don't ask me how they work; they just do.
Sure the engineering may be abysmal, but it's good enough to work.
It only takes basic English to produce these results, plus complaining to the AI agent that "The GUI is ugly and overcrowded. Make it look better, and dark mode."
Want specs? "include a specs.md"
This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.
This is all possible because AI was trained on the outstanding work of CS engineers like y'all.
But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers. But in reality every human is a scientist and a hacker in the real world. The guy on a street corner in India came up with novel ways to make and sell his product, but never wrote a research paper on it. The guy on his fourth marriage noted a statistical correlation in the outcome when meeting women at a bar vs. at a church. The plant that grew in the crevice of a rock noted sunlight absorption was optimal at an angle of 78.3 degrees and grew in that direction.
ozim|6 months ago
ac29|6 months ago
This describes me pretty well too, though I do have a tiny bit of programming experience. I wrote maybe 5000 lines of code unassisted between 1995-2024. I didn't enjoy it for the most part, nor did I ever feel I was particularly good at it. On the more complex stuff I made, it might take several weeks of effort to produce a couple hundred lines of working code.
Flash forward to 2025, and I co-wrote (with LLMs) a genuinely useful piece of back-office code to automate a logistics problem I was previously solving via a manual process in a spreadsheet. It would hardly be difficult for most people here to write this program; it's just making some API calls, doing basic arithmetic, and displaying the results in a TUI. But I took a crack at it several times on my own and, unfortunately, between the API documentation being crap and my own lack of experience, I never got to the point where I could even make a single API call. LLMs got me over that hump and greatly assisted with writing the rest of the codebase, though I did write some of it by hand and worked through some debugging to solve issues in edge cases. Unlike OP, I do think I understand reasonably well what >90% of the code is doing.
> This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.
So yeah, to the people here saying the above sentiment is BS - it's not. For people who have never worked in programming or even in tech, these tools can be immensely useful.
neurostimulant|6 months ago
croes|6 months ago
If the app runs locally it doesn't matter; if it's connected to the net, it could be the seed for the next Mirai botnet.
suddenlybananas|6 months ago
goku12|6 months ago
> Don't ask me how they work; they just do. Sure the engineering may be abysmal, but it's good enough to work.
I've worked on several projects from a few different engineering disciplines. Let me tell you from that experience alone, this is a statement that most of us dread to hear. We had nothing but pain whenever someone said something similar. We live by the code that nothing good is an accident, but is always the result of deliberate care and effort. Be it quality, reliability, user experience, fault tolerance, etc. How can you be deliberate and ensure any of those if you don't understand even the abstractions that you're building?
(My first job was this principle applied to the extreme. The mission demanded it. Just documenting and recording designs, tests, versioning, failures, corrections and even meetings and decisions was a career in itself.)
Am I wrong about this when it comes to AI? I could be. I concede that I can't keep up with the new trends to assess all of them. It would be foolish to say that I'm always right. But my experience with AI tools hasn't been great so far. It's far easier to delegate the work to sufficiently mentored junior staff. Perhaps I'm doing something wrong. I don't know. But that statement I said earlier - it's a fundamental guiding principle in our professional lives. I find it hard to just drop it like that.
> But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers.
Almost every single quality professional in my generation - especially the legends - started those pursuits in their childhood under self-motivation (not as part of school curriculum even). You learn these things by pushing your boundary a little bit every day. You are a novice one day. You are the master on another. Are you absolutely pathetic at dancing? Try ten minutes a day. See what happens in ten years. Meanwhile, kids don't even care about others' opinion while learning. Nobody is gatekeeping you on account of your qualifications.
What they're challenging are the assumptions that vibe/AI coders seem to hold but that don't agree with their intuition. They are old-fashioned developers. But their intuitions are honed over decades, and they tend to be surprisingly accurate for reputed developers like Geohot. (There are numerous hyped-up engineering projects out there that made me regret ignoring my own intuition!) It's even more valid if they can articulate their intuition into reasons. This is a very formal activity, even if they express it as blog posts. Geohot clearly articulates why he thinks that AI copilots are nothing more than glorified compilers with a very leaky specification language. It means that you need to be very careful with your prompts, on top of tracking the interfaces, abstractions and interactions that the AI currently doesn't do at all for you. Perhaps it works for you at the scale you're trying. But lessons like the Therac-25 horror story [1] always remind us how badly things can go wrong. I just don't want to put in that extra effort and waste my time reviewing AI-generated code. I want to review code from a person whom I can ask for clarifications and give critiques and feedback that they can follow later.
[1] https://thedailywtf.com/articles/the-therac-25-incident
dmh2000|6 months ago
I'm 72, a dev for 40 years. I've lost a step or two. It's harder to buckle down and focus, but using AI tools has enabled me to keep building stuff. I can spec a project, have an agent build it, then make sure it works. I just code for fun anyway.
viraptor|6 months ago
There's a lot of complaining about current compilers / languages / codebases in similar posts, but barely any ideas for how to make them better. It doesn't seem surprising that people go for the easier problem (make the current process simpler with LLMs) rather than for the harder one (change the whole programming landscape to something new and actually make it better).
Eikon|6 months ago
Even though I don’t buy that LLMs are going to replace developers, and I quite agree with what is said, this is more of a critique of LLMs as English-to-code translators. LLMs are very useful for many other things.
Researching concepts, for one, has become so much easier, especially for things where you don’t know anything yet and would have a hard time to even formulate a search engine query.
mindwok|6 months ago
These articles are beyond the point of exhausting. Guys, just go use the tools and see if you like them and feel more capable with them. If you do, great; if you don’t, then stop.
ur-whale|6 months ago
I do agree with many points in the article, but not with the last part, namely that coding with AI assist makes you slower.
Personal experience (data point count = 1), as a somewhat seasoned dev (>30 yrs of coding): it makes me WAY faster. I confess to not reading the code produced at each iteration other than skimming through it for obvious architectural code smells, but I do read the final version line by line and make a few changes until I'm happy.
Long story short: things that would take me a week to put together now take a couple of hours. The vast bulk of the time saved comes from not having to identify the libraries I need, and not having to rummage through API documentation.
sltr|6 months ago
By the author's implied definition of compiler, a human is also a compiler. (Coffee in, code out, so the saying goes.)
But code is distinct from design, and unlike compilers, humans are synthesizers of design. LLMs let you spend more time as system architect instead of code monkey.
runningmike|6 months ago
Great short read. But about this: “It’s why the world wasted $10B+ on self driving car companies that obviously made no sense.”
Not everything should make sense. Playing, trying and failing is crucial to making our world nicer. Not overthinking is key; see later what works and why.