reb | 5 months ago
The plan-build-test-reflect loop is equally important when using an LLM to generate code, as anyone who's seriously used the tech knows: if you yolo your way through a build without thought, it will collapse in on itself quickly. But if you DO apply that loop, you get to spend much more time on the part I personally enjoy, architecting the build and testing the resultant experience.
> While the LLMs get to blast through all the fun, easy work at lightning speed, we are then left with all the thankless tasks
This is, to me, the root of one disagreement I see playing out in every industry where AI has achieved any level of mastery. There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work. If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting. But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.
PessimalDecimal|5 months ago
The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
A software engineer's primary job isn't producing code, but producing a functional software system. Most important to that is the extremely-hard-to-convey "mental model" of how the code works, and expertise in the domain it works in. Code is a derived asset of this mental model. And you will never know code as well as a reader as you would have as its author, for anything larger than a very small project.
There are other consequences of not building this mental model of a piece of software. Reasoning at the level of syntax is proving to have limits that LLM-based coding agents are having trouble scaling beyond.
danpat|5 months ago
This feels very true - but also consider how much code exists for which many of the current maintainers were not involved in the original writing.
There are many anecdotal rules out there about how much time is spent reading code vs writing. If you consider the industry as a whole, it seems to me that the introduction of generative code-writing tools is actually not moving the needle as far as people are claiming.
We _already_ live in a world where most of us spend much of our time reading and trying to comprehend code written by others from the past.
What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?
mattlutze|5 months ago
In any of my teams with moderate to significant code bases, we've always had to lean very hard into code comments and documentation, because a developer will forget in a few months the fine details of what they've previously built. And further, any org with turnover needs to have someone new come in and be able to understand what's there.
I don't think I've met a developer that keeps all of the architecture and design deeply in their mind at all times. We all often enough need to go walk back through and rediscover what we have.
Which is to say... if the LLM generator was instead a colleague or neighboring team, you'd still need to keep up with them. If you can adapt those habits to the generated code, then it doesn't seem to be a big leap.
jstummbillig|5 months ago
Why? Code has always been the artifact. Thinking about and understanding the domain clearly and solving problems is where the intrinsic value is at (but I'd suspect that in the future this, too, will go away).
noosphr|5 months ago
You can describe what the code should do with natural language.
I've found that literate programming with agent calls (write the tests first, then the code, then have the human refine the description of the code, and go back to step one) is surprisingly good at this. One of these days I'll get around to writing an Emacs mode to automate it, because right now it's yanking and killing between nearly a dozen windows.
Of course this is much slower than regular development but you end up with world class documentation and understanding of the code base.
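For the curious, the loop is roughly this (a minimal sketch; `ask_llm` and `run_tests` are placeholders for your agent call and test runner, not real APIs):

```python
# Sketch of the literate-programming loop described above.
# `ask_llm` and `run_tests` are hypothetical stand-ins, not real APIs.

def literate_cycle(description, ask_llm, run_tests):
    """One pass: description -> tests -> code, retrying until tests pass."""
    tests = ask_llm(f"Write tests for: {description}")
    code = ask_llm(f"Write code that passes these tests:\n{tests}")
    while not run_tests(code, tests):
        code = ask_llm(f"Fix this code so the tests pass:\n{code}\n{tests}")
    # A human then refines `description`, and the cycle starts again.
    return tests, code
```

The description, tests, and code all live in one document, which is where the "world class documentation" comes from.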
jay_kyburz|5 months ago
The role of the programmer would then be to test whether the rules are being applied correctly. If not, there are no bugs to fix; you simply clarify the business rules and ask for a new program.
I like to imagine what it must be like for a non-technical business owner who employs programmers today. There is a meeting where a process or outcome is described, and a few weeks / months / years later a program is delivered. The only way to know if it does what was requested is to poke at it a bit and see if it works. The business owner has no mental model of the code and can't go in and fix bugs.
update: I'm not suggesting I believe AI is anywhere near being this capable.
enraged_camel|5 months ago
All code is temporary and should be treated as ephemeral. Even if it lives for a long time, at the end of the day what really matters is data. Data is what helps you develop the type of deep understanding and expertise of the domain that is needed to produce high quality software.
In most problem domains, if you understand the data and how it is modeled, the need to be on top of how every single line of code works and the nitty-gritty of how things are wired together largely disappears. This is the thought behind the idiom “Don’t tell me what the code says—show me the data, and I’ll tell you what the code does.”
It is therefore crucial to start every AI-driven development effort with data modeling, and have lots of long conversations with AI to make sure you learn the domain well and have all your questions answered. In most cases, the rest is mostly just busywork, and handing it off to AI is how people achieve the type of productivity gains you read about.
Of course, that's not to say you should blindly accept everything the AI generates. Reading the code and asking the AI questions is still important. But the idea that the only way to develop an understanding of the problem is to write the code yourself is no longer true. In fact, it was never true to begin with.
posix86|5 months ago
And as we mapped out this landscape, haven't there been countless situations where things felt dumb and annoying, and then in some situations they became useful, while in others they remained dumb? Things you thought were actively making you lose brain cells as you did them, because you were doing them wrong?
Or are you going to claim that every hurdle you cross, every roadblock you encounter, every annoyance you overcome has pedagogical value for your career? There are so many dumb things out there. And what's more, there are so many things that appear dumb at first and then, when used right, become very powerful. AI is that: something you can use to shoot yourself in the foot if used wrong, but if used right, it can be incredibly powerful. Just like C++, Linux, CORS, npm, TCP, whatever. Everything, basically.
halfadot|5 months ago
No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side. This lack of will to learn will not change the outcomes for you regardless of whether you're using an LLM. You can spend as much time as you want asking the LLM for in-depth explanations and examples to test your understanding.
So many of the criticisms of coding with LLMs I've seen really do sound like they're coming from people who started with a pre-existing bias, fiddled with it for a short bit (or worse, never actually tried it at all), and assumed their limited experience is the be-all end-all of the subject. Either that, or it's a typical skill issue.
_fat_santa|5 months ago
Here's mine: I use Cline occasionally to help me code, but more and more I find myself just coding by hand. The reason is pretty simple: with these AI tools you, for the most part, replace writing code with writing a prompt.
I look at it like this: if writing the prompt plus the inference time is less than what it would take me to write the code by hand, I usually go the AI route. But this is usually for refactoring tasks, where I consider the main bottleneck to be the speed at which my fingers can type.
For virtually all other problems it goes something like this: I can do task X in 10 minutes if I code it manually, or I can prompt the AI to do it, and by the time I finish crafting the prompt and executing, it takes me about 8 minutes. Yes, that's a savings of 2 minutes, and that's all fine and good assuming the AI didn't make a mistake. If I have to go back and re-prompt or manually fix something, then all of a sudden the task took 10-12 minutes with AI. The best case scenario is I just spent some AI credits for zero time savings, and the worst case is I spent AI credits AND the task was slower in the end.
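As a back-of-the-envelope sketch of that calculation (the numbers are just the illustrative ones above):

```python
# Expected time for an AI-assisted task: the initial attempt plus the
# expected cost of rework when the first prompt fails. Numbers illustrative.

def expected_ai_time(attempt_min, fail_prob, rework_min):
    """Expected total minutes, given a probability the first attempt fails."""
    return attempt_min + fail_prob * rework_min

manual_min = 10
best_case = expected_ai_time(8, 0.0, 4)  # AI nails it first try: 8 minutes
coin_flip = expected_ai_time(8, 0.5, 4)  # 50% chance of a 4-minute fix: 10 minutes
# The 2-minute saving evaporates once the failure rate is moderate.
```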
With all sorts of tasks I now find myself making this calculation and for the most part, I find that doing it by hand is just the "safer" option, both in terms of code output but also in terms of time spent on the task.
didibus|5 months ago
I'm convinced I spend more time typing and end up typing more letters and words when AI coding than when not.
My hands are hurting me more from the extra typing I have to do now lol.
I'm actually annoyed they haven't integrated their voice-to-text models into their coding agents yet.
rapind|5 months ago
That being said, these agents may still just YOLO and ignore your instructions on occasion, which can be a time suck, so sometimes I still get my hands dirty too :)
bccdee|5 months ago
I don't think anyone's saying that about technology in general. Many safety-oriented technologies force people to be more careful, not less. The argument is that this technology leads people to be careless.
Personally, my concerns don't have much to do with "the part of coding I enjoy." I enjoy architecture more than rote typing, and if I had a direct way to impose my intent upon code, I'd use it. The trouble is that chatbot interfaces are an indirect and imperfect vector for intent, and when I've used them for high-level code construction, I find my line-by-line understanding of the code quickly slips away from the mental model I'm working with, leaving me with unstable foundations.
I could slow down and review it line-by-line, picking all the nits, but that moves against the grain of the tool. The giddy "10x" feeling of AI-assisted coding encourages slippage between granular implementation and high-level understanding. In fact, thinking less about the concrete elements of your implementation is the whole advantage touted by advocates of chatbot coding workflows. But this gap in understanding causes problems down the line.
Good automation behaves in extremely consistent and predictable ways, such that we only need to understand the high-level invariants before focusing our attention elsewhere. With good automation, safety and correctness are the path of least resistance.
Chatbot codegen draws your attention away without providing those guarantees, demanding best practices that encourage manually checking everything. Safety and correctness are the path of most resistance.
godelski|5 months ago
We can't optimize for an objective truth when that objective truth doesn't exist. So while we do our best to align our models, they simultaneously optimize their ability to deceive us. There's little to no training in that loop where outputs are deeply scrutinized, because we can't scale that type of evaluation. We end up rewarding models that are incorrect in their output.
We don't optimize for correctness, we optimize for the appearance of correctness. We mustn't confuse the two.
The result is: when LLMs make errors, those errors are difficult for humans to detect.
This results in a fundamentally dangerous tool, does it not? Good tools, when they error or fail, do so safely and loudly. This one instead fails silently. That doesn't mean you shouldn't use the tool, but that you need to do so with an abundance of caution.
Actually, the big problem I have with coding with LLMs is that it increases my cognitive load rather than decreasing it. Being overworked results in carelessness. Who among us does not make more mistakes when they are tired or hungry? That's the opposite of lazy, so hopefully that answers the OP.
lelanthran|5 months ago
This argument is wearing a little thin at this point. I see it multiples times a day, rephrased a little bit.
The response, "How well do you think your thinking will go if you had not spent years doing the 'practice' part?", is always followed by either silence or a non sequitur.
So, sure, keep focusing on the 'thinking' part, but your thinking will get more and more shallow without sufficient 'doing'
kristianbrigman|5 months ago
My assembly has definitely rotted and I doubt I could do it again without some refreshing but it's been replaced with other higher-level skills, some which are general like using correct data structures and algorithms, and others that are more specific like knowing some pandas magic and React Flow basics.
I expect this iteration I'll get a lot better at systems design, UML, algorithm development, and other things that are slightly higher level. And probably reverse-engineering as well :) The computer engineering space is still vast IMHO....
johnfn|5 months ago
Or perhaps you’re asking how people will become good at delegation without doing? I don’t know — have you been “doing” multiple years of assembly? If not, how are you any good at Python (or whatever language you currently use)? Probably you’d say you don’t need to think about assembly because it has been abstracted away from you. I think AI operates similarly, by changing the level of abstraction you can think at.
sciencejerk|5 months ago
Consider leaving a disclaimer next time. Seems like you have a vested interest in the current half-baked generation of AI products succeeding
moffkalast|5 months ago
As someone writing lots of research code, I do get caught being careless on occasion since none of it needs to work beyond a proof of concept, but overall being able to just write out a spec and test an idea out in minutes instead of hours or days has probably made a lot of things exist that I'd otherwise never be arsed to bother with. LLMs have improved enough in the past year that I can easily 0-shot lots of ad-hoc visualization stuff or adapters or simple simulations, filters, etc. that work on the first try and with probably fewer bugs than I'd include in the first version myself. Saves me actual days and probably a carpal tunnel operation in the future.
visarga|5 months ago
This same error in thinking happens in relation to AI agents too. Even if the agent is perfect (not really possible), if other links in the chain are slower, the overall speed of the loop still does not increase. To increase productivity with AI you need to think of the complete loop, and reorganize and optimize every link in the chain. In other words, a business has to redesign itself for AI, not just apply AI on top.
Same is true for coding with AI: you can't just do your old style of manual coding but with AI, you need a new style of work. Maybe you start with constraint design, requirements, and tests, and then you let the agent loose and don't check the code; you need to automate that part with comprehensive automated testing. The LLM is like a blind force, and you need to channel it to make it useful. LLM + constraints == accountable LLM; LLM without constraints == unaccountable.
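One minimal shape for that channeling (a sketch; `generate_patch` is a placeholder for whatever agent call you use, not a real API):

```python
# Gate agent output behind human-written constraints: the agent's work is
# only accepted once every check passes. `generate_patch` is hypothetical.

def accountable_loop(task, generate_patch, constraints, max_tries=3):
    """Retry the agent until its output satisfies every constraint check."""
    for _ in range(max_tries):
        patch = generate_patch(task)
        if all(check(patch) for check in constraints):
            return patch
    raise RuntimeError("agent could not satisfy the constraints")
```

The constraints (tests, linters, type checkers, invariant checks) are the part the human designs and never delegates.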
overfeed|5 months ago
It does not! If you're using interactive IDE AI, you spend your time keeping the AI on the rails and reminding it what the original task is. If you're using agents, then you're delegating all the mid-level/tactical thinking, and perhaps even the planning, and you're left with the task of writing requirements granular enough for an intern to tackle; but this hews closer to "Business Analyst" than "Software Engineer".
HiPhish|5 months ago
I think this might simply be how the human brain works. Take autonomous driving as an example: while the car drives on its own the human driver is supposed to be alert and step in if needed. But does that work? Or will the driver's mind wander off because the car has been driving properly for the last half hour? My gut feeling is that it's inevitable that we'll eventually just shut out everything that goes smoothly and by the time it doesn't it might be too late.
We are not that different from our ancestors who used to roam the forests, trying to eat before they got eaten. In such an environment there is constantly something going on, some critters crawling, some leaves rustling, some water flowing. It would drive us crazy if we could not shut out all this regular noise. It's only when an irregularity appears that our attention must spring into action. When the leaves rustle differently than they are supposed to, there is a good chance that there is some prey or a predator to be found. This mechanism only works if we are alert. The sounds of the forest are never exactly the same, so there is constant stimulation to keep us on our toes. But if you are relaxing in your shelter the tension is gone.
My fear is that AI is too good, to the point where it makes us feel like being in our shelter rather than in the forest.
halfcat|5 months ago
Yes. Productivity accelerates at an exponential rate, right up until it drives off a cliff (figuratively or literally).
rhetocj23|5 months ago
I view the story of LLMs akin to the Concorde. Something catastrophic will happen that will be too big to ignore and all trust will implode.
didibus|5 months ago
I think this depends. I prefer the thinking bit, but it's quite difficult to think without the act of coding.
It's how white boarding or writing can help you think. Being in the code helps me think, allows me to experiment, uncover new learnings, and evolve my thinking in the process.
Though maybe we're talking about thinking of different things? Are you thinking in the sense of what a PM thinks about: user features, user behavior, user edge cases, user metrics? Or do you mean thinking about what a developer thinks about: code clarity, code performance, code security, code modularization and ability to evolve, code testability, innovative algorithms, innovative data structures, etc.?
nativeit|5 months ago
But the result of that thinking would hardly ever align neatly with whatever an LLM is doing. The only time it wouldn’t be working against me would be drafting boilerplate and scaffolding project repos, which I could already automate with more prosaic (and infinitely more efficient) solutions.
Even if it gets most of what I had in mind correct, the context switching between “creative thinking” and “corrective thinking” would be ruinous to my workflow.
I think the best case scenario in this industry will be workers getting empowered to use the tools that they feel work best for their approach, but the current mindset that AI is going to replace entire positions, and that individual devs should be 10x-ing their productivity is both short-sighted and counterproductive in my opinion.
strogonoff|5 months ago
— OSS exploded on the promise that software you voluntarily contributed to remains available to benefit the public, and that a large corporation cannot simply take your work tomorrow and make it part of their product, never contributing anything back. Commercially operated LLMs threaten OSS both by laundering code and by overwhelming maintainers with massive, automatically produced patches and merge requests that are sometimes never read by a human.
— Being able to claim that any creative work is merely a product of an LLM (which is a reality now for any new artist, copywriter, etc.) removes a large motivator for humans to do fully original creative work and is detrimental to creativity and innovation.
— The ends don’t justify the means, as a general philosophical argument. Large-scale IP theft had been instrumental at the beginning of this new wave of applied ML—and it is essentially piracy, except done by the powerful and wealthy against the rest of us, and for profit rather than entertainment. (They certainly had the money to license swaths of original works for training, yet they chose to scrape and abuse the legal ambiguity due to requisite laws not yet existing.)
— The plain old practical “it will drive more and more people out of jobs”.
— Getting everybody used to the idea that LLMs now mediate access to information increases inequality (making those in control of this tech and their investors richer and more influential, while pushing the rest—most of whom are victims of the aforementioned reverse piracy—down the wealth scale and often out of jobs) more than it levels the playing field.
— Diluting what humanity is. Behaving like a human is how we manifest our humanness to others, and how we deserve humane treatment from them; after entities that walk and talk exactly like a human would, yet which we can be completely inhumane to, become commonplace, I expect over time this treatment will carry over to how humans treat each other—the differentiator has been eliminated.
— It is becoming infeasible to operate open online communities due to bot traffic that now dwarfs human traffic. (Like much of the above, this is not a point against LLMs as technology, but rather against the way they have been trained and operated by large corporate/national entities—if an ordinary person wanted to self-host their own, they would simply not have the technical capability to cause disruption at this scale.)
This is just what I could recall off the top of my head.
m0rde|5 months ago
I'm curious for more thoughts on "will drive more and more people out of jobs". Isn't this the same for most advances in technology (e.g., the steam engine, computers, automated toll plazas, etc.)? In some ways, it's motivation for making progress: you get rid of mundane jobs. The dream is that you free those people to do something more meaningful, but I'm not going to be that blindly optimistic :) Still, I feel like "it's going to take jobs" is the weakest of the arguments here.
benoau|5 months ago
Isn't "AI coding" trained almost entirely on open source code and published documentation?
LittleCloud|5 months ago
I don't know... that seems like a false dichotomy to me. I think I could enjoy both but it depends on what kind of work. I did start using AI for one project recently: I do most of the thinking and planning, and for things that are enjoyable to implement I still write the majority of the code.
But for tests, build system integration, ...? Well, that's usually very repetitive, low-entropy code that we've all seen a thousand times before. Usually not intellectually interesting, so why not outsource it to the AI?
And even for the planning part of a project there can be a lot of grunt work. Haven't you had the frustrating experience of attempting a refactoring and finding out midway that it doesn't work because of some edge case? Sometimes the edge case is interesting and points to some deeper issue in the design, but sometimes not. Either way, it sure would be nice to get a hint beforehand. Although in my experience AIs aren't at a stage where they can reason about such issues upfront (no surprise, since it's difficult for humans too), it of course helps if your software has an oracle for whether the attempted changes are correct, i.e. it is statically typed and/or has thorough tests.
bluefirebrand|5 months ago
Because it still needs to be correct, and AI still is not producing correct code
mhitza|5 months ago
My strong belief after almost twenty years of professional software development is that both us and LLMs should be following the order: build, test, reflect, plan, build.
Writing out the implementation is the process of materializing the requirements, and learning the domain. Once the first version is out, you can understand the limits and boundaries of the problem and then you can plan the production system.
This is very much in line with Fred Brooks' "build one to throw away" (written ~40 years ago in "The Mythical Man-Month"; while often quoted, if you've never read his book I urge you to do so, as it's both entertaining and enlightening about our software industry), startup culture (if you remove the "move fast, break things" mantra), and governmental pilot programs (the original "minimum viable").
bgwalter|5 months ago
Using "AI" is just like speed-reading a math book without ever doing a single exercise. The proponents rarely have any serious public code bases.
rhetocj23|5 months ago
And this should not be a surprise at all. Humans are optimisers of truth, NOT maximisers. There is a subtle and nuanced difference. Very few actually spend their entire existence being maximisers; it's pretty exhausting to be of that kind.
Optimising = we look for what feels right or surpasses some threshold of "looks about right". Maximising = we think deeply, logically reason our way to what is right, and conduct tests to ensure it is so.
Now if you have the discipline to choose when to shift between the two modes this can work. Most people do not though. And therein lies the danger.
belter|5 months ago
You described the current AI Bubble.
AnotherGoodName|5 months ago
That's not to say that LLMs are as good as some of the more outrageous claims. You do still need to do a lot of work to implement code. But if you're not finding value at all, it honestly reflects badly on you and your ability to use tools.
The craziest thing is I see the above type of comment on LinkedIn regularly, which is jaw-dropping. Prospective hiring managers will read it and think, "Wow, you think advertising a lack of knowledge is helpful to your career?" Big tech companies are literally firing people with attitudes like the above. There's no room for people who refuse to adapt.
I put absolute LLM negativity right up there with comments like "I never use a debugger and just use printf statements". To me it just screams that you never learned the tool.
abustamam|5 months ago
Yeah I'm actually quite surprised that so many people are just telling AI to do X without actually developing a maintainable plan to do so first. It's no wonder that so many people are anti-vibe-coding — it's because their exposure to vibe coding is just telling Replit or Claude Code to do X.
I still do most of my development in my head, but I have a go-to prompt I ask Claude code when I'm stuck: "without writing any code, and maintaining existing patterns, tell me how to do X." it'd spit out some stuff, I'd converse with it to make sure it is a feasible solution that would work long term, then I tell it to execute the plan. But the process still starts in my head, not with a prompt.
stein1946|5 months ago
Which industries are those? What does that mastery look like?
> There's a divide between people ...
No, there is not. If one is not willing to figure out a couple of ffmpeg flags, comb through k8s controller code to see what is possible, and fix that booting error in their VMs, then failure in "mental experiences" is certain.
The most successful people I have met in this profession are the ones who absolutely do not tolerate magic, and need to know what happens from the moment they press ON on their machine till the moment they turn it OFF again.
jmull|5 months ago
Pretty clearly that’s not the divide anyone’s talking about, right?
Your argument should maybe be something about thinking about the details vs thinking about the higher level. (If you were to make that argument, my response would be: both are valuable and important. You can only go so far working at one level. There are certainly problems that can be solved at one level, but also ones that can’t.)
wat10000|5 months ago
My experience with low level systems programming is that it’s like working with a developer who is tremendously enthusiastic but has little skill and little understanding of what they do or don’t understand. Time I would have spent writing code is replaced by time spent picking through code that looks superficially good but is often missing key concepts. That may count as “thinking” but I wouldn’t categorize it as the good kind.
Where it excels for me is as a superpowered search (asking it to find places where we play a particular bit-packing game with a particular type of pointer works great and saves a lot of time) and for writing one-off helper scripts. I haven’t found it useful for writing code I’m going to ship, but for stuff that won’t ship it can be a big help.
It’s kind of like an excavator. If you need to move a bunch of dirt from A to B then it’s great. If you need to move a small amount of dirt around buried power lines and water mains, it’s going to cause more trouble than it’s worth.
Balinares|5 months ago
It's also been my experience that AI will speed up the easy / menial stuff. But that's just not the stuff that takes up most of my time in the first place.
chamomeal|5 months ago
I actually end up using LLMs in the planning phase more often than the writing phase. Cursor is super good at finding relevant bits of code in unfamiliar projects, showing me what kind of conventions and libraries are being used, etc.
ChrisMarshallNY|5 months ago
New-fangled compiled languages...
Or who use modern, strictly-typed languages.
New-fangled type-safe languages...
As someone that has been coding since it was wiring up NAND gates on a circuit board, I'm all for the new ways, but there will definitely be a lot of mistakes, jargon, and blind alleys; just like every other big advancement.
martin-t|5 months ago
Imagine an AI as smart as some of the smartest humans, able to do everything they intellectually do but much faster, cheaper, 24/7 and in parallel.
Why would you spend any time thinking? All you'll be doing is the things an AI can't do: 1) feeding it input from the real world and 2) trying out its output in the real world.
1) Could be finding customers, asking them to describe their problem, arranging meetings, driving to the customer's factory to measure stuff and take photos for the AI, etc.
2) Could be assembling the prototype, soldering, driving it to the customer's factory, signing off the invoice, etc.
None of that is what I as a programmer / engineer enjoy.
If actual human-level AI arrives, it'll do everything from concept to troubleshooting, except the parts where it needs presence in the physical world and human dexterity.
If actual human-level AI arrives, we'll become interfaces.
gspr|5 months ago
Why would I want a fuzzy, vague, imprecise, up-to-interpretation programming language? I already have to struggle with that in documentation, specifications, peers and – of course – myself. Why would I take the one precise component and make it suffer from the same?
This contrasts of course with tasks such as search, where I'm not quite able to precisely express what I want. Here I find LLMs to be a fantastic advance. Same for e.g. operations between imprecise domains, like between natural languages.
jaredklewis|5 months ago
Does this divide between "physical" and "mental" exist? Programming languages are formal languages that allow you to precisely and unambiguously express your ideas. I would say that "fiddling" with the code (as you say) is a kind of mental activity.
If there is actually someone out there that only dislikes AI coding assistants because they enjoy the physical act of typing and now have to do less of it (I have not seen this blog post yet), then I might understand your point.
latexr|5 months ago
Are you genuinely saying you never saw a critique of AI on environmental impact, or how it amplifies biases, or how it widens the economic gap, or how it further concentrates power in the hands of a few, or how it facilitates the dispersion of misinformation and surveillance, directly helping despots erode civil liberties? Or, or, or…
You don’t have to agree with any of those. You don’t even have to understand them. But to imply anti-AI arguments “hinge on the idea that technology forces people to be lazy/careless/thoughtless” is at best misinformed.
Go grab whatever your favourite LLM is and type “critiques of AI”. You’ll get your takes.
jayd16|5 months ago
The energy cost argument is nonsensical unless you pin down a ratio of value out to value in, and some would argue the output is highly valuable and the input cost is already priced in.
I don't know if it will end up being a concentrated power. It seems like local/open LLMs will still be in the same ballpark. Despite the absurd amounts of money spent so far the moats don't seem that deep.
Baking in bias is a huge problem.
The genie is out of the bottle as far as people using it for bad. Your own usage won't change that.
kiitos|5 months ago
What about if the "knowing/understanding" bit is your favorite part?
swiftcoder|5 months ago
What makes you regard this as an anti-AI take? To my mind, this is a very pro-AI take
analog8374|5 months ago
AI can only recycle the past.
grim_io|5 months ago
Since we don't know what else might already exist in the world without digging very deep, we fool ourselves into thinking that we are doing something very original and unique.
Vegenoid|5 months ago
> Just as tech leads don't just write code but set practices for the team, engineers now need to set practices for AI agents. That means bringing AI into every stage of the lifecycle
The technology doesn't force people to be careless, but it does make it very easy to be careless, without having to pay the costs of that carelessness until later.
resonious|5 months ago
Though I will also dogpile on the "thankless tasks" remark and say that the stuff that I have AI blast through is very thankless. I do not enjoy changing 20 different files to account for a change in struct definition.
raincole|5 months ago
I honestly don't know how one can use Claude Code (or other AI agents) in a "code first, think later" manner.
pg3uk|5 months ago
A dev that spends an undue amount of time fiddling with knobs and configs probably sucks. Their mind isn't on the problem that needs to be solved.
croes|5 months ago
It’s not force but simply human nature. We invent tools to do less. That’s the whole point of tools.
giantg2|5 months ago
I'm not impressed by AI because it generates slop. Copilot can't write a thorough working test suite to save its life. I think we need a design and test paradigm to properly communicate with AI for it to build great software.
benterix|5 months ago
Not forces, encourages.
nimithryn|5 months ago
You can do this in Python, or you can do this in English, but at the end of the day the engineer must input the same information to get the same behavior. Maybe LLMs make this a bit more efficient, but even in English it is extremely hard to give an exact specification without ambiguity (maybe even harder than in Python in some cases).
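To illustrate the point (my own toy example, not the commenter's): even a one-line English spec like "sort the users by name" is silent about decisions that any code version is forced to make explicit:

```python
# English spec: "sort the users by name" -- says nothing about case
# sensitivity or how ties are broken. The code must decide both:
users = ["bob", "Alice", "alice", "Bob"]

# Explicit choices: case-insensitive key; Python's sort is stable,
# so equal keys keep their original relative order.
by_name = sorted(users, key=str.lower)
print(by_name)  # ['Alice', 'alice', 'bob', 'Bob']
```

A prompt in English leaves those choices to the LLM's judgment; the Python version pins them down whether you thought about them or not.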
EGreg|5 months ago
1) Bad actors using AI at scale to do bad things
2) AI just commodifying everything and making humans into zoo animals
specproc|5 months ago
I'm on a small personal project with it intentionally off, and I honestly feel I'm moving through it faster and certainly having a better time. I also have a much better feel for the code.
These are all just vibes, in the parlance of our times, but it's making me question why I'm bothering with LLM assisted coding.
Velocity is rarely the thing in my niche, and I'm not convinced babysitting an agent is all in all faster. It's certainly a lot less enjoyable, and that matters, right?
otabdeveloper4|5 months ago
AI isn't a technology. (No more than asking your classmate to do your homework for you is a "technology".)
Please don't conflate AI with programming tools. AI isn't a tool, it is an oracle. There's a huge fundamental gap here that cannot be bridged.
agentcoops|5 months ago
Yet every time that someone here earnestly testifies to whatever slight but real use they’ve found of AI, an army of commentators appears ready to gaslight them into doubting themselves, always citing that study meant to have proven that any apparent usefulness of AI is an illusion.
At this point, even just considering the domain of programming, there’s more than enough testimony to the contrary. This doesn’t say anything about whether there’s an AI bubble or overhype or anything about its social function or future. But, as you note, it means these cardboard cutout critiques of AI need to at least start from where we are.
blehn|5 months ago
Eh, physical and mental isn't the divide — it's more like people who enjoy code itself as a craft and people who simply see it as a means to an end (the application). Much like a writer might labor over their prose (the code) while telling a story (the application). Writing code is far more than the physical act of typing to those people.
martin-t|5 months ago
Here's a couple points which are related to each other:
1) LLMs are statistical models of text (code being text). They can only exist because huge for-profit companies ingested a lot of code under proprietary, permissive and copyleft licenses, most of which at the very least require attribution, some reserve rights of the authors, some give extra rights to users.
LLM training mixes and repurposes the work of human authors in a way which gives them plausible deniability against any single author, yet the output is clearly only possible because of the input. If you trained an LLM on only google's source code, you'd be sued by google and it would almost certainly reproduce snippets which can be traced back to google's code. But by taking way, way more input data, the blender cuts them into such fine pieces that the source is undetectable, yet the output is clearly still based on the labor of other people who have not been paid.
Hell, GPT-3 still produced verbatim snippets of the fast inverse square root and probably other well-known but licensed code. And GitHub has a checkbox which scans for verbatim matches so you don't accidentally infringe copyright by using Copilot in a way which is provable. In other words, they take extra care to make the infringement unprovable.
If I "write a book" by taking an existing book but replacing every word with a synonym, it's still plagiarism and copyright infringement. It doesn't matter if the mechanical transformation is way more sophisticated, the same rules should apply.
2) There's no opt out. I stopped writing open source over a year ago when it became clear all my code is unpaid labor for people who are much richer than me and are becoming richer at a pace I can't match through productive work because they own assets which give them passive income. And there's no license I can apply which will stop this. I am not alone. As someone said, "Open-Source has turned into a form of unpaid internship"[0]. It might lead to a complete death of open source because nobody will want to see their work fed into a money printing machine (subscription based LLM services) and get nothing in return for their work.
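For reference, the inverse square root mentioned above is the famous Quake III "fast inverse square root", which is GPL-2.0 licensed despite being endlessly copied. A Python transcription of the widely circulated C snippet (my rendering, for illustration only):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximates 1/sqrt(x), after the well-known Quake III trick."""
    # Reinterpret the float32 bits of x as an unsigned 32-bit integer.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    # The famous "magic constant" bit hack yields a first approximation.
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    # One Newton-Raphson step refines it to within ~0.2% relative error.
    return y * (1.5 - 0.5 * x * y * y)

print(fast_inv_sqrt(4.0))  # roughly 0.5
```

The point isn't the algorithm; it's that a snippet this distinctive, down to the magic constant, is trivially recognizable when a model emits it.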
> But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.
I see quite the opposite. For me, what makes programming fun is deeply understanding a problem and coming up with a correct, clear, elegant solution. But most problems a working programmer has are just variations of what other programmers have had. The remaining work is prompting the LLMs in the right way so that they produce this (describing the problem instead of thinking about its solutions) and debugging the bugs the LLMs generated.
A colleague vibe coded a small utility. It's useful but it's broken in so many ways: the UI falls apart when some text gets too long, labels are slightly incorrect and misleading, some fields handle decimal numbers in weird ways, etc. With manually written code, a programmer would get these right the first time. Potential bugs become obvious as you're writing the code because you are thinking about it, but they do not occur to someone prompting an LLM. Now I can either fix them manually, which is time consuming and boring, or I can try prompting an LLM about every single one, which is less time consuming but more boring and likely to break something else.
Most importantly, using an LLM does not give me deeper understanding of the problem or the solution, it keeps knowledge locked in a black box.
[0]: https://aria.dog/barks/forklift-certified-license/
nenenejej|5 months ago
At least when doing stuff the old way, you learn something even if you waste time.
That said AI is useful enough and some poker games are +EV.
So this is more a caution-AI take than an anti-AI take. Really, it's an anti-vibe-koolaid take.