> it is sad to me just how much people are trying to automate away programming and delegating it to a black box
I take it you're not using a compiler to generate machine code, then?
Scratch that, I guess you're not using a modern microprocessor to generate microcode from a higher-level instruction set either?
Wait, real programm^Wartists use a magnetised needle and a steady hand.
Programming has always been about finding the next black box that is both powerful and flexible. You might be happy with the level of abstraction you have settled on, but it's just as arbitrary as any other level.
Even the Apollo spacecraft programmers at MIT had a black box: they offloaded the weaving of core rope memory to other people. Programming is not necessarily about manually doing the repetitive stuff. In some sense, I'd argue that's antithetical to programming -- even if it makes you feel artistic!
The thing is, all these stacks are built by people and verified against specifications. When they failed to perform the way they should, we fixed them.
Plus, all the parts are deterministic in this stack. Their behavior is fixed for a given input, and all the parts are interpretable, readable, verifiable and observable.
LLMs are none of that. They are stochastic probability machines, which are nondeterministic. We can't guarantee their output's correctness, and we can't fix them to guarantee correct output. They are built on tons of (unethically sourced) data, which has no correctness and quality guarantees.
Some people will love LLMs, and/or see programming as a task/burden they have to complete. Some of us love programming for the sake of it, and earn money by doing it that way, too.
So putting LLMs in the same bucket as a deterministic, task-specific programming tool is both wrong and a disservice to both.
I'm also strongly against LLMs, not because of the tech, but because of how they are trained, how their shortcomings are hidden, and how they're put forward as "oh, the savior of the woeful masses, and the silver bullet of all thy problems", when they're neither.
LLMs are just glorified tech demos that show what stochastic parrots can appear to accomplish when you feed the whole world to them.
>> I take it you're not using a compiler to generate machine code, then?
The dismissive glibness of your comment makes me wonder if it's worth it trying to point out the obvious error in the analogy you're making. Compilers translate, LLMs generate. They are two completely different things.
When you write a program in a high-level language and pass it to a compiler, the compiler translates your program to machine code, yes. But when you prompt an LLM to generate code, what are you translating? You can pretend that you are "translating natural language to code" but LLMs are not translators, they're generators, and what you're really doing is providing a prefix for the generated string. You can generate strings from an LLM with an empty prefix; but try asking a compiler to compile an empty program.
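The prefix point can be made concrete with a toy model. The bigram table below is entirely made up for illustration (a real LLM is vastly larger and probabilistic over a huge vocabulary), but the interface is the same: given a prefix, even an empty one, produce a continuation. There is no input to "translate".

```python
import random

# Toy "language model": a made-up bigram table mapping each token to
# its possible successors. <s> and </s> mark start and end of sequence.
BIGRAMS = {
    "<s>": ["the", "a"],
    "the": ["cat", "code"],
    "a": ["dog"],
    "cat": ["</s>"],
    "code": ["</s>"],
    "dog": ["</s>"],
}

def generate(prefix=()):
    """Extend a (possibly empty) prefix until the end-of-sequence token."""
    tokens = ["<s>", *prefix]
    while tokens[-1] != "</s>":
        tokens.append(random.choice(BIGRAMS[tokens[-1]]))
    return tokens[1:-1]

# An empty prefix is perfectly valid input for a generator...
print(generate())  # e.g. ['the', 'cat']
# ...whereas a compiler handed an empty program has nothing to translate.
```

A compiler maps one formal artifact to an equivalent one; a sampler like this just keeps extending whatever string you started it with.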
>> Even the Apollo spacecraft programmers at MIT had a black box: they offloaded the weaving of core rope memory to other people.
There is no "black box" here. Programmers created the program and handed it over to others to code it up. That's like hiring someone to type your code for you at a keyboard, following your instructions to do so. You have to stretch things very far to see this as anything like compilation.
Also, really, compilers are not black boxes. Just because most people treat them as a scary unknowable thing doesn't mean that's what they are. LLMs are "black boxes" because no matter how much we peer at their weights, arrays of numerical values, there's nothing we can ... er ... glean from them. They're incomprehensible to humans. Not so the code of a compiler. Even raw binary is comprehensible, with some experience.
I get what you're trying to say, but I don't entirely agree. Raising levels of abstraction is generally a good thing. But up until now, those have mostly been deterministic. We can be mostly confident that the compiler will generate correct machine code based on correct source code. We can be mostly confident that the magnetised needle does the right thing.
I don't think this is true for LLMs. Their output is not deterministic (up for discussion). Their weights and the sources thereof are mostly unknown to us. We cannot really be confident that an LLM will produce correct output based on correct input.
Thank you for saying this. It's always baffled me that people will decry ChangeX as unnatural and wrong when it happens in their lifetime, but happily build their lives upon NearlyIdenticalChangeY so long as it came before them.
I don't think that this is a fair comparison because at some point the nature of the craft actually does change.
To give an analogy, a carpenter might be happy with hand tools, happy with machine tools, happy with plywood, and happy with MDF. For routine jobs they may be happy to buy pre-fabbed cabinets.
But for them to employ an apprentice (AI in this example) and outsource work to them - suddenly they are no longer really acting as a carpenter, but a kind of project manager.
edit: I agree that LLMs in their current state don't really fundamentally change the game - the point I am trying to make is that it's completely understandable that everyone has their own "stop" point. Otherwise, we'd all live in IKEA mansions.
This is a very poor analogy. It's not a matter of abstractions, it's a matter of getting someone or something else to do the work, while you mostly watch and fix any errors you're able to catch.
This is a qualitatively different kind of abstraction. All other abstractions still require the programmer to express the solution in a formal language, while LLMs are allowing the user to express the solution in natural language. It's no longer programming, but much more like talking to a programmer as a manager.
You can learn how compilers work and understand how they do what they do. Nobody understands what’s in those billions of parameters, and no one ever will.
The question is about the abstraction being understandable and predictable. All the examples you have follow that, LLMs throw that out of the window.
>> Scratch that, I guess you're not using a modern microprocessor to generate microcode from a higher-level instruction set either?
Hell, I design gate-level logic -> map it to instructions -> use them in C for the very LLMs, and I can fully understand[0] every aspect of it (if it doesn't behave as expected, that is a bug), but I cannot fathom or predict how the LLMs behave when I use them, even though I know their architecture and implementation.
[0] Admittedly I treat the tools I use during the process, like CAD tools and compilers, as black boxes; however, I know that if I want to, or the need arises, I can debug/understand them.
I guess we are currently in the special situation that we as human programmers can understand the output of a coding LLM. That's because programming languages are designed to be human-readable, and we had an incentive to learn those languages.
I imagine that machine-learning-powered coding will evolve into an even blacker box than it is today: it will transform requirements into CPU instructions (or GPU instructions, netlists, ...) directly. Why bother with those indirections, which are just convenience layers for those weak carbon units (urgh)?
Simultaneously, automation will likely lead to fewer skilled programmers in the future, because there will be fewer incentives to become one.
Together those effects could lead to a situation where we are condemned to just watch.
If LLMs were actually good for programming, I would consider it, but they just aren't. Especially when we are talking about "assistants" and stuff like that. I feel like I live in an alternate reality when it comes to the AI hype. I have to wonder if people are just that bad at programming or if they have a financial incentive here.
There are a handful of cases where LLMs are useful: mainly as search, because Google is horrifically bad at bringing up useful results, or when you can't find the right words to describe a problem.
What I would like to see out of an AI tool is something that gobbles up the documentation for another programming tool or language and spits it back out when it is relevant, or some context-aware question-and-answer like "where in the code base does XYZ originate" or w/e. The difference is having a tool that assists me vs. having a tool spit out a bunch of garbage code. It's the difference between using a tool and being used by a tool.
> I have to wonder if people are just that bad at programming or if they have a financial incentive here.
I have similar feelings to you, but I want to be careful about making assumptions. That being said, I see so many people making hyperbolic claims about the productivity gains of llms and a huge amount (though not all) of the time, they are doing low value work that betrays their inexperience and/or lack of ability.
I have yet to see a good example of where an llm invented a novel solution to an important problem in programming. Until that happens -- and I'm not saying it won't -- I remain extremely skeptical about the grandiose claims. This is particularly true of the companies selling llm products who make vague claims about productivity benefits. Who is more productive, the person who solves the most leet code problems in a month or the person who implements a new compiler in the same time frame? The former will almost surely have the most lines of code, but they have done nothing of direct value. I point this out because of how often productivity is measured in lines of code and/or time to complete a problem with a known solution.
So for me, when people brag about how much more productive they are with llms, I wonder, ok, well what are you building? I feel like llms are as likely going to make people build fragile bridges to nowhere at scale as anything truly revolutionary.
There's a large continuum between great and crap, and it sounds like you've placed a rather high bar to even consider using it.
I don't like BASH scripting. I wanted to automate a certain task and dump it in a justfile for convenient reference.
Learning BASH scripting would be a poor use of my time - I didn't value the knowledge I would gain.
Using Google to piece together everything I needed would have been very painful. Painful enough that I simply didn't bother in the past.
Asking an LLM solved the problem for me. It took about 6 iterations, because I had somewhat underspecified the task, and the scripts it returned, while correct, had side effects I didn't like.
But even though it took several iterations it was infinitely more satisfying than the other options. Every time it failed I would explain to it what went wrong and it would amend the script.
It's like having an employee do the work for me, but much much cheaper.
That's the power of LLMs. They enable me to do things that just weren't worth the time in the past.
Would I use it for my main programming work? No. But does it increase my productivity? Definitely.
I know jack squat about programming. I could at one point do “hello world” in Python, if I recall correctly. Thanks to ChatGPT I now have scripts to make my life easier in a bunch of ways and a growing high-level understanding of how they work. I can’t program, and I won’t claim to. But I can be useful. Thanks to LLMs. (And before the catastrophists arrive: I know enough to be wary of the risks of running scripts I don’t understand, which is why I make a point of understanding how they function before I proceed)
I'm starting to think that the people that moan the most about LLMs being terrible, might just be terrible at writing good queries.
Like everything else: garbage in, garbage out.
EDIT: I was not aiming this comment directly at you. But I've had a couple of devs try to convince me that tools like ChatGPT or Claude are garbage, and then use extremely short queries as proof.
"Write me a website with [list of specs]", and then when it either fails or spits out half-baked results, they go "See? It's garbage!"
On the other hand I've seen non-coders create usable tools, by breaking up the problem and inputting good queries for each of those sub-tasks.
>If LLMs were actually good for programming, I would consider it, but they just aren't. Especially when we are talking about "assistants" and stuff like that. I feel like I live in an alternate reality when it comes to the AI hype. I have to wonder if people are just that bad at programming or if they have a financial incentive here.
solid points there.
it is surely some of both reasons. for the bad programmers, it will be the former. for those invested in llms, it will be the latter, that is financial incentives - to the tune of billions or millions or close to millions, depending upon whether you are an investor in or founder of a top llm company, or are working in such a company, or in a non-top company. it's the next gold rush, obviously, after crypto and many others before. picks and shovels, anyone?
and, more so for those for whom there are financial incentives, they will strenuously deny your statements, with all kinds of hand waving, expressions of outrage, ridicule, diversionary statements, etc.
that's the way the world goes. not with a bang but a whimper. ;)
I asked both ChatGPT 4o and Claude 3.5 Sonnet how many letters there are in the word strawberry and both answered “There are two r’s in the word strawberry”. When I asked “are you sure?” ChatGPT listed the letters one by one and then said yes, there are indeed two. Claude apologized for the mistake and said the correct answer is one.
If the LLM cannot even solve such a simple question, something a young child can do, and confidently gives you incorrect answers, then I’m not sure how someone could possibly trust it for complex tasks like programming.
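For contrast, the deterministic version of that "task" is a one-liner that gives the same answer every single time:

```python
# Counting letters is trivially deterministic: same input, same output,
# every time -- exactly the guarantee the LLM cannot make.
word = "strawberry"
print(word.count("r"))  # 3
```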
I’ve used them both for programming and have had mixed results. The code is always mediocre at BEST but downright wrong and buggy at worst. You must review and understand everything it writes. Sometimes it’s worth iteratively getting it to generate stuff and you fix it or tell it what to fix, but often I’m far quicker just doing it myself.
That’s not to say that it isn’t useful. It’s great as a tool to augment learning from documentation. It’s great at making pros and cons lists. It’s great as a rubber duck. It can be helpful to set you on a path by giving some code snippets or examples. But the code it generates should NEVER be used verbatim without review and editing, at best it’s a throwaway proof of concept.
I find them useful, but the thoughts that people use them as an alternative to knowing how to program or thinking about the problem themselves, that scares me.
I also find programming extremely enjoyable, my means of expression, and an art form. I have hundreds of side projects in my archive, maybe five of which have ever been used by another human. It's all for the sake of coding. Many of them are sizeable and many are not but they are almost all done as a creative outlet, for the joy of doing it or to satisfy a curiosity.
But I don't know man, I love coding with LLMs. It just opens up more things, I think on some projects I actually spend MORE time on traditional coding than I did in the past, because I used an LLM to write scripts to automate some tedious data processing required for the project. And there's also projects where the LLM gets me from 0 to 60 and then I rather quickly write the code I actually care about writing, and may or may not end up replacing all the LLM written code.
I'm sure it heavily depends on exactly what types of project interest you. The fact that LLMs and diffusion have both become fixations of mine also means I have a lot more data processing involved in lots of my projects, and LLMs are quite good at custom data processing scripts.
I suppose my suggestion to the author would be that perhaps their projects aren't amenable to LLMs in the way they want and that's fine, but don't lose hope that there are kindred spirits out there just because so many people love LLM coding; some of us are both and that may be more about what types of projects we do.
This is a post that just reads as if the author is still in the “honeymoon” stage of their career where programming is seen as this extremely liberating and highly creative endeavour that no other mortal can comprehend.
I get the feeling and I was there too, but, writing code has always been a means to an end which is to deliver business value. Painting it as this almost abstract creative process is just… not true. While there are many ways to attack a given problem, the truth is once you factor in efficiency, correctness, requirements and the patterns your team uses then the search space of acceptable implementations reduces a lot.
Learn a couple of design patterns, read a couple of blogs and chat with your team and that’s all you need.
Letting an LLM write down the correct and specific ideas you tell it, based on what I wrote earlier, frees your time to do code reviews, attend important meetings, align on high-level aspects, and help your team members, all of which multiply the value you deliver beyond code alone.
Let LLMs automate the code writing so I can finally program in peace, I say!
I get that at some point you have to put food on the table, but why conflate the enterprisey, economical, object-oriented mess of things with your hobby?
You can, in theory, still program elegant little side projects with no pretense of business value or any customer besides, maybe, yourself.
I find that my work-coding and hobby-coding are different enough that they don't even feel like the same activity
This sounds like a nightmare to me. The last thing I want out of my work day is to attend more "important meetings" and "multiply" my value. This is the kind of thinking that makes us less human, just widgets that are interchangeable. No thanks.
I still very much felt like I was creatively crafting this [0] project even though the entire approach used the Claude project feature. I had to hand-write some sections but for the most part I was just instructing, reading, refining, and then copying and pasting. I was the one who instructed the use of a bash parser and operating on the AST for translation between text and GUI. I was the one who instructed the use of a plugin architecture to enforce decoupling. I was the one who suggested every feature and the look of the GUI. The goal was to create an experimental UI for creating and analyzing bash pipelines. The goal was not to do a lot of typing!
These high-level abstractions are where I find the most joy in programming. Perhaps for some there is still some modicum of enjoyment in writing a for loop, but for most people twenty years into a career there's nothing but the feeling of grinding out the minutiae.
There's still a lot of room for better abstractions when it comes to interfacing with computing devices. I'd love to write my own operating system, CLI interface, terminal, and scripting language, etc from scratch and to my own personal preferences. I don't imagine I could ever have the time to handcraft such a vast undertaking. I do imagine that within a few decades I will be able to guide a computing assistant through the entire process and with great joy!
English, and other natural languages, are vague and imprecise. I've never understood why folks think they can write code "more efficiently" with a prompt rather than code. Are people willing to give up control? Let the LLM decide what is best? The same is true for generative art -- you get something, but you only have marginal control over what. I think this will always be something that is useful for the simplest things: the simplest apps, the simplest art, etc. A race to the bottom for the bottom of the complexity stack. As problems become more complicated, it would take a great deal more prompt language to specify the behavior than code.
LLM output can't really be trusted so I need to "proof read" it and convince myself that it is correct. In the language I use every day and have a high degree of fluency, it's faster for me to simply write what's in my head than to proof read unknown code. So how can LLMs make me more productive in actual programming?
I use an LLM to generate ideas, to rubber duck, to get a lead on unknowns, and to generate boilerplate occasionally. So I do everything except replace the coding part because that's what requires the most precision, and LLMs are bad at precision. And yet, people claim massive productivity gains in specifically coding. What am I missing?
For me LLMs are like programming power tools. Use them wrong and you can hurt yourself. Use them right and you can accomplish far more in the same amount of time.
People that refuse to program with AI or intellisense or any other assistance are like carpenters who refuse to build furniture with power saws and power drills. Which is perfectly fine, but IMO that choice doesn't really affect the artistry of the final product
I use an LLM precisely BECAUSE I want to focus on the art. Like Davinci would use apprentices.
LLMs can do mindless drudgery just as well as I can, but in seconds instead of hours. There's nothing about remembering syntax, boilerplate code, forgetting a semicolon, googling the most common way of doing something, or combining some documentation to fill in the gaps that's even remotely "art" to me.
I never ask an LLM for what I'm artfully creating. I ask it for what I know it'll get instantly right, so I can move on to my next thought.
I have a lot of different thoughts as to why using an LLM feels "off". One I've been thinking about as of late is that it feels flawed to measure productivity by code velocity, i.e. lines of code written per hour.
Like, ideally, it shouldn't really take that much code to implement a thing. I like to think of programming as writing a bunch of levers, starting with simple levers for simple jobs, incrementally ratcheting up to larger levers lifting the smaller levels. Before too long, it'll feel as though you've written a lever capable of lifting the world...or at least one that makes an otherwise wickedly difficult project reasonably manageable.
If you say that LLMs make you more productive because it allowed you to finish a project that would otherwise take forever to write, then I'm skeptical that an LLM is the best solution. I mean, it's a solution at least, but I can't help but wonder if there's a better solution.
If the problem is that you lack the understanding to take on such a project, then perhaps what we really need are better tools for understanding. I myself have found that LLMs are great for gaining a quick understanding of languages that otherwise have sparse information for beginners, but I have to wonder if perhaps there's a better way.
If, on the other hand, the problem is that writing that much code would take forever, then I have to wonder if the real solution is that we need a better way to turn programming languages into patterns (levers) and turn said patterns into larger patterns (larger levers)
A partial solution works, but only partially well, and occasionally has consequences one has to reckon with
I'm of the opposite opinion: I've started enjoying programming much more after embracing LLMs.
* They are great for overcoming procrastination. As soon as I don't feel like doing something or a task feels tedious I can just delegate it to an LLM. If it doesn't solve it outright it at least makes me overcome the initial feeling of dread for the task.
* They give me better solutions than I initially had in mind. LLMs have no problem adding laborious safeguards against edge-cases that I either didn't think of or that I assessed wouldn't be worth it if I did it manually. E.g. something that is unlikely and would normally go to the backlog instead. I've found that my post-LLM code is much more robust from the get go.
* They let me try out different approaches easily. They have no problem rewriting the whole solution using another paradigm, again and again. They are tireless.
* They let me focus on the creative parts that I enjoy. This surprised me since I've always thought of myself as someone who loves programming but it turns out that it is only a small subset of programming I love. The rest I'm happy to delegate away.
> This surprised me since I've always thought of myself as someone who loves programming but it turns out that it is only a small subset of programming I love.
I am the same, and why many of my personal projects end up stranded. Once I've solved the tricky bit, the rest often isn't that motivating as it's usually variations on a common theme.
I held off LLMs for a long time, but recently been playing with them. They can certainly confidently generate junk, but in most cases it's good enough. And like you say can be used as a driver to keep going. In that regard they can be useful.
This is exactly how I use LLMs - I can automate the really boring parts. "Can you write me a Swift codable struct for the following JSON" will save my fingers and precious mental energy for the important and interesting parts.
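In Python terms, the chore being delegated looks something like this (the JSON payload and `Product` type are made-up examples, not anything from the original comment): mapping JSON fields onto a typed struct is mechanical, which is exactly why it's safe to hand off.

```python
import json
from dataclasses import dataclass

# Hypothetical payload: the shape is invented for illustration.
payload = '{"id": 42, "name": "widget", "price": 9.99}'

# The boring part an LLM can churn out: a field-for-field typed mirror
# of the JSON object.
@dataclass
class Product:
    id: int
    name: str
    price: float

product = Product(**json.loads(payload))
print(product.name)  # widget
```

The value isn't that the code is hard; it's that there is exactly one obviously correct answer, so reviewing the generated version takes seconds.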
It's like having a junior dev that doesn't complain and gets the work done immediately.
AI code suggestions as I type are however a different beast. It's easy to introduce subtle bugs when the suggestion "kinda looks right" but in fact the LLM had zero understanding of the context because it can't read my mind.
Same, these are all great points that I find as well. LLMs have made me a way more productive programmer, but a lot of that is because I already was an alright programmer and know how to take advantage of the strengths and weaknesses of the LLM. I think your last bullet point is most poignant, using Claude 3.5 I've been able to do tons of GUI and web programming, things I absolutely despise and refuse to do if I'm writing code by hand.
I sort of understand some of the vitriol that I see on HN but it is incredibly overblown. I don't really get a lot of the criticisms. LLMs aren't deterministic? Neither are humans. LLMs write bugs they can't fix? So do humans. LLMs are only good at being junior programmer copy paste machines? So are lots of humans.
My current project is training an LLM to do superoptimization and it's working exceedingly well so far. If you asked anyone on hacker news if that's a good idea, they'd probably say no.
I do sometimes get the impression that there will be a generational gap in ability to code between millennials and zoomers.
We had an overdemand for devs during late ZIRP early COVID leading to bootcamps and self taught pulling a lot of untrained into the industry. Many of them have left the industry.
Add to that the whole data science bubble and it’s bursting where we had tons of degrees and job openings for sort-of-devs. Lot of those jobs are gone now too.
Don’t forget the pull of “product management” and its demise outside big tech.
Now we have hiring freezes and juniors leaning on LLMs instead of actually spending an hour trying to solve problems.
I feel the same. I understand why others think AI is just another tool like IntelliSense, but for me IntelliSense and any other automatic refactor is a fixed algorithm that I understand, that I know exactly what it is doing, and that I know is correct.
With AI I need to review the output, but not because there may be some issues I didn't notice; rather, because there may be issues the tool itself didn't notice. So it's less of an "apply this specific change" and more of an "apply some change".
With IntelliSense, I disregard about 98% of the suggestions. Do people do that with generated code? Doubtful. With an LLM, even that last 2% requires more effort because it creates weird and irrational bugs that I have to review.
Exactly, now replace code with AI-generated art, photos, drawings, videos, music. Your employers couldn't care less if it's convincing enough to ship, even better now that it only takes seconds to minutes.
We are at the cusp of creative destruction and we are only getting started. Ironically, blue-collar jobs seem safe, as there hasn't been a humanoid revolution, and what I see in the white-collar field is what blue-collar workers experienced before the automation and offshoring of jobs.
To have all these thoughts, I think you'd have to have never really used an LLM to help you code, or to be almost comically closed-minded when you do. What they feel like when you actually use them is a combination of a better SO and a very prescient auto-completer. It does not at all feel like delegating programming work to a robot. No loss of artistry comes into play, and it's damn useful.
In an ideal world, our abstractions would be so perfect that there would be no mundane boiler-platey parts of a program; you'd use the abstractions to construct software from a high level and leave details be. But our abstractions are very far from perfect: there's all kinds of boring code you just have to write because, well, your program has to work. And generally that code is, if you look, most of your code. This because making good abstractions is really hard and constructing fresh ones is often more work than just typing out the different cases. If you think this is mistaken, I'd gently suggest you take a fresh look at your own code.
Anyway, that's where LLMs come in. They help write the boring code. They're pretty good at it in some cases, and very bad at it in others. When they're good at it, it's because what the code should do is sort of overspecified; it's clear from context what, say, this function has to do to be correct, and the LLM is able to see and understand that context, and thus generate the right code to implement it. This code is boring because it is in some vague sense unnecessary; if it couldn't be otherwise, why do you have to write it at all? Well you do, and the LLM has taken care of it for you.
You can call this work the LLM is displacing "art", but I wouldn't. It's more the detritus of art performed in a specific way, the manual process required to physically make the art given the tools available.
You could object that the LLMs will get better in the sense that not only that they will make fewer mistakes, but they will be able to take on increased scope, pushing closer to what I'd consider the "real" decisions of a program. If this happens -- and I hope it does -- then we should reevaluate our lofty opinions of ourselves as artists, or at least artists whose artistry is genuinely valuable.
Author needs to get into Bret Victor. Has no idea how much more fun he could be having.
Programming is a step on the way to access to the state space of information. When we get to that stage, programming will seem like a maze of syntax, that has its own idiosyncrasies that force you into corners or regions in the state space, just like any DAW plugin or 3D tool, or any tool at all that exists.
This might be the most "zoomed in" take on programming I've ever read (where a zoomed out take understands that software usually just enables a business to do business). I almost thought it was satire.
I feel like you have to drop this kind of thinking to get anywhere past intermediate, not to mention you become a nightmare to anyone who has even a touch of pragmatism about them.
I used to work as a programmer, but have since pivoted into analysis - so I just view programming as another tool in my toolbox to solve problems. My main goal is to deliver insights and answer questions.
Sad to say, LLMs have made me a lazy coder for the past two years or so. But I do deliver/finish work much faster, so my incentives to keep using LLMs as coding co-pilots overshadow my incentives to write code the "old way".
And for what it is worth, sometimes these posts read like modern luddite confessions - the rants just sound too personal.
[+] [-] kqr|1 year ago|reply
I take it you're not using a compiler to generate machine code, then?
Scratch that, I guess you're not using a modern microprocessor to generate microcode from a higher-level instruction set either?
Wait, real programm^Wartists use a magnetised needle and a steady hand.
Programming has always been about finding the next black box that is both powerful and flexible. You might be happy with the level of abstraction you have settled on, but it's just as arbitrary as any other level.
Even the Apollo spacecraft programmers at MIT had a black box: they offloaded the weaving of core rope memory to other people. Programming is not necessarily about manually doing the repetitive stuff. In some sense, I'd argue that's antithetical to programming -- even if it makes you feel artistic!
[+] [-] bayindirh|1 year ago|reply
Plus, all the parts are deterministic in this stack. Their behavior is fixed for a given input, and all the parts are interpretable, readable, verifiable and observable.
LLMs are none of that. They are stochastic probability machines, which are nondeterministic. We can't guarantee their output's correctness, and we can't fix them to guarantee correct output. They are built on tons of (unethically sourced) data, which has no correctness and quality guarantees.
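To make the stochastic point concrete, here's a toy sampler (a sketch, with no resemblance to any real model's internals): at temperature 0 the same input always yields the same token, while at temperature 1 the same input yields different tokens under different random states.

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Pick a token index from unnormalized scores.

    temperature == 0 -> deterministic argmax; > 0 -> stochastic sampling."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # softmax numerator
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.9, 0.5]  # nearly tied top two: sampling will flip between them

greedy = {sample_next(logits, 0, random.Random(seed)) for seed in range(100)}
sampled = {sample_next(logits, 1.0, random.Random(seed)) for seed in range(100)}

print(greedy)            # {0} -- one input, one output, every time
print(len(sampled) > 1)  # True -- same input, several different outputs
```

Production systems sample exactly this way (plus top-k/top-p filtering), which is why "correct for a given input" is not a guarantee you can make.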
Some people will love LLMs, and/or see programming as a task/burden they have to complete. Some of us love programming for the sake of it, and earn money by doing it that way, too.
So putting LLMs in the same bucket as a deterministic, task-specific programming tool is both wrong and a disservice to both.
I'm also strongly against LLMs, not because of the tech, but because of how they are trained, how their shortcomings are hidden, and how they're put forward as the "oh the savior of the woeful masses, and the silver bullet of all thy problems", when they're neither.
LLMs are just glorified tech demos that show what stochastic parrots can appear to accomplish when you feed the whole world to them.
[+] [-] YeGoblynQueenne|1 year ago|reply
The dismissive glibness of your comment makes me wonder if it's worth it trying to point out the obvious error in the analogy you're making. Compilers translate, LLMs generate. They are two completely different things.
When you write a program in a high-level language and pass it to a compiler, the compiler translates your program to machine code, yes. But when you prompt an LLM to generate code, what are you translating? You can pretend that you are "translating natural language to code", but LLMs are not translators, they're generators, and what you're really doing is providing a prefix for the generated string. You can generate strings from an LLM with an empty prefix; but try asking a compiler to compile an empty program.
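To illustrate with deliberately silly stand-ins (both invented for this comment; nothing like a real compiler or LLM): the "compiler" is a pure function of its input that rejects an empty program, while the "generator" happily extends any prefix, including the empty string.

```python
import random

def toy_compile(source: str) -> str:
    """A stand-in 'compiler': a pure function of its input.

    Equal inputs give equal outputs, and an empty program is an error."""
    if not source.strip():
        raise ValueError("nothing to translate")
    return "; ".join(f"PUSH {token}" for token in source.split())

def toy_generate(prefix: str, rng: random.Random, n: int = 5) -> str:
    """A stand-in 'generator': samples a continuation of any prefix,
    including the empty one."""
    vocab = ["foo", "bar", "baz"]
    words = prefix.split()
    for _ in range(n):
        words.append(rng.choice(vocab))
    return " ".join(words)

print(toy_compile("a b"))                  # PUSH a; PUSH b
print(toy_generate("", random.Random(1)))  # five sampled words, from nothing
try:
    toy_compile("")
except ValueError as err:
    print("compiler:", err)                # the compiler has nothing to translate
```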
>> Even the Apollo spacecraft programmers at MIT had a black box: they offloaded the weaving of core rope memory to other people.
You're referring to core rope memory:
https://en.wikipedia.org/wiki/Core_rope_memory
There is no "black box" here. Programmers created the program and handed it over to others to code it up. That's like hiring someone to type your code for you at a keyboard, following your instructions to do so. You have to stretch things very far to see this as anything like compilation.
Also, really, compilers are not black boxes. Just because most people treat them as a scary unknowable thing doesn't mean that's what they are. LLMs are "black boxes" because no matter how much we peer at their weights, arrays of numerical values, there's nothing we can ... er ... glean from them. They're incomprehensible to humans. Not so the code of a compiler. Even raw binary is comprehensible, with some experience.
[+] [-] elric|1 year ago|reply
I don't think this is true for LLMs. Their output is not deterministic (up for discussion). Their weights and the sources thereof are mostly unknown to us. We cannot really be confident that an LLM will produce correct output based on correct input.
[+] [-] rstat1|1 year ago|reply
[+] [-] csallen|1 year ago|reply
[+] [-] throwaway22032|1 year ago|reply
To give an analogy, a carpenter might be happy with hand tools, happy with machine tools, happy with plywood, and happy with MDF. For routine jobs they may be happy to buy pre-fabbed cabinets.
But for them to employ an apprentice (AI in this example) and outsource work to them - suddenly they are no longer really acting as a carpenter, but a kind of project manager.
edit: I agree that LLMs in their current state don't really fundamentally change the game - the point I am trying to make is that it's completely understandable that everyone has their own "stop" point. Otherwise, we'd all live in IKEA mansions.
[+] [-] kif|1 year ago|reply
[+] [-] rodrigosetti|1 year ago|reply
[+] [-] milemi|1 year ago|reply
[+] [-] eternauta3k|1 year ago|reply
[+] [-] sifar|1 year ago|reply
>> Scratch that, I guess you're not using a modern microprocessor to generate microcode from a higher-level instruction set either?
Hell, I design gate-level logic -> map it to instructions -> use them in C for the very LLMs, and I can fully understand[0] every aspect of it (if it doesn't behave as expected, that is a bug). But I cannot fathom or predict how the LLMs behave when I use them, even though I know their architecture and implementation.
[0] Admittedly I treat the tools I use during the process, like CAD tools and the compiler, as black boxes. However, I know that if I want to, or the need arises, I can debug/understand them.
[+] [-] Palomides|1 year ago|reply
[+] [-] lagrange77|1 year ago|reply
I imagine that machine-learning-powered coding will evolve into an even blacker box than it is today: it will transform requirements into CPU instructions (or GPU instructions, netlists, ...). Why bother with those indirections, which are just convenience layers for those weak carbon units (urgh)?
Simultaneously, automation will likely lead to fewer skilled programmers in the future, because there will be fewer incentives to become one.
Together those effects could lead to a situation where we are condemned to just watch.
[+] [-] sweeter|1 year ago|reply
There are a handful of cases where LLMs are useful. Mainly, because Google is horrifically bad at bringing up useful search results, they can help in that regard... or when you can't find the right words to describe a problem.
What I would like to see out of an AI tool is something that gobbles up the documentation for another programming tool or language and spits it back out when it is relevant, or some context-aware questions and answers like "where in the code base does XYZ originate" or whatever. The difference is having a tool that assists me vs. having a tool spit out a bunch of garbage code. It's the difference between using a tool and being used by a tool.
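A crude sketch of what that could mean (everything here is hypothetical, a toy keyword index rather than a real product): the tool surfaces existing documentation instead of generating new text.

```python
def build_index(snippets):
    """Map each lowercased word to the documentation snippets containing it."""
    index = {}
    for snippet in snippets:
        for word in set(snippet.lower().split()):
            index.setdefault(word, []).append(snippet)
    return index

def lookup(index, query):
    """Return snippets sharing the most words with the query, best first."""
    scores = {}
    for word in query.lower().split():
        for snippet in index.get(word, []):
            scores[snippet] = scores.get(snippet, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

# Invented documentation snippets, standing in for a real project's docs.
docs = [
    "parse_config reads settings from the TOML file",
    "XYZ originates in the scheduler module",
    "the scheduler polls workers every tick",
]
index = build_index(docs)
print(lookup(index, "where does XYZ originate")[:1])
# ['XYZ originates in the scheduler module']
```

Everything the tool returns already exists and is attributable, which is the "assists me" property: it can point, but it cannot make things up.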
[+] [-] norir|1 year ago|reply
I have similar feelings to you, but I want to be careful about making assumptions. That being said, I see so many people making hyperbolic claims about the productivity gains of llms and a huge amount (though not all) of the time, they are doing low value work that betrays their inexperience and/or lack of ability.
I have yet to see a good example of where an llm invented a novel solution to an important problem in programming. Until that happens -- and I'm not saying it won't -- I remain extremely skeptical about the grandiose claims. This is particularly true of the companies selling llm products who make vague claims about productivity benefits. Who is more productive, the person who solves the most leet code problems in a month or the person who implements a new compiler in the same time frame? The former will almost surely have the most lines of code, but they have done nothing of direct value. I point this out because of how often productivity is measured in lines of code and/or time to complete a problem with a known solution.
So for me, when people brag about how much more productive they are with llms, I wonder, ok, well what are you building? I feel like llms are as likely going to make people build fragile bridges to nowhere at scale as anything truly revolutionary.
[+] [-] BeetleB|1 year ago|reply
I don't like BASH scripting. I wanted to automate a certain task and dump it in a justfile for convenient reference.
Learning BASH scripting would be a poor use of my time - I didn't value the knowledge I would gain.
Using Google to piece together everything I needed would have been very painful. Painful enough that I simply didn't bother in the past.
Asking an LLM solved the problem for me. It took about 6 iterations, because I had somewhat underspecified the task, and the scripts it returned, while correct, had side effects I didn't like.
But even though it took several iterations it was infinitely more satisfying than the other options. Every time it failed I would explain to it what went wrong and it would amend the script.
It's like having an employee do the work for me, but much much cheaper.
That's the power of LLMs. They enable me to do things that just weren't worth the time in the past.
Would I use it for my main programming work? No. But does it increase my productivity? Definitely.
[+] [-] Arn_Thor|1 year ago|reply
[+] [-] TrackerFF|1 year ago|reply
I'm starting to think that the people that moan the most about LLMs being terrible, might just be terrible at writing good queries.
Like everything else: garbage in, garbage out.
EDIT: I was not aiming this comment directly at you. But I've had a couple of devs try to convince me that tools like ChatGPT or Claude are garbage, and then use extremely short queries as proof.
"Write me a website with [list of specs]", and then when it either fails or spits out half-baked results, they go "See? It's garbage!"
On the other hand I've seen non-coders create usable tools, by breaking up the problem and inputting good queries for each of those sub-tasks.
[+] [-] fuzztester|1 year ago|reply
solid points there.
it is surely some of both reasons. for the bad programmers, it will be the former. for those invested in llms, it will be the latter, that is financial incentives - to the tune of billions or millions or close to millions, depending upon whether you are an investor in or founder of a top llm company, or are working in such a company, or in a non-top company. it's the next gold rush, obviously, after crypto and many others before. picks and shovels, anyone?
and, more so for those for whom there are financial incentives, they will strenuously deny your statements, with all kinds of hand waving, expressions of outrage, ridicule, diversionary statements, etc.
that's the way the world goes. not with a bang but a whimper. ;)
sorry, t.s. eliot.
https://en.m.wikipedia.org/wiki/The_Hollow_Men
[+] [-] _xiaz|1 year ago|reply
[+] [-] dkersten|1 year ago|reply
If the LLM cannot even solve such a simple question, something a young child can do, and confidently gives you incorrect answers, then I’m not sure how someone could possibly trust it for complex tasks like programming.
I’ve used them both for programming and have had mixed results. The code is always mediocre at BEST but downright wrong and buggy at worst. You must review and understand everything it writes. Sometimes it’s worth iteratively getting it to generate stuff and you fix it or tell it what to fix, but often I’m far quicker just doing it myself.
That’s not to say that it isn’t useful. It’s great as a tool to augment learning from documentation. It’s great at making pros and cons lists. It’s great as a rubber duck. It can be helpful to set you on a path by giving some code snippets or examples. But the code it generates should NEVER be used verbatim without review and editing, at best it’s a throwaway proof of concept.
I find them useful, but the thoughts that people use them as an alternative to knowing how to program or thinking about the problem themselves, that scares me.
[+] [-] Tiberium|1 year ago|reply
[+] [-] furyofantares|1 year ago|reply
But I don't know, man, I love coding with LLMs. It just opens up more things. I think on some projects I actually spend MORE time on traditional coding than I did in the past, because I used an LLM to write scripts to automate some tedious data processing required for the project. And there are also projects where the LLM gets me from 0 to 60 and then I rather quickly write the code I actually care about writing, and may or may not end up replacing all the LLM-written code.
I'm sure it heavily depends on exactly what types of project interest you. The fact that LLMs and diffusion have both become fixations of mine also means I have a lot more data processing involved in lots of my projects, and LLMs are quite good at custom data processing scripts.
I suppose my suggestion to the author would be that perhaps their projects aren't amenable to LLMs in the way they want and that's fine, but don't lose hope that there are kindred spirits out there just because so many people love LLM coding; some of us are both and that may be more about what types of projects we do.
[+] [-] brunooliv|1 year ago|reply
[+] [-] _xiaz|1 year ago|reply
You can, in theory, still program elegant little side projects with no pretense of business value or any customer besides, maybe, yourself.
I find that my work-coding and hobby-coding are different enough that they don't even feel like the same activity.
[+] [-] happyraul|1 year ago|reply
[+] [-] williamcotton|1 year ago|reply
These high-level abstractions are where I find the most joy from programming. Perhaps for some there is still some modicum of enjoyment in writing a for loop, but for most people twenty years into a career there's nothing but the feeling of grinding out the minutiae.
There's still a lot of room for better abstractions when it comes to interfacing with computing devices. I'd love to write my own operating system, CLI interface, terminal, and scripting language, etc from scratch and to my own personal preferences. I don't imagine I could ever have the time to handcraft such a vast undertaking. I do imagine that within a few decades I will be able to guide a computing assistant through the entire process and with great joy!
[0] https://github.com/williamcotton/guish
[+] [-] danjl|1 year ago|reply
[+] [-] frje1400|1 year ago|reply
I use an LLM to generate ideas, to rubber duck, to get a lead on unknowns, and to generate boilerplate occasionally. So I do everything except replace the coding part because that's what requires the most precision, and LLMs are bad at precision. And yet, people claim massive productivity gains in specifically coding. What am I missing?
[+] [-] umvi|1 year ago|reply
People that refuse to program with AI or intellisense or any other assistance are like carpenters who refuse to build furniture with power saws and power drills. Which is perfectly fine, but IMO that choice doesn't really affect the artistry of the final product
[+] [-] geor9e|1 year ago|reply
LLMs can do mindless drudgery just as well as I can, but in seconds instead of hours. There's nothing about remembering syntax, boilerplate code, forgetting a semicolon, googling the most common way of doing something, or combining some documentation to fill in the gaps that's even remotely "art" to me.
I never ask an LLM for what I'm artfully creating. I ask it for what I know it'll get instantly right, so I can move on to my next thought.
[+] [-] Bjorkbat|1 year ago|reply
Like, ideally, it shouldn't really take that much code to implement a thing. I like to think of programming as writing a bunch of levers, starting with simple levers for simple jobs, incrementally ratcheting up to larger levers lifting the smaller levers. Before too long, it'll feel as though you've written a lever capable of lifting the world... or at least one that makes an otherwise wickedly difficult project reasonably manageable.
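A contrived sketch of the lever idea (names invented for the example): each function is built only from the lever below it, so the top-level lever stays tiny.

```python
def tag(name, body):
    """Smallest lever: wrap a body in one element."""
    return f"<{name}>{body}</{name}>"

def item(text):
    """Bigger lever: a list item, built from the lever below."""
    return tag("li", text)

def bullet_list(texts):
    """Bigger still: a whole list from items."""
    return tag("ul", "".join(item(t) for t in texts))

def page(title, texts):
    """Top lever: one line, because the lower levers do the lifting."""
    return tag("h1", title) + bullet_list(texts)

print(page("Todo", ["write levers", "lift the world"]))
# <h1>Todo</h1><ul><li>write levers</li><li>lift the world</li></ul>
```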
If you say that LLMs make you more productive because it allowed you to finish a project that would otherwise take forever to write, then I'm skeptical that an LLM is the best solution. I mean, it's a solution at least, but I can't help but wonder if there's a better solution.
If the problem is that you lack the understanding to take on such a project, then perhaps what we really need are better tools for understanding. I myself have found that LLMs are great for gaining a quick understanding of languages that otherwise have sparse information for beginners, but I have to wonder if perhaps there's a better way.
If, on the other hand, the problem is that writing that much code would take forever, then I have to wonder if the real solution is that we need a better way to turn programming languages into patterns (levers) and turn said patterns into larger patterns (larger levers)
A partial solution works, but only partially well, and occasionally has consequences one has to reckon with
[+] [-] Kiro|1 year ago|reply
* They are great for overcoming procrastination. As soon as I don't feel like doing something or a task feels tedious I can just delegate it to an LLM. If it doesn't solve it outright it at least makes me overcome the initial feeling of dread for the task.
* They give me better solutions than I initially had in mind. LLMs have no problem adding laborious safeguards against edge-cases that I either didn't think of or that I assessed wouldn't be worth it if I did it manually. E.g. something that is unlikely and would normally go to the backlog instead. I've found that my post-LLM code is much more robust from the get go.
* They let me try out different approaches easily. They have no problem rewriting the whole solution using another paradigm, again and again. They are tireless.
* They let me focus on the creative parts that I enjoy. This surprised me since I've always thought of myself as someone who loves programming but it turns out that it is only a small subset of programming I love. The rest I'm happy to delegate away.
[+] [-] magicalhippo|1 year ago|reply
I am the same, and that's why many of my personal projects end up stranded. Once I've solved the tricky bit, the rest often isn't that motivating, as it's usually variations on a common theme.
I held off LLMs for a long time, but recently I've been playing with them. They can certainly confidently generate junk, but in most cases it's good enough. And like you say, they can be used as a driver to keep going. In that regard they can be useful.
[+] [-] willtemperley|1 year ago|reply
It's like having a junior dev that doesn't complain and gets the work done immediately.
AI code suggestions as I type are however a different beast. It's easy to introduce subtle bugs when the suggestion "kinda looks right" but in fact the LLM had zero understanding of the context because it can't read my mind.
[+] [-] verditelabs|1 year ago|reply
I sort of understand some of the vitriol that I see on HN but it is incredibly overblown. I don't really get a lot of the criticisms. LLMs aren't deterministic? Neither are humans. LLMs write bugs they can't fix? So do humans. LLMs are only good at being junior programmer copy paste machines? So are lots of humans.
My current project is training an LLM to do superoptimization and it's working exceedingly well so far. If you asked anyone on hacker news if that's a good idea, they'd probably say no.
[+] [-] steveBK123|1 year ago|reply
We had an overdemand for devs during late ZIRP early COVID leading to bootcamps and self taught pulling a lot of untrained into the industry. Many of them have left the industry.
Add to that the whole data science bubble and it’s bursting where we had tons of degrees and job openings for sort-of-devs. Lot of those jobs are gone now too.
Don’t forget the pull of “product management” and its demise outside big tech.
Now we have hiring freezes and juniors leaning on LLMs instead of actually spending an hour trying to solve problems.
Interesting times.
[+] [-] TrianguloY|1 year ago|reply
With AI I need to review the output, but not because there may be some issues I didn't notice. Rather, there may be issues the tool itself didn't notice. So it's less of an "apply this specific change" and more of an "apply some change".
[+] [-] danjl|1 year ago|reply
[+] [-] dakiol|1 year ago|reply
For work stuff? I couldn’t care less if the code comes from me, my colleagues or an LLM. As long as it works and it’s secure, we’ll ship it.
Folks, career != job.
[+] [-] pajeets|1 year ago|reply
We are at the cusp of creative destruction, and we are only getting started. Ironically, blue-collar jobs seem safe, as there hasn't been a humanoid-robot revolution; what I see in the white-collar field is what blue-collar workers experienced before the automation and offshoring of their jobs.
[+] [-] layer8|1 year ago|reply
[+] [-] icambron|1 year ago|reply
In an ideal world, our abstractions would be so perfect that there would be no mundane boiler-platey parts of a program; you'd use the abstractions to construct software from a high level and leave details be. But our abstractions are very far from perfect: there's all kinds of boring code you just have to write because, well, your program has to work. And generally that code is, if you look, most of your code. This is because making good abstractions is really hard, and constructing fresh ones is often more work than just typing out the different cases. If you think this is mistaken, I'd gently suggest you take a fresh look at your own code.
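As a hedged illustration of such "boring" code (the types and names here are invented for the example): given the two record definitions, there is essentially only one correct body for the conversion function, and that is exactly the line-by-line plumbing an LLM can fill in from context.

```python
from dataclasses import dataclass

@dataclass
class ApiUser:            # what the wire format hands us (invented example)
    user_name: str
    email_address: str
    is_active: bool

@dataclass
class User:               # what the domain layer wants
    name: str
    email: str
    active: bool

def to_domain(u: ApiUser) -> User:
    # Boring on purpose: the two type definitions above fully determine
    # this body -- in that sense it "couldn't be otherwise".
    return User(name=u.user_name, email=u.email_address, active=u.is_active)

print(to_domain(ApiUser("ada", "ada@example.com", True)))
# User(name='ada', email='ada@example.com', active=True)
```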
Anyway, that's where LLMs come in. They help write the boring code. They're pretty good at it in some cases, and very bad at it in others. When they're good at it, it's because what the code should do is sort of overspecified; it's clear from context what, say, this function has to do to be correct, and the LLM is able to see and understand that context, and thus generate the right code to implement it. This code is boring because it is in some vague sense unnecessary; if it couldn't be otherwise, why do you have to write it at all? Well you do, and the LLM has taken care of it for you.
You can call this work the LLM is displacing "art", but I wouldn't. It's more the detritus of art performed in a specific way, the manual process required to physically make the art given the tools available.
You could object that the LLMs will get better in the sense that not only that they will make fewer mistakes, but they will be able to take on increased scope, pushing closer to what I'd consider the "real" decisions of a program. If this happens -- and I hope it does -- then we should reevaluate our lofty opinions of ourselves as artists, or at least artists whose artistry is genuinely valuable.
[+] [-] Retr0id|1 year ago|reply
[+] [-] youssefabdelm|1 year ago|reply
Author needs to get into Bret Victor. Has no idea how much more fun he could be having.
Programming is a step on the way to access to the state space of information. When we get to that stage, programming will seem like a maze of syntax, that has its own idiosyncrasies that force you into corners or regions in the state space, just like any DAW plugin or 3D tool, or any tool at all that exists.
[+] [-] PUSH_AX|1 year ago|reply
I feel like you have to drop this kind of thinking to get anywhere past intermediate, not to mention you become a nightmare to anyone who has even a touch of pragmatism about them.
[+] [-] TrackerFF|1 year ago|reply
Sad to say, LLMs have made me a lazy coder for the past two years or so. But I do deliver/finish work much faster, so my incentives to keep using LLMs as coding co-pilots are overshadowing my incentives to write code the "old way".
And for what it is worth, sometimes these posts read like modern luddite confessions - the rants just sound too personal.
[+] [-] spaceman_2020|1 year ago|reply
Like, what great ideological purpose are you serving if you delegate writing Tailwind boilerplate to an LLM, or a basic axios GET request?
Does your code become impure if instead of copying code from the documentation you get an LLM to do it?
[+] [-] matwood|1 year ago|reply