top | item 43723135

AI is turning us into glue

65 points | lswainemoore | 10 months ago | lincoln.swaine-moore.is

58 comments


rglover|10 months ago

Really enjoyed this post.

> Putting aside existential risks, I don't see a future where a lot of jobs don't cease to exist.

I'm personally betting on the plateau effect with LLMs. There are two plateaus I see coming that will require humans to fix no matter what we do:

1. The LLMs themselves plateau. We're already seeing new models get worse, not better, at writing code (e.g., Sonnet 3.5 seems to be better than 3.7 at coding). This could be a temporary fluke, or an inherent reality of how LLMs work (where I tend to land).

2. Humans will plateau. First, humans will see their skills atrophy as they defer more and more to AI rather than struggling to solve problems (and, by extension, learning new things). Second, humans will be disincentivized to create new forms of programming and write about them, so eventually the inputs to the LLM become stale.

Short-term, this won't appear to be true, but long-term (on the author's 10+ year scale), it will be frightening. Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways that the few remaining humans responsible for maintaining them don't understand (and can't prompt their way out of).

And that's where I think the future work will be: in fixing or replacing systems unintentionally being broken by the use of AI. So, you'll either be an "AI mess fixer" or more entrepreneurial doing "artisan, hand-crafted software."

Either of those I expect to be fairly lucrative.

sarreph|10 months ago

On your second point - I don’t agree that humans in general will plateau. I think instead the _gap_ between humans who crave to create and learn, and those who are ostensibly potatoes, will be magnified.

I see it a bit like the creator economy, where you have these maker vs consumer tranches of people.

Havoc|10 months ago

> eventually the inputs to the LLM become stale.

Seems plausible to me that they could just keep writing Python 3.13 till the end of time.

Take assembly, say: we didn't stop writing it because it stopped working.

As a functional building block, programming seems feature complete.

7speter|10 months ago

The thing about being an “AI mess fixer” will be that you’ll still need experience that fuels the creativity to solve problems generated by the AI.

abletonlive|10 months ago

This feels like an "if I say it enough, people will agree and it will be true" kind of comment. Almost none of these propositions check out or even make sense. I literally can't distinguish between Reddit commenters and HN commenters anymore. An unoriginal HN complaint, but frustrating to witness over time.

1. Plateau != regress. Why point to regressions as evidence of a plateau? Why only look at a single model and minor version? We are clearly still in AI's infancy; regressions are to be expected from time to time.

2. Where's the evidence of this? Humans are using AI to branch out and dip their toes into things that they wouldn't have fathomed doing before. How would that lead you to "disincentivized"?

> Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways

So in this fantasy, everybody is vibe coding resilient code/systems that last for 10+ years, everybody stops learning how to code, and after a decade or so things start breaking and everybody is in trouble? This world you're creating wouldn't stand up to the critique of sci-fi readers.

I'm sorry but if we can vibe code systems that last 10+ years and nobody is learning anything because they are performing so well, then that's a job well done by OpenAI and co. We're probably set as a civilization.

myhf|10 months ago

Why do articles like this always say things like "I've used LLMs to get some stuff done faster" and then go on to describe how LLMs get them to spend more time and money to do a worse job? You don't need LLMs to frustrate you into lowering your standards, the power to do that was within you all along.

arctek|10 months ago

Has anyone actually measured this yet?

Much of this feels like the studies on people who take mushrooms, for example: they feel like they are more productive, but when you actually measure it, they aren't. It's just their perception.

To me the biggest issue is that search has been gutted out and so for many questions the best results come from asking an LLM. But this is far different from using it to generate entire codebases.

Animats|10 months ago

The "glue" comment here reflects a view from someone who does mostly software work. That's been the situation since mechanized production lines were first built. The job of the humans is not direct labor. It's to monitor the machinery, restart it, and fix it.

Power looms were probably the first devices like this. Somebody has to thread the loom, but then it mostly runs by itself.[1] Production lines with lots of stations will have shutdowns, where a drill bit broke or there's dirt on a lens or some consumable ran out. Exceptions are hard to automate, and factory design focuses on minimizing exceptions and bypassing stuck cells.

It's helpful to understand how a factory works when watching how software development is changing. There's commonality.

So the phrase "vibe coding" is only two months old.[2] How widespread will it be in two years?

[1] https://www.youtube.com/watch?v=WyRW9XOuUdU

[2] https://en.wikipedia.org/wiki/Vibe_coding

spacebanana7|10 months ago

AI is unlikely to take away jobs from software engineers. There’s no natural upper bound on the amount of software people can consume - unlike cars, food or houses.

Software engineers ultimately are people with “will to build”. Just as hedge fund people have a “will to trade”. The code or tooling is just a means to an end.

spencerflem|10 months ago

Huh, I have the opposite feeling: that people already have most of the software they want at this point.

turtlebits|10 months ago

"I like fixing thorny bugs". Not me. Any tool that can get me to the solution faster is always welcome. IME, AI does well handling the boring parts.

minimaxir|10 months ago

It depends on the thorny bug. I like fixing bugs where the solution is to implement something clever and I learn something in the process. I don't like fixing bugs where I forgot a comma or made a subtle off-by-one error.

Most thorny bugs fall into the latter in my experience.

dehrmann|10 months ago

And both are valid. Some people like building new products and features, some would rather fix existing ones.

felipeccastro|10 months ago

I’ve been having a different experience. Asking Claude to fix the bug again and again is annoying, so I’m still doing "pull pieces at a time, understanding each," and I fix the bug myself when it’s faster to do so. In fact, most of the time I’ve been using the LLM to build tiny libraries for me, to avoid needing the LLM in the running app. Kind of like Stack Overflow on steroids. I don’t feel like the glue; I just have superior tooling to get the info I need fast.

eximius|10 months ago

I'm still pretty pessimistic on all this. Just today, I had what should have been an obvious win for an LLM coding assistant. I was writing a Go function that converts one very long struct into a second very long struct. The transformation was almost entirely wrapping the fields of the first struct in a wrapper in a completely rote way: if FieldA was an int on src, I wanted dest{ FieldA: Wrapper{ Value: src.FieldA, Ratio: src.FieldA/Constant }, ... }.

It couldn't do it. I prefilled all the fields (hundreds) and told it just to populate them, but it tried to hallucinate new fields, or it would do one or two, then both delete the fields I had added and add a comment saying "then do the rest". I tried a bunch of different prompts.

I can see how some vibe coders could make useful things, but most of my attempts to use LLMs in anything not-from-scratch are exercises in frustration.
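For concreteness, the rote transformation the parent describes looks roughly like this; all struct, field, and constant names below are hypothetical stand-ins (the real structs had hundreds of fields), and factoring the per-field work into a helper is one way to sidestep the LLM entirely:

```go
package main

import "fmt"

// Constant is a hypothetical divisor from the parent's example.
const Constant = 4

// Src stands in for the very long source struct.
type Src struct {
	FieldA int
	FieldB int
}

// Wrapper pairs the original value with a derived ratio.
type Wrapper struct {
	Value int
	Ratio int
}

// Dest stands in for the very long destination struct.
type Dest struct {
	FieldA Wrapper
	FieldB Wrapper
}

// wrap is the rote per-field transformation: every field gets the
// same treatment, so a small helper removes most of the typing.
func wrap(v int) Wrapper {
	return Wrapper{Value: v, Ratio: v / Constant}
}

func main() {
	src := Src{FieldA: 8, FieldB: 12}
	dest := Dest{
		FieldA: wrap(src.FieldA),
		FieldB: wrap(src.FieldB),
	}
	fmt.Println(dest.FieldA, dest.FieldB) // {8 2} {12 3}
}
```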

dinfinity|10 months ago

Which one?

Can we please make it a convention that whenever anybody posts anything about some LLM experience they had, that they include which model and UI driving it they used?

Parent's post is like saying: I tried to send an email with a new email program and it didn't work.

m4rtink|10 months ago

There is a story about this by Stanislaw Lem: "Elsewhere Tichy meets a race of aliens (called "Indioci" in the Polish original, "Phools" in the English translation) who, desiring perfect harmony in their lives, entrust themselves to a machine, which converts them into shiny discs to be arranged in pleasant patterns across their planet." - https://en.m.wikipedia.org/wiki/Ijon_Tichy#Stories

(Not glue, but close enough.)

cadamsdotcom|10 months ago

Nothing's stopping anyone from fixing thorny bugs for fun! And hobby computing is more accessible now than ever.

If you build stuff for others AI (mostly) removes typing and debugging from the equation; that frees you to think harder about what you’re building and how to make it most useful. And because you’re generally done sooner you can get the thing into your users’ hands sooner, increasing the iterations.

It’s win-win.

newbie578|10 months ago

Well written, I agree with the basic premise of the idea, I just think the changes will be even more dramatic.

A lot of us are stationary, thinking the stuff and people around us will be automated, but not us: "I am special." Well, I fear a lot of people will find out just how special they unfortunately are (not).

Havoc|10 months ago

> I don't see a future where a lot of jobs don't cease to exist.

And the complete lack of a game plan at a societal level is starting to get worrying.

If we’re going to UBI this, then we’re going to need a bit more of a plan than some toy studies.

thomastraum|10 months ago

These well-articulated articles will soon turn into pure despair. It happened to me.

m0llusk|10 months ago

Really interesting that I have so far had hardly any use for code generators except when some glue was needed. Possibly this new revolution is headed in multiple conflicting directions simultaneously?

Gigachad|10 months ago

On the plus side, at least when I'm old and not so mentally sharp, my personal AI can tell me when I'm being scammed or why the wifi isn't working.

plsbenice34|10 months ago

You really don't believe it will be the one scamming you?

stavros|10 months ago

So there's a wrong way and a right way to code with LLMs. The wrong way is to ask the LLM to write a bunch of code you don't understand, and to keep asking it to write more and more code to fix the problems each iteration has. That will lead to a massive tower built on sand, where everything is brittle and collapses at the slightest gust of wind.

The right way is to have it autocomplete a few lines at a time for you. You avoid writing all the boilerplate, you don't need to look up APIs, you get to write lines in a tenth of the time it would normally take, but you still have all the context of what's happening where. If there's a bug, you don't need to ask the LLM to fix it, you just go and look, you spot it, and you fix it yourself, because it's usually something dumb.

The second way wins because you don't let the LLM make grand architectural choices, and all the bugs are contained in low-level code, which is generally easy to fix if the functions, their inputs, and their outputs are sane.

I like programming as much as the next person, but I'm really not lamenting the fact that I don't have to be looking up parameters or exact function names any more. Especially something like Cursor makes this much easier, because it can autocomplete as you type, rather than in a disconnected "here's a diff you won't even read" way.

skydhash|10 months ago

The best completions are those that keep you from mistyping variable names or figure out some dependency for you (automatically importing modules, restricting to the current scope/structure). Those have been solved for decades now. And you can get a dumb one by making a list of all symbols in the project directory, removing common keywords and punctuation, and doing some kind of matching for filtering. The other end of the spectrum is the kind of code indexing IDEA and LSP servers do.
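The "dumb" completer described here, sketched minimally in Go; the identifier regex and the keyword stop list are simplified assumptions, not how any real editor does it:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strings"
)

// identRe matches anything identifier-shaped in the source text.
var identRe = regexp.MustCompile(`[A-Za-z_][A-Za-z0-9_]*`)

// keywords is a (deliberately incomplete) stop list of common keywords.
var keywords = map[string]bool{
	"func": true, "return": true, "if": true, "else": true,
	"for": true, "var": true, "package": true, "import": true,
}

// naiveComplete lists every identifier in source that starts with
// prefix, skipping keywords and duplicates. It is pure prefix
// matching over a symbol dump, with no semantic understanding.
func naiveComplete(source, prefix string) []string {
	seen := map[string]bool{}
	var out []string
	for _, tok := range identRe.FindAllString(source, -1) {
		if keywords[tok] || seen[tok] || !strings.HasPrefix(tok, prefix) {
			continue
		}
		seen[tok] = true
		out = append(out, tok)
	}
	sort.Strings(out)
	return out
}

func main() {
	src := "func fooBar() { fooBaz(); barQux() }"
	fmt.Println(naiveComplete(src, "foo")) // [fooBar fooBaz]
}
```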

Then you get into code boilerplate, and if you find yourself writing it a lot, that's a signal to start refactoring, add some snippets to your editor (error handling in Go), write some code generators, or lament the fact that your language can't do metaprogramming.

> but I'm really not lamenting the fact that I don't have to be looking up parameters or exact function names any more.

That's a reckless attitude to have, especially if the function has a drastic behavior switch, like mutating an argument versus returning a fresh copy. All you do is assume it behaves a certain way, while the docs you haven't read carry the related warning label.

eacapeisfutuile|10 months ago

That level of autocomplete had been around for many years before LLMs, for pretty much any widely used language.

PorterBHall|10 months ago

But I thought it was going to turn us into paper clips.

garof|10 months ago

Nothing about horses?

matthewmueller|10 months ago

Very well-articulated article on a shared feeling!

holtkam2|10 months ago

Hot take: we were already glue. We take in ideas / directives from product people and turn that into instructions for a computer to use to build a software package.

The only difference in a “vibe coding” world is that now these “instructions” that we pass to the computer are in English, not Java.

eacapeisfutuile|10 months ago

Not entirely, because the snippets you get when vibe coding are derived from actual coding.

blogabegonija|10 months ago

Guys like these need DMT. Srsly.

LoganDark|10 months ago

I would highly caution against recommending DMT to random strangers. It is not for the faint of heart and it is also nowhere near a magic fix-all. Also, its routes of administration mostly suck (smoking/vaping or MAOIs).