
AI and the ironies of automation – Part 2

256 points | BinaryIgor | 3 months ago | ufried.com | reply

120 comments

[+] ripe|3 months ago|reply
I really like this author's summary of the 1983 Bainbridge paper about industrial automation. I have often wondered how to apply those insights to AI agents, but I was never able to summarize them as well as the OP.

The Bainbridge paper itself is tough to read because it's so dense, but it's just four pages long and worth following along:

https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...

For example, see this statement in the paper: "the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have."

This summarizes the first irony of automation, which is by now familiar to everyone on HN: using AI agents effectively requires an expert programmer, but to build the skills of an expert programmer, you have to do the programming yourself.

It's full of insights like that. Highly recommended!

[+] yannyu|3 months ago|reply
I think it's even more pernicious than the paper describes, because cultural outputs like art and writing aren't done to solve a problem; they're expressions without a pure utility purpose. There's no "final form" for these things, and they change constantly, like language.

All of these AI outputs are both polluting the commons from which all the training data was pulled AND alienating the creators of these cultural outputs by displacing their labor and payment, which means that general-purpose models are starting to run out of contemporary, low-cost training data.

So either training data is going to get more expensive because you're going to have to pay creators, or these models will slowly drift away from the contemporary cultural reality.

We'll see where it all lands, but it seems clear that this is a circular problem with a time delay, and we're just waiting to see what the downstream effect will be.

[+] BinaryIgor|3 months ago|reply
Yes! One could argue that we might end up with programmers (experts) first going through training where they create software manually, before becoming operators of AI, and then regularly spending some of their working time (10-20%?) keeping those skills sharp by working on purely educational projects, the old-school way. But that raises the question:

Does it then really speed us up and generally make things better?

[+] fuzzfactor|3 months ago|reply
>skills, which later generations of operators cannot be expected to have.

Nothing rings more true than this. For decades now.

For a couple of years there I was able to get some ML together and it helped me get my job done; it never came close to AI, and I only had kilobytes of memory anyway.

By the time 1983 rolled around I could see the writing on the wall: AI was going to take over a good share of automation tasks in a more intelligent way by bumping expert systems up a notch. Sometimes this is going to be a quantum notch, and it could end up like "expertise squared" or "productivity squared" [0], at the rarefied upper bound: using programmable electronics to multiply the abilities of the true expert while the expert simultaneously utilizes their abilities to multiply the effectiveness of the electronics. Maybe only reaching the apex when the most experienced domain expert does the programming, or at least runs the show.

Never did see that paper, but it was obvious to many.

I probably mentioned this before, but that's when I really buckled down for a lifetime of experimental natural science across a very broad range of areas that would become more & more suitable for automation, while operating professionally within a very narrow niche where personal participation would remain the source of truth long enough for compounding to occur. I had already been a strong automation pioneer in my own environment.

So I was always fine regardless of the overall automation landscape, and spent the necessary decades, across thousands of surprising edge cases, getting an idea of how I would make it possible for someone else to accomplish some of these difficult objectives, or perhaps one day fully automate them, if the machine intelligence ever got good enough, along with the other electronics, which is one of the areas I was concentrating on.

One of the key strategies did turn out to be outliving those who had extensive troves of their own findings, but I really have not automated that much. As my experience level becomes less common, people seem to want me to perform in person with greater desire every decade :\

There's related concepts for that too, some more intelligent than others ;)

[0] With a timely nod to a college roommate who coined the term "bullshit squared"

[+] agumonkey|3 months ago|reply
I kinda fear that this is an economic stall, like a plane's: we're tilting upward so much that the underlying conditions are about to dissolve.

And I'd add that the recent LLM magic (I admit they've reached a maturity level that is hard to deny) is also a double-edged sword. They don't often create abstractions; they create a very well-made set of byproducts (code, config, docs, whatever else) to realize your demand. But right now people don't need to create new, improved methods, frameworks, or paradigms, because the LLM doesn't have our mental constraints (maybe later reasoning LLMs will tackle that, plausibly).

[+] frabonacci|3 months ago|reply
The author's conclusion feels even more relevant today: AI automation doesn't really remove human difficulty, it just moves it around, often making it harder to notice and riskier. And even after a human steps in, there's usually a lot of follow-up and adjustment work left to do. Thanks for surfacing these uncomfortable but relevant insights.
[+] Legend2440|3 months ago|reply
>the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have.

But we are in the later generation now. All the 1983 operators are now retired, and today's factory operators have never had the experience of 'doing it by hand'.

Operators still have skills, but it's 'what to do when the machine fails' rather than 'how to operate fully manually'. Many systems cannot be operated fully manually under any conditions.

And yet they're still doing great. Factory automation has been wildly successful, and it's why manufactured goods are so plentiful and inexpensive today.

[+] naveen99|3 months ago|reply
I mean, how did you get to be an expert programmer before? Surely it can't be harder to learn to program with AI than without it. It's written in the book of resnet.

You could swap out AI with Google or Stack Overflow or documentation or Unix…

[+] startupsfail|3 months ago|reply
The same argument was made about needing to be an expert assembly programmer to use C, and then the same for C and Python, then Python and CUDA, then Theano/TensorFlow/PyTorch.

And yet here we are, able to talk to a computer that writes PyTorch code that orchestrates the complexity below it. And it even talks back coherently sometimes.

[+] nuancebydefault|3 months ago|reply
The article basically discusses two new problems with using agentic AI:

- When one of the agents does something wrong, a human operator needs to be able to intervene quickly and provide the agent with expert instructions. However, since the experts no longer execute the bare tasks themselves, they quickly forget parts of their expertise. This means the experts need constant training, and hence will have little time left to oversee the agents' work.

- Experts must become managers of agentic systems, a role they are not familiar with, and hence they don't feel at home in their job. This problem is harder for people managers (of the experts) to recognize, since they rarely experience it first hand.

Indeed, the irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic, because they undermine the necessary human in the loop.

I think this all means that automation is not taking away everyone's job: it makes things more complicated, and hence humans can still compete.

[+] grvdrm|3 months ago|reply
Your first problem doesn't feel new at all. It reminded me of a situation several years ago, when what was previously an Excel report was automated into PowerBI. Great, right? Time saved. Etc.

But the report was very wrong for months, maybe longer. And since it was automated, the instinct to check and validate was gone. Tracking down the problem required extra work that hadn't been part of the Excel flow.

I use this example in all of my automation conversations to remind people to be thoughtful about where and when they automate.

[+] asielen|3 months ago|reply
The way you put that makes me think of the challenge younger generations are having with technology in general: kids who were raised on touch-screen interfaces vs. kids in older generations who were raised on computers that required more technical skill to figure out.

In the same way, when everything just works, there will be no difference; but when something goes wrong, the person who learned the skills before will have a distinct advantage.

The question is whether AI gets good enough that slowing down occasionally to find a specialist is tenable. It doesn't need to be perfect; it just needs to be predictably not perfect.

Experts will always be needed, but they may be more like car mechanics: there to fix hopefully rare issues and provide a tune-up, rather than building the cars themselves.

[+] delaminator|3 months ago|reply
I used to be a maintenance data analyst in a welding plant that welded about 1 million units per month.

I was the only person in the factory who was a qualified welder.

[+] layer8|3 months ago|reply
They also made the point that the less frequent failures become, the more tedious it is for the human operator to check for them, giving the example of AI agents producing verbose plans of what they intend to do that are mostly fine, but occasionally contain critical failures that the operator is supposed to catch.
[+] DiscourseFan|3 months ago|reply
That's how it tends to go, automation removes some parts of the work but creates more complexity. Sooner or later that will also be automated away, and so on and so forth. AGI evangelists ought to read Marx's Capital.
[+] z_|3 months ago|reply
This is a thought provoking piece.

“But at what cost?”

We've all accepted calculators into our lives as being faster and correct when utilized correctly (minus Intel tomfoolery), but in educational settings we emphasize the need to know how to do the math.

Any adult out of education will confirm that, when confronted with an unfamiliar math problem (or any rusty skill), there is a wait time to revive the ability.

Programming automation having that potential for skill decay AND being on the critical path is … worth thinking about.

[+] xorcist|3 months ago|reply
Comparisons with deterministic tools such as calculators will always lead astray. There is no comparable situation where, faced with a new problem, the AI will just give up. If there is ever a need for an expert, the need is always there, because there is no indication external to the process that the process will fail.
[+] singpolyma3|3 months ago|reply
Calculators don't do math, they do calculating. Which is to say, they don't think for you. There's not much value in being able to quickly compute some expression in a world with calculators, but there's huge value in knowing which numbers to feed into the calculation.
[+] eastbound|3 months ago|reply
We already have generational programming decay. At 25, kids fresh out of uni can't write a string.contains() routine; they all use .stream() in Java. A matter of generation, fashion, and which skills get learned. And as for programming C drivers: Apple is the last company to write a filesystem, and they already can't find anyone able to do it.
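
To make it concrete, here is the kind of routine I mean: a hand-rolled substring search next to the stream one-liner people reach for instead (a quick illustrative sketch of my own, not anything from the article):

    import java.util.List;

    public class ContainsDemo {
        // The "write it yourself" version: naive substring search.
        static boolean contains(String haystack, String needle) {
            if (needle.isEmpty()) return true;
            for (int i = 0; i + needle.length() <= haystack.length(); i++) {
                int j = 0;
                while (j < needle.length()
                        && haystack.charAt(i + j) == needle.charAt(j)) {
                    j++;
                }
                if (j == needle.length()) return true;
            }
            return false;
        }

        public static void main(String[] args) {
            System.out.println(contains("automation", "oma")); // true

            // The idiomatic one-liner the comment alludes to:
            List<String> words = List.of("irony", "automation", "skill");
            System.out.println(words.stream().anyMatch(w -> w.contains("oma"))); // true
        }
    }
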
[+] jiehong|3 months ago|reply
This irony of automation has been dealt with in the aviation industry for years: autopilots can actually land the plane in many cases, and they do fly the plane for most of the cruise.

Yet pilots are constantly trained on actual scenarios and are expected to perform manual landings monthly (and manual takeoffs too).

This ensures pilots maintain their skills, while the auto pilot helps most of the time.

On top of that, flight controls are often already semi-automatic, i.e. assisted (but not by LLMs!), so it's a complex comparison.

[+] libraryofbabel|3 months ago|reply
Yes, but (to write the second half of your post for you!) regulation and incentives are very different in the aviation industry, because safety and planning for long-tail risks are paramount. Therefore airlines can afford to have their pilots spend thousands of hours training on manual control in various scenarios. By contrast, I don't think the average software development org will encourage its engineers to hand-roll a sizable proportion of their code if (still a big if) there are major productivity costs in doing so. Rushing the Next Big Feature out the door will almost always beat out long-term investment in dev training, unfortunately.

Don’t get me wrong - manual practice is in some sense the correct solution, and I plan to try and do it myself in the next decade to make sure my skills stay sharp. But I don’t see the industry broadly encouraging it, still less making it mandatory as aviation does.

Addendum: as you probably know, even in aviation this is hard to get right. (This is sometimes called the "children of the magenta" problem, but it's really Bainbridge again.) The most famous example is perhaps Air France Flight 447 [0], where the pilots put the plane into a stall at 35,000 ft after reacting poorly when the autopilot disconnected, and did not even realize they had stalled the plane. Of course, that crash itself led to more regulations around training for manual scenarios.

[0] https://admiralcloudberg.medium.com/the-long-way-down-the-cr...

[+] Animats|3 months ago|reply
Bainbridge [1] is interesting, but dated. A more useful version of that discussion from slightly later is "Children of the Magenta", [2] an airline chief pilot talking to his pilots about cockpit automation and how to use it. Requires a basic notion of aviation jargon.

There's been progress since then. Although the details are not widely publicized, enough pilots of the F-22, F-35, or the Gripen have talked about what modern fighter cockpit automation is like. The real job of fighter pilots is to fight and win battles, not drive the airplane. A huge amount of effort has been put into simplifying the airplane driving job so the pilot can focus on killing targets. The general idea today is that the pilot puts the pointy end in the right direction and the control systems take care of the details. An F-22 pilot has been quoted as saying that the F-22 is far less fussy than a Cessna as a flying machine.

For the F-35, which has a VTOL variant (B) and a carrier-landing variant (C), much effort was put into making VTOL landings and carrier landings easy. Not because pilots can't learn to do them, but because training tended to obsess over those tasks. The hard part of the Harrier (the only previous successful VTOL fighter) was learning to land the unstable beast without crashing. There were still a lot of Harrier crashes.

The hard part of Naval aviator training is landing on a carrier deck. Neither of these tasks has anything to do with the real job of taking a bite out of the enemy, but they consumed most of the training time. So, for the F-35, both of those tasks have enough computer-added stability to make them much easier. One of the stranger features of the F-35 is that it has two main controls, called "inceptors", which correspond to throttle and stick. In normal flight, they mostly work like throttle and stick. But in low-speed hover, the "throttle" still controls speed while the "stick" controls attitude, even though the "stick" is affecting engine speed and the "throttle" is affecting control surfaces in that mode. So the pilot doesn't have to manage the strange transitions of a VTOL craft directly.

This refocuses pilot training on using the sensors and weapons to do something to the enemy. Classic training is mostly about the last few minutes of getting home safely.

As AI for programming advances, we should expect to devote more user time to analyzing the tactical problem, rather than driving the bus.

[1] https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...

[2] https://www.youtube.com/watch?v=5ESJH1NLMLs

[+] everdrive|3 months ago|reply
I can feel the skill atrophy creeping in. My very first instinct is to go use the LLM. Much like forcing yourself to exercise, eat right, and avoid social media and distractions, I think this will be a new modern skillset: do you have the discipline to avoid becoming useless without an LLM? A small few will be great at this, the middle of the bell curve will do "well enough," and you know the story for the rest.
[+] andy99|3 months ago|reply
I’ve been using LLMs to code for some time and I look at it differently.

I ask myself if I need to understand the code, and if the answer is yes, I don't use an LLM. It's not a matter of discipline; it's a sober view of what the minimal amount of work for me is.

[+] jason_oster|3 months ago|reply
I have wasted too much time wishing I could find the motivation to work on coding projects. And there are times that I was able to force myself to just get started. Spin up the flywheel and let momentum carry me.

But I'm talking about a consistent problem for more than 25 years. AI agents didn't do this to me. At least in my anecdotal case, this isn't atrophy. It's just the way it has always been. Now I actually have much less friction in getting a project going. I can just type a few of my thoughts at an agent and away it goes. The momentum is almost free, now.

[+] delaminator|3 months ago|reply
I haven't written any code in 6 months. But I can still remember how to code in 6502 machine code from the 1980s.
[+] vips7L|3 months ago|reply
This just sounds like addiction to the dopamine of instant gratification.
[+] didibus|3 months ago|reply
A good read, but it reminds me that people see the programmer as being there to identify when the AI makes an error.

But in my use of AI agents, as a programmer and for other work, I would say that while yes, you also have to look for mistakes and errors, most of my time is still spent programming the AI.

The AI agent has no idea what it must produce, what it's meant to do, when it can alter something existing to enable something new, etc.

And this is true for both functional and non-functional requirements.

It's unlike traditional manufacturing, where you've already built your manufacturing pipeline for a precise output: you've got your CAD designs done, you've run your simulations, and you've calibrated everything for what you want.

So most of the work remains that of programming the machine.

[+] dsjoerg|3 months ago|reply
> Typically, before people are put in a leadership role directing humans, they will get a lot of leadership training teaching them the skills and tools needed to lead successfully.

I question this.

[+] sublimefire|3 months ago|reply
Good discussion of the paper and its observations and ironies. A thing to note is that we already have software factories, with a bunch of automation in place and folks trained to deal with incidents. Pools of agents just elevate what we currently have, but the tools are still severely lacking. IMO the tools need to improve for us to move forward, as it is difficult to observe the decisions of agents when they fall apart.

Also, by and large, the current AI tools are not in the critical path yet (well, except those drones that lock onto targets to eliminate them in case of interference, and even then it's ML, not agents). Agents cannot be in that path yet due to predictability challenges.

[+] jinwoo68|3 months ago|reply
"Most companies are efficiency-obsessed."

But what most of them do is not become more efficient but appear more efficient. The main reason they are so obsessed with AI is that they want to send the signal that they are pursuing efficiency, whether they succeed or not.

[+] theologic|3 months ago|reply
Peter Drucker popularized the phrase "Efficiency is doing things right; effectiveness is doing the right things."

Being credibly efficient at doing the wrong things turns out to be a massive issue inside most companies. What's interesting is that I think AI provides an opportunity to be massively more effective, because with the right LLM, trained right, you can explore a variety of scenarios much faster than you can by yourself. However, we hear very little about this as a central thrust of how to bring AI into the workplace.

[+] steveBK123|3 months ago|reply
I think for most non-coding tasks we are still in the "convincing liar" stage, and not even at the "it's right 99.9% of the time and humans need to quickly detect the 0.1% errors" problem. I think a lot of the HN crowd misses this because they are programmers using it for programming.

I work at a firm that has given AI tooling to non-developer, data-analyst type people who otherwise live & die in Excel. Much of their day job involves reading PDFs. I occasionally use some of the firm's AI tooling for PDF summarizing/parsing/interrogation/etc. tasks and remain consistently underwhelmed.

Stuff like taking 10 PDFs, each with a simple 30-row table with the same title in each file: it ends up puking on 3 or 4 of the 10 with silent failures. Row drops, duplicated data, etc. When you point out that it missed rows, it goes back and duplicates rows to get to the correct row count.

Using it to interrogate standard company filing PDFs that it had been specially trained on, it gave very convincing answers that were wrong, because it had silently truncated its search context to only recent years' financial filings. Nowhere did it show this limitation to the user. It only became apparent after researching the 4th or 5th company, when it decided to caveat its answer with its knowledge window. This invalidated the previous answers, since questions such as "when was the first X" or "have they ever reported Y" were operating on incomplete information.

Most users of these tools are not that technical, and they are going to be much more naive, taking the answers as fact without considering the context.

[+] Terr_|3 months ago|reply
I'm convinced the best use of these systems will be an explicit two-phase process where they just help people prototype, and see and learn how to command regular software.

For example, imagine describing what files you want to find, and getting back a command-line string of find/grep piping. It doesn't execute anything without confirmation, it doesn't "summarize" the results, it's just a narrow tutor to help people in a translation step. A tool for learning that, ideally, eventually puts itself out of a job.

Returning to your PDF scenario: The LLM could help people weave together regular tools of "find regions with keywords" and "extract table as spreadsheet" and "cross-reference two spreadsheets using column values", etc.
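
A minimal sketch of that confirmation gate, where proposeCommand() is a hypothetical stand-in for the LLM call; nothing runs until the user says yes:

    import java.util.Scanner;

    public class ConfirmThenRun {
        // Hypothetical stand-in for an LLM call that translates a
        // plain-English request into a shell pipeline.
        static String proposeCommand(String request) {
            return "find . -name '*.log' -mtime -7 | xargs grep -l ERROR";
        }

        public static void main(String[] args) throws Exception {
            String cmd = proposeCommand("log files from the last week mentioning ERROR");

            System.out.println("Proposed command:\n  " + cmd);
            System.out.print("Run it? [y/N] ");
            String answer = new Scanner(System.in).nextLine().trim();
            if (!answer.equalsIgnoreCase("y")) {
                System.out.println("Aborted; nothing was executed.");
                return;
            }

            // Only after explicit confirmation does anything execute, and the
            // user has seen (and can learn from) the exact pipeline.
            new ProcessBuilder("sh", "-c", cmd).inheritIO().start().waitFor();
        }
    }
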

[+] Animats|3 months ago|reply
There are a few issues here.

It's useful to think about AI-driven coding assistants in terms of the SAE levels of automation for automatic driving.

- Level 0 - totally manual

- Level 1 - a bit of assistance, such as cruise control

- Level 2 - speed and steering control that requires constant supervision by a human driver. This is where most of the commercial systems are now.

- Level 3 - Level 2, but reliable enough that the human driver doesn't need to supervise constantly. Able to bring the vehicle to a safe stop by itself. Mercedes-Benz Drive Pilot is supposedly Level 3. Handoff between computer and human remains a problem. The human is still liable for accidents.

- Level 4 - Full automation, but not under all conditions. Waymo is Level 4. Human just selects the destination.

- Level 5 - Full automation, at least as capable as human drivers under all conditions. Not yet seen.

What we're looking at with the various programming-assistance AI systems is Level 2 or Level 3 competence. These are the most troublesome levels. Who's in charge? Who's to blame?

The need for such programming assistance systems may be transient, as it clearly is in automotive. Eventually, everybody in automotive will get to Level 4 or better, or drop out due to competitive pressure.
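
To make the "who's in charge" question concrete, the ladder can be encoded as a toy table (my own sketch, not from the SAE spec). The supervision burden only disappears at Level 4, which is exactly what makes Levels 2 and 3 the awkward middle:

    // Toy encoding of the SAE ladder as applied to coding assistants.
    // The constantSupervision flag is what makes Levels 2-3 awkward:
    // the human is still liable but is no longer doing the work.
    enum AutomationLevel {
        L0("totally manual",                       false, true),
        L1("a bit of assistance",                  false, true),
        L2("assistant acts, human watches always", true,  true),
        L3("mostly reliable, human on standby",    false, true),
        L4("full automation in bounded domains",   false, false),
        L5("full automation under all conditions", false, false);

        final String description;
        final boolean constantSupervision; // must a human watch continuously?
        final boolean humanLiable;         // does a human answer for failures?

        AutomationLevel(String description, boolean constantSupervision,
                        boolean humanLiable) {
            this.description = description;
            this.constantSupervision = constantSupervision;
            this.humanLiable = humanLiable;
        }
    }
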

[+] analog8374|3 months ago|reply
I spent years creating automated drawing machines. But I can still draw better than any of them with my hand. Not as quickly tho.
[+] demorro|3 months ago|reply
These observations were made 40 years ago. I suspect we have solved many of these problems now and have close to fully automated manufacturing and flight systems, or close enough that the training trade-off is worth it.

However, this took 40 years and actual fatalities. We should keep that in mind when we're pushing the AI acceleration pedal down ever harder.

[+] alexgotoi|3 months ago|reply
The automation irony: we build AI to reduce human workload, but end up creating systems that need constant human supervision anyway. Classic.

What's interesting is this mirrors every automation wave. We thought assembly lines would eliminate human work - instead they just changed what work meant. AI's doing the same, just at software speed instead of industrial speed.

Long-term I'm optimistic: automation creates more than it destroys, always has. Short-term though? A messy transition for anyone whose job is 'being the interface layer'.

Will include this thread in my next issue of https://hackernewsai.com/

[+] wesammikhail|3 months ago|reply
Out of curiosity, does anyone know of a good writeup / blog post by someone in the industry about reducing orchestration error rates? I would love to read more about the topic and am looking for a few good resources.
[+] dloranc|3 months ago|reply
What do you mean by orchestration?
[+] bdangubic|3 months ago|reply
> “If it does not work properly, you need better prompts” is the usual response if someone struggles with directing agents successfully

so much this!

[+] throwaway613745|3 months ago|reply
If your process is shit, you're just automating shit at lightning speed.

If you're bad at your job, you're automating it at lightning speed.

You need to have good business processes and be good at your job without AI in order to have any chance in hell of being successful with it. The idea that you can just outsource your thinking to the AI and no longer need to actually understand or learn anything is complete delusion.