top | item 46719616

scroot|1 month ago

These posts claiming that "we will review the output," and that software engineers will still need to apply their expertise and wisdom to generated outputs, never seem to think this all the way through. Those who write such articles might indeed have enough experience and deep knowledge to evaluate AI outputs. But what of subsequent generations of engineers? What about the forthcoming wave of people who may never attain the required deep knowledge, because they've been dependent on these generation tools throughout their own education?

The structures of our culture, combined with what generative AI necessarily is, mean that expertise will fade generationally. I don't see a way around that, and I see almost no discussion of how to ameliorate the issue.

mpalmer|1 month ago

The solution is to find a way to use these tools in such a way that saves us huge amounts of time but still forces us to think and document our decisions. Then, teach these methods in school.

Self-directed, individual use of LLMs for generating code is not the way forward for industrial software production.

dfxm12|1 month ago

I don't understand how this is a new or unique problem. Regardless of when or where (or if!) my coworkers got their degrees, before or after access to AI tools, some of them are intellectually curious. Some do their job well. Some are in over their heads and improving. Some are probably better suited for other lines of work. It has always been an organizational function to identify and retain folks who are willing and able to grow into the experience and knowledge required for their current role and for future roles where they may be needed.

Academically, this is a non-factor as well. You still learned your multiplication tables even though calculators existed, right?

entropicdrifter|1 month ago

Agreed. This is a moral panic because people are learning and adapting in new ways.

Socrates (as recounted in Plato's Phaedrus) blamed writing for weakening the memory of the youth compared to the old methods of memorization.

8organicbits|1 month ago

Another thing I keep thinking about is that reviewing code is harder than writing it. A casual LGTM may pass for peer review, but applying deep context and checking for logic issues requires more thought. When I write code, I usually learn something about the software or the problem domain. "Writing is thinking" in a way that reading isn't.

entropicdrifter|1 month ago

Personally, I'm not as worried about this as an issue going forward.

When you look at technical people who grew up with the imperfect user interfaces/computers of the 80s, 90s and 00s before the rise of smartphones and tablets, you see people who have a naturally acquired knack for troubleshooting and organically gaining understanding of computers despite (in most cases) never being grounded in the low-level mathematical underpinnings of computer science.

IMO, the imperfections of modern AI are likely going to lead to a new generation of troubleshooters who will organically be forced to accumulate real understanding from a top-down perspective in much the same vein. It's just going to cost us all an absurd amount of electricity.

candiddevmike|1 month ago

This is why you aren't seeing GenAI used more in law firms. Lawyers can be sanctioned or even disbarred for filing hallucinated citations, so they're all extremely cautious about using these tools. Imagine if there were that kind of accountability in our profession.

echelon|1 month ago

The invention of calculators did not cause society to collapse.

Smart and industrious people will focus energy on economically important problems. That has always been the case.

Everything will work out just fine.

id|1 month ago

>software engineers will still need to apply their expertise and wisdom to generated outputs

And in my experience they don't really do that. They trust that the output will be good enough.