
wvoch235 | 1 year ago

And if the people they bring in, whoever they are, aren't using AI past a certain point, that company isn't going to make it either.

OP is talking about learning. We all did it at some point, and yes, you were a liability too. Learning through an LLM is the closest you can get to 1:1 training without actually having a trainer.

The article, though, also largely talks about more experienced engineers not using it as a tool to increase their leverage. In that case it's not all that different from handing a task to a junior engineer and reviewing their work. Even a junior engineer who is well read on the latest research.

But most people don't do research. Considering that most software engineers mainly write CRUD or mobile apps these days, this liability argument is looking shaky.

Liability from what? Taking out staging, or even shipping a bug to prod? Should you be learning spaceflight control systems as you code them? No. Should a junior engineer use it to learn how to do a left join in code that's getting reviewed anyway? Yes, and that's probably faster than polling team resources, just the same as hitting Google.
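For the sake of illustration, the kind of left join a junior might ask an LLM about can be sketched in a few lines of Python with sqlite3 (the table and column names here are made up):

```python
import sqlite3

# In-memory database with two tiny example tables (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# LEFT JOIN keeps every user, even those with no matching order;
# unmatched rows get NULL (None in Python) for the order columns.
rows = conn.execute("""
    SELECT users.name, orders.total
    FROM users
    LEFT JOIN orders ON orders.user_id = users.id
""").fetchall()

print(rows)  # [('ada', 9.99), ('grace', None)]
```

The point isn't the SQL itself; it's that this is exactly the low-stakes, reviewable kind of question where LLM help is cheaper than interrupting a teammate.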

Startups, for instance, are usually more incentivized to move fast than to deliver a bug-free product. Large companies usually are too. That's why software always needs updates.

I am sure we can all enumerate fields and projects where it's problematic, even dangerous to human life, to accept AI output. But the vast majority of people on this site don't work in those fields, and companies in those fields should already have (and likely do have) control processes in place.

If you're a company with no control processes, and you're terrified by the prospect of AI code because of legitimate danger to your users, and you think it's you, the engineer, holding back the gates… your company is also not going to make it.
