top | item 46330411


faxmeyourcode | 2 months ago

While I agree with the premise of the article, even if it was a bit shallow, this claim made at the beginning is also still true:

> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”

Software engineering pre-LLMs will never, ever come back. Lots of folks are not understanding that. What we're doing at the end of 2025 looks so different from what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.


maccard | 2 months ago

Does it?

I use AI as a smart autocomplete - I've tried multiple tools on multiple models and I still _regularly_ have it dump absolute nonsense into my editor - in the best case it's gone on a tangent, but in the most common case it's assumed something (often times directly contradicting what I've asked it to do), gone with it, and lost the plot along the way. Of course when I correct it it says "you're right, X doesn't exist so we need to do X"…

Has it made me faster? Yes. Has it changed engineering? Not even close. There's absolutely no world where I would trust what I've seen out of these tools to run in the real world, even with supervision.

faxmeyourcode | 2 months ago

Unfortunately this is a skill issue. And it's a totally different skill than reading and writing code, building solid systems, and software engineering in general. That is annoying, but it's where we're currently at.

Assume you're writing code manually, and you personally make a mistake. It's often worthwhile to create a mechanism that prevents that class of mistake from cropping up again. Adding better LSP or refactoring support to your editor, better syntax highlighting, better type checking, etc.

That same exact game of whack-a-mole now has to be played for you and whatever agent you're building with. Some questions to ask: What caused the hallucination? Did you have the agent lay out its plan before it writes any code? Ask you questions and iterate on a spec before implementation? Have you given it all of the necessary tools, test harnesses, and context it needs to complete a request that you've made of it? How do you automate this so that it's impossible for these pieces to be missing from the next request? Are you using the right model for the task at hand?
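The "plan before code, then run it through a test harness" workflow described above can be sketched as a tiny guardrail. This is purely a hypothetical illustration, not any real agent framework: `AgentSession`, `propose_plan`, and `run_checks` are made-up names, and the "agent" here is just the caller.

```python
# Hypothetical sketch: enforce plan-before-code and a test gate
# for an agent session. Names and structure are illustrative only.
from dataclasses import dataclass, field


@dataclass
class AgentSession:
    plan: list[str] = field(default_factory=list)
    code: list[str] = field(default_factory=list)

    def propose_plan(self, steps: list[str]) -> None:
        # The agent must lay out a non-empty plan first.
        if not steps:
            raise ValueError("agent must produce a non-empty plan")
        self.plan = list(steps)

    def write_code(self, snippet: str) -> None:
        # Refuse any code written before a plan exists.
        if not self.plan:
            raise RuntimeError("no plan yet; refusing to write code")
        self.code.append(snippet)


def run_checks(session: AgentSession, tests) -> bool:
    # Run the caller-supplied test harness against the generated code;
    # a failure means "iterate", not "merge".
    return all(check(session.code) for check in tests)
```

The point isn't the ten lines of Python; it's that the missing pieces (plan, harness, context) become structurally impossible to skip, which is the "mechanism that prevents that class of mistake" move applied to the agent itself.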

geitir | 2 months ago

When you have that hair-raising "am I crazy, why are people touting AI" feeling, it's good to look at their profile. Oftentimes they're caught up in some AI play. Also it's good to remember YC is heavily invested in gen AI, so this site is heavily biased.