faxmeyourcode | 2 months ago
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”
Software engineering pre-LLMs will never, ever come back. Lots of folks don't grasp that yet. What we're doing at the end of 2025 looks very different from what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.
maccard | 2 months ago
I use AI as a smart autocomplete - I've tried multiple tools on multiple models and I still _regularly_ have it dump absolute nonsense into my editor. In the best case it's gone on a tangent, but in the most common case it's assumed something (often directly contradicting what I've asked it to do), gone with it, and lost the plot along the way. Of course when I correct it, it says "you're right, X doesn't exist so we need to do X"…
Has it made me faster? Yes. Has it changed engineering? Not even close. There's absolutely no world where I would trust what I've seen out of these tools to run in the real world, even with supervision.
faxmeyourcode | 2 months ago
Assume you're writing code manually, and you personally make a mistake. It's often worthwhile to create a mechanism that prevents that class of mistake from cropping up again. Adding better LSP or refactoring support to your editor, better syntax highlighting, better type checking, etc.
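To make the "prevent that class of mistake" idea concrete, here's a minimal sketch (all names hypothetical) of the type-checking version: replacing a stringly-typed value with an Enum, so a typo becomes an immediate error rather than a silent bug that whack-a-mole debugging has to catch later:

```python
from enum import Enum

class Status(Enum):
    # Hypothetical example: valid states are now enumerated in one place.
    PENDING = "pending"
    DONE = "done"

def is_done(status: Status) -> bool:
    # A typo like Status.DNOE raises AttributeError at the call site
    # (and a type checker flags it before runtime), whereas the string
    # "dnoe" would just silently compare unequal.
    return status is Status.DONE

print(is_done(Status.DONE))     # True
print(is_done(Status.PENDING))  # False
```

The mistake class ("misspelled status string") is now structurally impossible, rather than something you have to remember to check for.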
That same exact game of whack-a-mole now has to be played for you and whatever agent you're building with. Some questions to ask: What caused the hallucination? Did you have the agent lay out its plan before it writes any code? Ask you questions and iterate on a spec before implementation? Have you given it all of the necessary tools, test harnesses, and context it needs to complete a request that you've made of it? How do you automate this so that it's impossible for these pieces to be missing for the next request? Are you using the right model for the task at hand?
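The "make it impossible for these pieces to be missing" step can be sketched as a preflight gate that refuses to dispatch a request until the checklist is satisfied. Everything here is hypothetical illustration, not a real agent framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRequest:
    # Hypothetical structure for an agent task; field names are invented.
    task: str
    plan: str = ""               # written plan, produced before any code
    spec_approved: bool = False  # user iterated on a spec and signed off
    tools: list[str] = field(default_factory=list)  # harnesses available

# Tools we decide every request must have wired up (illustrative set).
REQUIRED_TOOLS = {"test_runner", "type_checker"}

def preflight(req: AgentRequest) -> list[str]:
    """Return the checklist items still missing; an empty list means go."""
    missing = []
    if not req.plan:
        missing.append("no plan written before implementation")
    if not req.spec_approved:
        missing.append("spec not iterated/approved")
    missing += [f"missing tool: {t}"
                for t in sorted(REQUIRED_TOOLS - set(req.tools))]
    return missing

req = AgentRequest(task="refactor parser", tools=["test_runner"])
print(preflight(req))
```

Running this flags the missing plan, the unapproved spec, and the absent type checker before the agent touches any code; the point is that the gate runs automatically on every request, so forgetting it is no longer an option.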
geitir | 2 months ago