I feel like a lot of people are forgetting how good LLMs are at small, isolated tasks because of how much better they've gotten at larger ones. The best experiences I've had with LLMs all involve sketching out the interfaces for components I need and letting the model fill in the implementation. That mentality also rewards choices that lead to good, maintainable code: you give functions good names so the AI knows what to implement. You make the code you ask it to generate as small as possible to minimize the chance of it hallucinating or going off the rails. You stub simple APIs for the same reason. And (unsurprisingly) small, well-defined functions are extremely testable! Which is a great trait to have for code that you know may very well be wrong.

In time the AI will be good enough to design whole applications in this vibe-code-y way... but all of the examples I've seen so far indicate that even the best publicly available models aren't there. It seems like every example I've seen has the developer bickering with the AI about something it just won't get right - often wasting more time than if they'd been slightly more hands-on. Until the tech gets past that, I'll stick to treating it as the "junior developer I give a UML diagram to so they can figure out the messy parts".
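To make the workflow concrete, here's a minimal sketch of what I mean by "sketch the interface, let the model fill it in" - the function name and signature are hypothetical, just the kind of small, well-named stub I'd hand to an LLM, with the test I'd write against whatever it produced:

```python
# The stub as you'd write it before involving the model: a descriptive name,
# typed signature, and docstring that together pin down exactly what to implement.
def normalize_whitespace(text: str) -> str:
    """Collapse any run of whitespace into a single space and strip the ends."""
    # Body below is the kind of thing the model fills in (shown so this runs).
    return " ".join(text.split())

# Because the function is small and well defined, checking the model's
# output is a one-liner:
assert normalize_whitespace("  a\t b\n  c ") == "a b c"
print("ok")
```

The point isn't this particular function - it's that the interface constrains the model enough that a wrong implementation fails an obvious test instead of lurking in a sprawling generated module.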