mschild | 18 days ago
To provide a bit more context:

- We use VS Code (plus derivatives like Cursor) hooked up to general models, with context access to the entire repository.

- We have an MCP server with access to our company-internal frameworks and tools (especially their documentation), so the model should know how they are used.
So far, we've found two use cases that make AI work for us:

1. Code review. This took quite a bit of refinement of the instructions, but we've got it to a point where it provides decent comments on the things we want it to comment on. It still fails on more complex application logic, but it will consistently point out minor things. It's now used as a pre-PR review, so engineers can run it and fix things before publishing a PR. Less noise for the rest of the developers.

2. CRUD cruft, like tests for a controller. We still write the controller endpoint ourselves, but given the controller, its DTOs, and an example of how another controller's tests are done, it will produce decent code. Even then, we often have to fix a couple of things and debug to see where it went wrong, like fixing a broken test by removing a strictlyEquals() call.
Just keeping up with the newest AI changes is hard. We all have personal curiosity, but at the end of the day we need to deliver our product and only have so much time to experiment with AI tooling. Never mind all the other developments in our regulatory-heavy environment and tech stack we need to keep on top of.