magmostafa | 2 months ago
1. Context-aware prompting: We maintain a .ai-context folder with architecture docs, coding standards, and common patterns. Before asking the LLM anything, we feed it relevant context from these docs.
2. Incremental changes: Rather than asking for large refactors, we break tasks into small, testable chunks. This makes code review much easier and reduces the "black box" problem.
3. Test-first development: We ask the LLM to write tests before implementation. This helps ensure it understands requirements correctly and gives us confidence in the generated code.
4. Custom linting rules: We've encoded our team's conventions into ESLint/Pylint rules. LLMs follow machine-enforced rules far more reliably than prose style guides.
5. Review templates: We have standardized PR templates that specifically call out "AI-generated code" sections for closer human review.
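The context-aware prompting in point 1 can be sketched roughly like this. It's a minimal illustration, not our actual tooling: `build_prompt` and the `*.md` layout of the `.ai-context` folder are assumptions for the example.

```python
from pathlib import Path

def build_prompt(question: str, context_dir: str = ".ai-context") -> str:
    """Prepend relevant project docs to a question before it goes to the LLM.

    Assumes the context folder holds markdown docs (architecture notes,
    coding standards, common patterns). A real setup would select only
    the docs relevant to the question rather than concatenating all of them.
    """
    parts = []
    for doc in sorted(Path(context_dir).glob("*.md")):
        parts.append(f"## {doc.name}\n{doc.read_text()}")
    context = "\n\n".join(parts)
    return (
        "You are working in a codebase with these conventions:\n\n"
        f"{context}\n\n"
        f"Task: {question}"
    )
```

The point is simply that the model never sees the task without the standards alongside it.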
The key insight: LLMs work best as pair programmers, not autonomous developers. They're excellent at boilerplate, test generation, and refactoring suggestions, but need human oversight for architectural decisions.
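As a hedged sketch of the test-first loop in point 3: the test is written (or LLM-drafted and human-reviewed) before any implementation exists, and the generated code only merges once it passes. `slugify` is a made-up example function, not something from our codebase.

```python
import re

# Step 1: the test exists first and fails until an implementation appears.
def test_slugify():
    assert slugify("Hello World!") == "hello-world"
    assert slugify("  multiple   spaces ") == "multiple-spaces"

# Step 2: LLM-generated implementation, reviewed by a human before merge.
def slugify(text: str) -> str:
    """Lowercase the text, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

Writing the assertions up front forces the requirements to be stated concretely, which is exactly where we've seen LLMs go wrong when given only prose descriptions.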
One surprising benefit: junior devs learn faster by reviewing LLM-generated code with seniors, compared to just reading documentation.