I recently experimented with Gemini on Colab for building a discrete simulation in Python. I initially started with ChatGPT, then moved platforms due to free-tier limits. Gemini was responsive in analyzing graph outputs and made quick progress with rapid prototyping.

However, when I shifted focus to refactoring and improving code structure, e.g., extracting classes and encapsulating behavior, it defaulted to an odd hybrid class/functional style, often placing logic outside the domain objects rather than applying polymorphism. Even after I explicitly cited principles like "Tell, don't ask," I had to insist before it adjusted its design choices accordingly. When I asked why those principles are NOT applied by default, it essentially said that most coders don't use them and that it aims for direct solutions.

While Gemini performed well in tweaking visualizations (it even understood the output of matplotlib) and responding to direct prompts, it struggled with debugging and multi-step refactorings, occasionally failing with generic error messages.

My takeaway is that these tools are incredibly productive for greenfield coding with minimal constraints, but when it comes to making code reusable or architecturally sound, they still require significant human guidance. The AI doesn't prioritize long-term code quality unless you actively steer it in that direction.
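To illustrate the design issue I kept pushing back on, here is a minimal sketch (the `Agent` class and its fields are hypothetical, not from my actual simulation) contrasting the "ask" style the model defaulted to with the "Tell, don't ask" style I wanted:

```python
# "Ask" style: logic lives OUTSIDE the domain object.
# The caller inspects the object's state and decides for it.
class Agent:
    def __init__(self, energy: float):
        self.energy = energy

def step_ask(agents: list[Agent]) -> None:
    for a in agents:
        if a.energy > 0:          # caller queries state...
            a.energy -= 1         # ...and mutates it directly


# "Tell, don't ask" style: the behavior is encapsulated in the object,
# so callers just tell it what to do. Subclasses can override step()
# to vary the behavior (polymorphism) without touching the loop.
class TellAgent:
    def __init__(self, energy: float):
        self._energy = energy

    def step(self) -> None:
        # The object itself decides what "stepping" means.
        if self._energy > 0:
            self._energy -= 1

def step_tell(agents: list[TellAgent]) -> None:
    for a in agents:
        a.step()                  # no state inspection by the caller
```

The second version is what I had to explicitly ask for: the simulation loop stays the same even if new agent types with different stepping rules are added later.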