The hallucination problem in code generation has dropped dramatically in recent times. I can't recall the last time any of the modern models I've used hallucinated in code. Cheap/fast models still do it, and it still happens outside of code generation, but the good models write frankly incredible code, especially if you set them up with feedback loops.
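By "feedback loop" I mean something like the minimal sketch below: run the model's output, and if it crashes, feed the traceback back into the prompt for another attempt. The `generate_code` helper is a hypothetical placeholder for whatever model API you use; everything else is standard library.

```python
import subprocess
import tempfile


def generate_code(prompt: str) -> str:
    """Hypothetical placeholder -- wire up your model provider's API here."""
    raise NotImplementedError("swap in a real model call")


def run_with_feedback(prompt: str, max_rounds: int = 3) -> str:
    """Generate code, execute it, and feed any traceback back to the model."""
    for _ in range(max_rounds):
        code = generate_code(prompt)
        # Write the attempt to a temp file and run it in a subprocess.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return code  # ran cleanly; good enough for this sketch
        # Append the error output so the next attempt can correct itself.
        prompt += f"\n\nThe previous attempt failed with:\n{result.stderr}"
    raise RuntimeError("no clean run within the round budget")
```

In practice you'd run a test suite rather than the bare script, but the shape is the same: the model gets concrete evidence of its mistakes instead of guessing blind.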