(no title)
dent9 | 1 month ago
So one thing I only recently figured out is that using ChatGPT via the web browser chat is massively different from using OpenAI's code-focused Codex model and interface. Once I switched to Codex (via the VS Code extension plus my own ChatGPT subscription), the quality of the answers I got improved dramatically.
So if you're trying to use an LLM to help with debugging, make sure you're using the right model! There are apparently big differences between models of the same generation from the same company.