When debugging code, experienced developers don't read every file; they follow execution paths and build a mental model of the system architecture. Today's AI coding tools, by contrast, try to read everything and get bogged down in irrelevant detail.
With context windows limited to 200K tokens, cramming in random files isn't just inefficient; for large codebases it's impossible. If you're debugging a failing test, you only need to understand the files in its call chain.
It's not about more context, it's about relevant context. That's what Nuanced provides through static analysis and machine learning.
Function calls are definitely one part of adding context, but there are lots of other signals that codegen tools are probably missing. Have you guys considered post-mortems, PagerDuty outputs, Slack threads focused on specific issues, etc.?
aymandfire|1 year ago
billconan|1 year ago
millgrove|1 year ago
aymandfire|1 year ago