mark_undoio | 1 month ago
If you're lucky enough to be able to code significant amounts with a modern agent (someone's paying, your task is amenable to it, etc) then you may experience development shifting (further) from "type in the code" to "express the concepts". Maybe you still write some code - but not as much.
What does this look like for debugging / understanding? There's a potential outcome of "AI just solves all the bugs" but I think it's reasonable to imagine that AI will be a (preferably helpful!) partner to a human developer who needs to debug.
My best guess is:
* The entities you manage are "investigations" (mapping onto agents)
* You interact primarily through some kind of rich chat (includes sensibly formatted code, data, etc)
* The primary artefact(s) of this workflow are not code but something more like "clues" / "evidence".
Managing all the theories and snippets of evidence is already core to debugging the old-fashioned way. I think having agents in the loop gives us an opportunity to make that an explicit part of the process (and then be able to assign agents to follow up on gaps in the evidence, or investigate them yourself, or get someone else to...).
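To make the idea concrete, here's a minimal sketch of what tracking "investigations" and "evidence" as first-class objects might look like. All names here (Investigation, Evidence, gaps) are hypothetical illustrations of the workflow described above, not any existing tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    claim: str              # e.g. "crash only occurs under load"
    source: str             # a log excerpt, repro script, agent transcript...
    confirmed: bool = False # has a human or agent verified this?

@dataclass
class Investigation:
    theory: str                                  # the hypothesis under test
    evidence: list = field(default_factory=list)
    assignee: str = "unassigned"                 # a human, or an agent

    def gaps(self):
        # Unconfirmed evidence marks where an agent (or a person)
        # could be dispatched to follow up.
        return [e for e in self.evidence if not e.confirmed]

# A single investigation with one confirmed and one unconfirmed clue:
inv = Investigation(theory="race condition in cache eviction")
inv.evidence.append(Evidence("crash only under load", "prod logs"))
inv.evidence.append(Evidence("repro with 2 threads", "repro script", confirmed=True))

print([e.claim for e in inv.gaps()])  # → ['crash only under load']
```

The point isn't the code itself, but that once evidence is an explicit entity rather than scattered notes, "find the gaps and assign someone" becomes a queryable operation.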
manux81 | 1 month ago