I think people have gotten fatigued by reviewing already. Most of the code AI produces is correct, so you end up checking out eventually.
A lot of the time the issue isn't actually the code itself but larger architectural patterns. Realizing this takes a lot of mental work, though. Checking out and just accepting what exists is a lot easier, but it misses subtleties that are important.
I wonder if this phenomenon comes from how reliable the lower layers have become. For example, I never check the binary or assembly produced by my code, or even the intermediate bytecode.
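For the curious, those lower layers are easy to peek at. A minimal sketch using Python's standard-library `dis` module to inspect the bytecode the interpreter actually generates for a function — the kind of check most of us never bother doing, for exactly the reason above:

```python
import dis

def add(a, b):
    # A trivial function whose compiled form we want to inspect.
    return a + b

# Disassemble the function's compiled bytecode and print each
# instruction (opcode name, argument, etc.) to stdout.
dis.dis(add)
```

The exact opcodes vary by Python version, but you will see instructions loading the local variables, applying the binary operation, and returning the result. The analogous step for compiled languages is asking the compiler for its assembly output (e.g., `gcc -S`).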
So vibers may be assuming the AI is as reliable, or at least can be with enough specs and attempts.
I have seen enough compiler (and even hardware) bugs to know that you do need to dig deeper to find out why something isn't working the way you thought it should. Of course I suspect there are many others who run into those bugs, then massage the code somehow and "fix" it that way.
I suggest moving the sanity check to the point of employing the parrot.
"Fixing defects down the road during testing costs 15x as much as fixing them during design, according to research from the IBM System Science Institute."
jascha_eng|14 days ago
paulryanrogers|15 days ago
userbinator|14 days ago
rsynnott|14 days ago
anthonypasq96|14 days ago
chrisjj|14 days ago