qazxcvbnmlp | 1 month ago
In the context of software development, requirements are based on what we want to do (which is based on emotion), and the methods we choose to implement them are also based mostly on our predictions about what will and won't work well.
Most of our affinity for good software development hygiene comes from the negative feelings we've experienced doing the extra work that bad development hygiene creates.
I think this explains a lot of the varied success with coding agents. You don't talk to them the way you talk to an engineer, because with an engineer you know they have a sense of what is good and bad. Coding agents won't tell you what is good and bad. They have some limited heuristics, but they don't understand nuance at all unless you prompt them on it.
Even if they could have an unlimited context window and memory, they would still need to be able to tell which parts of that memory are important. I.e., if the human gave them conflicting instructions, how do they resolve that?
I think we'll eventually get to a state where a lot of the mechanics of coding and development can be incorporated into coding agents, but the what and why of what we build will still come from a human. I.e., an agent will be able to go from 0 to 100% on a full-stack web application by itself, including deployment with all the security compliance and logins and whatever else, but it still won't know what is important to emphasize in that website. Should the images be bigger here, or the text? Questions like that.