(no title)
danschuller|1 year ago
I don't think this is going to be reality anytime soon. In order for the LLM or agent to do what you want, you'd need to be able to precisely specify what you want, and that's a hard problem all on its own. And if you were able to produce that precise specification, you would be the programmer.
Not saying the software developer paradigm won't change, but it seems very unlikely to become "make me a better google ads system" anytime soon. I could see getting to something where you are given a result by an agent and then can iterate on it, towards some solution.
chii|1 year ago
no, you just need to vaguely know what you want, and get the LLM to produce something that you then examine, and crawl towards the end goal.
LLMs could potentially allow fast iteration from a layman's description of what they want.
xen0|1 year ago
But I can't find the bug. I get the wrong answers but can't trace it through the logic.
Maybe it's a dumb thing like a missing index increment? Or a missing assignment and I just can't see it.
Maybe it's easier to just tear down the mess and write it again.
This is how I feel whenever I deal with AI generated code.
ljf|1 year ago
Equally, you can ask the GenAI to keep asking you questions to broaden its knowledge of the problem you are solving, and also ask it to research the issues customers are having with a current solution.
Some engineers seem to imagine any non-coder using AI will behave very simply: "make me a new search engine". Lots of very clever people (who just don't know how to, or don't want to learn to, code) will be picking up the skills to use AI as it gets better and better.
I can see AI being used to write far better requirements and produce amazing prototypes - but if you work at a megacorp, chances are (for now) they will want that code rewritten by a 'human' developer.
campers|1 year ago
The other day I was looking into adding Trusted Types in the Content-Security-Policy header, which was something new to me. In my chat with Claude I asked:
"Let's brainstorm a list of 10 ideas closely related to this so we can think of anything we might be missing on the topic to consider."
And that provided a good list of items to review and expanded the sphere of thinking for the LLM.
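For context, Trusted Types is opted into through that same Content-Security-Policy header. A minimal sketch of what the header might look like (the policy name `appPolicy` is illustrative, not from the thread):

```
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types appPolicy
```

The `require-trusted-types-for 'script'` directive makes the browser reject plain strings in injection sinks like `innerHTML`, and `trusted-types appPolicy` restricts which policy names the page may create via `window.trustedTypes.createPolicy`.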
It is an infuriatingly hard problem to have the LLM produce excellent results every single time: to have it just do everything, read our minds, and absorb all the knowledge and context of a task. I think we'll make good progress over the next few years as agentic workflows are built out to mimic our thought processes, and as the cost and capability of the LLMs keep improving.