danschuller | 1 year ago

> imagine a non-arrogant programmer that actually does what you want

I don't think this is going to be reality anytime soon. In order for the LLM or agent to do what you want, you'd need to be able to precisely specify what you want, and that's a hard problem all on its own. And if you were able to produce that precise specification, you would be the programmer.

I'm not saying the software developer paradigm won't change, but it seems very unlikely to become "make me a better Google Ads system" anytime soon. I could see getting to something where an agent gives you a result and you then iterate on it towards a solution.

chii|1 year ago

> In order for the LLM or agent to do what you want, you'd need to be able to precisely specify what you want

no, you just need to vaguely know what you want, and get the LLM to produce something that you then examine, and crawl towards the end goal.

LLMs could potentially allow fast iteration from a layman's description of what they want.

xen0|1 year ago

Sometimes when writing 'fiddly' code, I'll have a bug.

But I can't find the bug. I get the wrong answers but can't trace it through the logic.

Maybe it's a dumb thing like a missing index increment? Or a missing assignment and I just can't see it.

Maybe it's easier to just tear down the mess and write it again.

This is how I feel whenever I deal with AI generated code.

ljf|1 year ago

Exactly this - show me something and I can tell the AI what I don't like or what it is missing.

Equally, you can ask the GenAI to keep asking you questions to broaden its knowledge of the problem you are solving, and also ask it to research the issues customers are having with a current solution.

Some engineers seem to imagine any non-coder using AI will behave very simply: 'make me a new search engine'. Lots of very clever people (who just don't know how to, or don't want to learn to, code) will pick up the skills to use AI as it gets better and better.

I can see AI being used to write far better requirements and produce amazing prototypes - but if you work at a megacorp, chances are (for now) they will want that code rewritten by a 'human' developer.

rwmj|1 year ago

The problem with this plan is that reading code is the hardest part of coding, especially code you haven't written.

janalsncm|1 year ago

The issue with LLM driven development is that it’s often as hard to verify the outputs of the model as it would’ve been to write it myself. It’s basically the programming equivalent of a Gish gallop.

myworkinisgood|1 year ago

You could also do the same thing with a high-level language. Your LLM is nothing more than an interactive optimizer.

pzo|1 year ago

That's still going to be a big shift if such companies can axe every 2nd or 3rd developer on their teams. In that situation you might be competing with your colleagues not to lose your job, or have to be "non-arrogant" (/s) when asking for a pay rise.

campers|1 year ago

Another way to look at it is that everyone's productivity will be expected to increase to match the productivity gains of competitors who are also using AI. If you don't skill up on how to use AI effectively, they'll find someone else who has.

CaptainFever|1 year ago

This is the lump of labour fallacy. There's always more work to be done.

HuangYuSan|1 year ago

Even if you formally specify what you want at a high level and the LLM implements it at a low level, then yes, you can call yourself the programmer and the LLM a compiler. It would still be amazingly useful.

campers|1 year ago

There's definitely room to build specification-builder agents that have access to documentation and previous specifications.

The other day I was looking into adding Trusted Types in the Content-Security-Policy header, which was something new to me. In my chat with Claude I asked:

"Let's brainstorm a list of 10 ideas closely related to this, so we can think of anything we might be missing on the topic."

And that provided a good list of items to review, and expanded the sphere of thinking for the LLM.
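For context, a minimal sketch of the kind of header being discussed, assuming a framework where you set response headers as strings (the policy name "app-policy" is just an illustrative example, not from the original comment):

```python
# Sketch: building a Content-Security-Policy header value that enforces
# Trusted Types, per the CSP directives "require-trusted-types-for" and
# "trusted-types". The policy name "app-policy" is an arbitrary example.
def build_csp_header(policy_name: str) -> str:
    directives = [
        "require-trusted-types-for 'script'",  # block raw-string DOM sinks like innerHTML
        f"trusted-types {policy_name}",        # restrict which named policies pages may create
    ]
    return "; ".join(directives)

header_value = build_csp_header("app-policy")
# The server would then send:
#   Content-Security-Policy: <header_value>
```

With that header in place, browser-side code would have to route HTML through a policy created via `trustedTypes.createPolicy("app-policy", ...)` rather than assigning raw strings to sinks.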

It is an infuriatingly hard problem to have the LLM produce excellent results every single time: we want it to just do everything, read our minds, and absorb all the knowledge and context of a task. I think we'll make good progress over the next few years as agentic workflows are built out to mimic our thought processes, and as the cost and capability of LLMs keep improving.