top | item 47147980


salawat | 4 days ago

I do not find that to be the case. Most of the things I'm getting spit out are straight-up broken out of the box. Like, missing imports, syntax errors. Directing an LLM feels like managing a junior who is gaslighting you while convinced you're gaslighting them. Spending as much time working on prompts to generate code seems foolhardy, because even for the exact same prompt, my code-generation results are so ill-conditioned that the prompt isn't source code to the degree of reliability actual source code is. A model may see the same prompt and generate two entirely different APIs as a solution. It's maddening, made even worse by the fact that most hosted setups want to bill you by the token. Makes me wonder if I should start billing by LOC to prove a point.
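The "same prompt, different API" complaint follows directly from how decoding works: the model samples from a probability distribution over next tokens, so near-tied candidates can each win on different runs. A minimal toy sketch (the token strings and logit values are made up for illustration; `sample_completion` is not any real model API):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_completion(logits, tokens, temperature, rng):
    """Draw one completion from the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy next-token choice: two plausible API shapes the model might emit.
tokens = ["def fetch(url):", "class Fetcher:", "import requests"]
logits = [2.0, 1.8, 0.5]  # nearly tied: either of the first two is likely

rng = random.Random()
draws = {sample_completion(logits, tokens, 1.0, rng) for _ in range(50)}
# With near-tied logits at temperature 1.0, repeated runs of the identical
# prompt routinely yield more than one distinct completion.
print(draws)
```

Lowering the temperature concentrates mass on the argmax, which is why temperature-0 decoding is more repeatable (though hosted services still aren't always bit-identical across runs).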

chris_money202 | 4 days ago

This almost sounds like a setup issue, or like you're working in a legacy codebase where the APIs are not available as context.

You need to make sure it has access to the information it needs by providing docs as context for any imported code; otherwise it will likely hallucinate, or try to ill-fit a solution into what it does know / can see.

munksbeer | 4 days ago

> I do not find that to be the case. Most of the things I'm getting spit out are straight-up broken out of the box. Like, missing imports, syntax errors.

How is this even possible? You tell the agent to write such-and-such a feature and it will edit the source files, run the compiler, check for issues, fix them, run the tests, and so on. If there are missing imports or syntax errors it won't even compile, and the agent will keep fixing it until it does. Not once since I started using Claude have I had an issue with this.

Are you just typing into a chat and copy-pasting code? That was a terrible experience for me; don't do it.