I feel like people who can't get AI to write production-ready code are really bad at describing what they want done. The problem is that people want an LLM to one-shot GTA6. When the average software developer prompts an LLM, they expect 1) absolutely safe code, 2) optimized/performant code, and 3) production-ready code, without even stating the requirements for credential/session handling. You need to prompt it like it's an idiot: you need to be the architect and lead the LLM into writing performant and safe code. You can't expect it to turnkey one-shot everything. LLMs are not at that point yet.
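A minimal sketch of the "be the architect" idea above: the difference between a vague one-liner and a prompt that spells out the security and performance constraints up front. The `architect_prompt` helper and the requirement list are illustrative assumptions, not any particular tool's API or a canonical security checklist.

```python
# Hypothetical illustration: vague prompt vs. an "architect-style" prompt
# that makes credential/session requirements explicit, as the comment
# suggests. Nothing here calls a real LLM API.

VAGUE_PROMPT = "Write a login endpoint."

def architect_prompt(task: str, requirements: list[str]) -> str:
    """Wrap a task with the explicit constraints the LLM must satisfy."""
    lines = [f"Task: {task}", "Hard requirements:"]
    lines += [f"- {r}" for r in requirements]
    lines.append("Ask before assuming anything not listed above.")
    return "\n".join(lines)

prompt = architect_prompt(
    "Write a login endpoint.",
    [
        "Hash passwords with a memory-hard KDF (e.g. argon2), never plaintext.",
        "Store session tokens server-side with a 30-minute idle timeout.",
        "Rate-limit failed login attempts per account and per IP.",
    ],
)
print(prompt)
```

The point is not the wrapper function itself but the habit: requirements that stay in your head are requirements the model never sees.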
amarant|1 month ago
What are your secrets? Teach me the dark arts!
sothatsit|1 month ago
People's experiences seem to vary a lot depending on:
1) the models people are using (the default model in Copilot vs. Opus 4.5 or Codex xhigh)
2) the tools people are using (ChatGPT vs. Copilot vs. Codex vs. Claude Code)
3) when people tried these tools (e.g., December saw a substantial capability increase, but some people only tried AI once, back in March)
4) how much effort people put into writing prompts (e.g., one vague sentence vs. a couple of paragraphs of specific constraints and instructions)
Especially with all the hype, it makes sense to me why people have such different estimates of how useful AI actually is.