gyosko | 6 months ago
While letting the AI write some code can be cool and fascinating, I really can't understand how:
- write the prompt (and you need to be precise, and think and express carefully what you have in mind)
- check/try the code
- repeat
is better than writing the code myself. AI coding like this feels like a nightmare to me, and it's 100x more exhausting.
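The prompt/check/repeat loop described above can be sketched as pseudocode. This is only an illustration of the workflow being criticized; `generate_code` and `passes_tests` are hypothetical stand-ins, not a real LLM API or test harness.

```python
def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; here it just returns a
    fixed candidate snippet so the loop is runnable."""
    return "def add(a, b): return a + b"

def passes_tests(code: str) -> bool:
    """Hypothetical stand-in for a test run: exec the candidate and
    check its behavior."""
    ns = {}
    exec(code, ns)
    return ns["add"](2, 3) == 5

def prompt_check_repeat(prompt: str, max_attempts: int = 5):
    for attempt in range(max_attempts):
        code = generate_code(prompt)        # 1. write the prompt
        if passes_tests(code):              # 2. check/try the code
            return code
        prompt += "\nThat attempt failed; please fix it."  # 3. repeat
    return None
```

The commenter's point is that every pass through this loop still requires the human to specify intent precisely and to verify the output, which can cost more effort than writing the code directly.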
jplusequalt|6 months ago
In our current scenario, programmers are merely describing what they think the code should do, and another program takes their description and then stochastically generates code based on it.
ModernMech|6 months ago
LLMs are more like DNA transcription, where some percentage of the time it just injects a random mutation into the transcript, either causing an evolutionary advantage or a terminal disease.
This whole AI industry right now is trying to figure out how to always get the good mutation, and I don't think it's something that can be controlled that way. It will probably turn out that on a long enough timescale, left unattended, LLMs are guaranteed to give your codebase cancer.
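The transcription analogy can be made concrete with a toy simulation. This is purely illustrative of the commenter's metaphor, not a model of how LLMs actually work; the mutation rate and alphabet are arbitrary choices.

```python
import random

def transcribe(source: str, mutation_rate: float = 0.02, rng=None) -> str:
    """Copy `source` character by character, occasionally injecting a
    random character, loosely mimicking transcription errors."""
    rng = rng or random.Random(0)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    out = []
    for ch in source:
        if rng.random() < mutation_rate:
            out.append(rng.choice(alphabet))  # random mutation
        else:
            out.append(ch)                    # faithful copy
    return "".join(out)

# Left unattended, errors compound: each generation copies the
# previous one, so mutations accumulate over many passes.
text = "return a + b"
for _ in range(100):
    text = transcribe(text)
```

The point of the analogy: even a small per-copy error rate, applied repeatedly without review, drifts the "codebase" arbitrarily far from its original form.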
pavel_lishin|6 months ago
It's a perfectly cromulent approach and skillset - but it's a wildly different one.