I tried to vibe code in a technical, not-so-popular niche and failed. Then I broke the problem down as much as I could, presented it in clearer terms, and Gemini produced working code in just a few attempts. I know this is an anecdote, but try breaking your problem down into simpler terms; it may work. Niche, industry-specific frameworks are a little difficult to work with in vibe-code mode, but if you put in a little effort, AI seems to be faster than writing all the code on your own.
baq|22 days ago
'need' is orthogonal.
zozbot234|22 days ago
This should be the first thing you try. Something to keep in mind is that AI is just a tool for munging long strings of text. It's not really intelligent and it doesn't have a crystal ball.
sigseg1v|21 days ago
To me it reads like saying "I typed pseudocode into a JS file and it didn't compile, so JS is junk." If people learn to use the tool, it works.
Anecdotally, I've been experimenting with migrations between languages and found LLMs taking shortcuts. But when I added a step to parse the source code into an AST and the converted code into another AST, then designed a diff algorithm to check that the logic in the converted code is equivalent, retrying until it matched within X tolerance, the shortcuts stopped: the model would simply keep going until none remained. I suspect complainants are not doing this.
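The pipeline described above (parse both versions to ASTs, diff them, retry until they match) could be sketched for a same-language refactor using Python's standard `ast` module; cross-language comparison would need language-specific parsers and a tolerance metric, which are elided here. The `generate` callable is a hypothetical stand-in for the LLM conversion step, not part of any real API.

```python
import ast

def normalized_dump(source: str) -> str:
    # Parse to an AST and serialize it canonically; formatting,
    # comments, and redundant parentheses disappear, so only the
    # program structure remains for comparison.
    return ast.dump(ast.parse(source), annotate_fields=False)

def structurally_equal(a: str, b: str) -> bool:
    # Exact structural match; a real cross-language checker would
    # diff the trees and accept matches within some tolerance.
    return normalized_dump(a) == normalized_dump(b)

def convert_with_retries(source: str, generate, max_attempts: int = 5) -> str:
    # `generate` is a hypothetical LLM call: source code in,
    # candidate conversion out. Retry until the structure matches.
    for _ in range(max_attempts):
        candidate = generate(source)
        if structurally_equal(source, candidate):
            return candidate
    raise RuntimeError("no structurally equivalent conversion found")
```

With this, `structurally_equal("x = 1 + 2", "x=(1+2)")` holds because the AST ignores whitespace and grouping parentheses, while a shortcut like replacing the expression with its result would fail the check and trigger a retry.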
Majromax|21 days ago
This is an expected outcome of how LLMs handle large problems. One of the "scaling" results is that the probability of success depends inversely on the problem size / length / duration (leading to headlines like "AI can now automate tasks that take humans [1 hour/etc]").
If the problem is broken down, however, then it's no longer a single problem but a series of sub-problems. If:
* The acceptance criteria are robust, so that success or failure can be reliably and automatically determined by the model itself,
* The specification is correct, in that the full system will work as designed if the sub-parts are individually correct, and
* The parts are reasonably independent, so that complete components can be treated as a 'black box', without implementation detail polluting the model's context,
... then one can observe a much higher overall success rate by taking repeated high-probability shots (on small problems) rather than long-odds one-shots.
To be fair, this same basic intuition is also true for humans, but the boundaries are a lot fuzzier because we have genuine long-term memory and a lifetime of experience with conceptual chunking. Nobody is keeping a million-line codebase in their working memory.