top | item 46940094


anoncow | 22 days ago

I tried to vibe-code in a technical, not-so-popular niche and failed. Then I broke the problem down as much as I could and presented it in clearer terms, and Gemini produced working code in just a few attempts. I know this is an anecdote, but try breaking your problem down into simpler terms and it may work. Niche, industry-specific frameworks are a little difficult to work with in vibe-code mode. But if you put in a little effort, AI seems to be faster than writing all the code on your own.


xigoi | 22 days ago

Breaking down a problem in simpler terms that a computer can understand is called coding. I don’t need a layer of unpredictability in between.

baq | 22 days ago

By the time you're coding, your problem should be broken down to atoms; that isn't needed anymore if you instead break it down into pieces that LLMs can break down to atoms.

'need' is orthogonal.

zozbot234 | 22 days ago

> I know this is an anecdote, but try to break down the problem you have in simpler terms

This should be the first thing you try. Something to keep in mind is that AI is just a tool for munging long strings of text. It's not really intelligent and it doesn't have a crystal ball.

sigseg1v | 21 days ago

To add on to this, I see many complaints that "[AI] produced garbage code that doesn't solve the problem," yet I have never seen someone say "I set up a verification system that identifies which code passes the tests and criteria and which does not" and then repeat the same complaint.

To me it reads like saying "I typed pseudocode into a JS file and it didn't compile, so JS is junk." If people learn to use the tool, it works.

Anecdotally, I've been experimenting with migrations between languages and found LLMs taking shortcuts. But when I added a step that converts both the source code and the transformed code into ASTs, designed a diff algorithm to verify that the logic in the converted code is equivalent, and retried until it matched within X tolerance, the shortcuts stopped: the model simply continued until none remained. I suspect complainants are not doing this.
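The commenter's setup compares ASTs across two languages with a tolerance metric, which needs a shared intermediate representation. A minimal same-language sketch of the core idea — parse both versions, strip location info, and compare structure so formatting changes pass but logic changes fail — might look like this in Python (`logic_equivalent` is an invented name, not anyone's actual tooling):

```python
import ast

def normalized_dump(source: str) -> str:
    """Parse source into an AST and dump it without line/column
    attributes, so formatting and comments don't affect comparison."""
    tree = ast.parse(source)
    return ast.dump(tree, include_attributes=False)

def logic_equivalent(original: str, converted: str) -> bool:
    """Crude structural check: 'equivalent' means the two snippets
    produce identical ASTs once locations are stripped."""
    return normalized_dump(original) == normalized_dump(converted)

# Same logic, different formatting: passes the check.
a = "def f(x):\n    return x + 1"
b = "def f(x): return x + 1  # reformatted"
assert logic_equivalent(a, b)

# A shortcut (hard-coded result) changes the AST: caught.
c = "def f(x):\n    return 2"
assert not logic_equivalent(a, c)
```

A cross-language version would replace exact equality with the "within X tolerance" diff the commenter describes, since two languages never produce byte-identical trees.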

rockinghigh | 22 days ago

It's called problem decomposition, and agentic coding systems do some of this by themselves now: generate a plan, break the tasks into subgoals, implement the first subgoal, test whether it works, continue.
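That loop can be sketched in a few lines. This is a hypothetical illustration, not any particular agent framework; `plan`, `implement`, and `passes_tests` are invented stand-ins for LLM calls and a test harness:

```python
def solve(problem, plan, implement, passes_tests, max_attempts=3):
    """Decompose a problem into subgoals and only advance
    once each subgoal's implementation passes its tests."""
    solution = []
    for subgoal in plan(problem):                    # generate a plan
        for _ in range(max_attempts):
            code = implement(subgoal, solution)      # attempt one subgoal
            if passes_tests(code):                   # test if it works
                solution.append(code)
                break                                # continue to the next
        else:
            raise RuntimeError(f"could not complete subgoal: {subgoal}")
    return solution

# Toy stand-ins to show the control flow:
steps = solve(
    "greet",
    plan=lambda p: ["hello", "world"],
    implement=lambda subgoal, done: subgoal.upper(),
    passes_tests=lambda code: code.isupper(),
)
# steps == ["HELLO", "WORLD"]
```

The retry-per-subgoal structure is what the later comments in this thread argue makes decomposition pay off.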

zozbot234 | 22 days ago

That's nice if it works, but why not look at the plan yourself before you let the AI have its go at it? Especially for more complex work where fiddly details can be highly relevant. AI is no good at dealing with fiddly.

Majromax | 21 days ago

> I know this is an anecdote, but try to break down the problem you have in simpler terms and it may work.

This is an expected outcome of how LLMs handle large problems. One of the "scaling" results is that the probability of success depends inversely on the problem size / length / duration (leading to headlines like "AI can now automate tasks that take humans [1 hour/etc]").

If the problem is broken down, however, then it's no longer a single problem but a series of sub-problems. If:

* The acceptance criteria are robust, so that success or failure can be reliably and automatically determined by the model itself,

* The specification is correct, in that the full system will work as designed if the sub-parts are individually correct, and

* The parts are reasonably independent, so that complete components can be treated as a 'black box', without implementation detail polluting the model's context,

... then one can observe a much higher overall success rate by taking repeated high-probability shots (on small problems) rather than long-odds one-shots.
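The arithmetic behind "repeated high-probability shots" is easy to check with illustrative numbers (the specific values below are assumed, not from the comment):

```python
p_step = 0.9    # chance the model gets any one sub-problem right
n_steps = 10    # sub-problems in the decomposed task
retries = 5     # attempts per sub-problem; robust acceptance criteria
                # are what let failures be detected and retried

# One-shot: every step must succeed on the first try.
p_one_shot = p_step ** n_steps            # about 0.35

# Decomposed: a sub-problem fails only if all retries fail.
p_sub = 1 - (1 - p_step) ** retries       # about 0.99999
p_decomposed = p_sub ** n_steps           # about 0.9999
```

With the same per-step ability, reliable verification plus retries turns a roughly one-in-three one-shot into near-certain success, which matches the scaling intuition in the comment above.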

To be fair, this same basic intuition is also true for humans, but the boundaries are a lot fuzzier because we have genuine long-term memory and a lifetime of experience with conceptual chunking. Nobody is keeping a million-line codebase in their working memory.