(no title)
suriya-ganesh|1 month ago
I don't get it. Even with a very good understanding of the type of work I'm doing, prior knowledge of the code, and a very well-specced problem, Claude Code etc. just plain fail or produce sloppy code. How do these industry figures, who claim to have seen no part of a 225K+ line codebase, promise that it works?
It feels like we're entering an era where oceans of code that nobody understands are going to be produced, which we then hope AGI swoops in and cleans up?
jrmg|1 month ago
They _can_ usually be manually tidied and fixed, with varying amounts of effort (small project = easy fixes, on a par with regular code review, large project = “this would’ve been easier to write myself...”)
I guess Gas Town’s multiple layers of supervisory entities are meant to replace this manual tidying and fixing, but, well, really?
I don’t understand how people are supposedly having so much success with things like this. Am I just holding it wrong?
If they are having real success, why are there no open source projects that are AI developed and maintained that are _not_ just systems for managing AI? (Or are there and I just haven’t seen them?...)
consumer451|1 month ago
Then Opus 4.5 was released. I already had my CC claude.md and Windsurf global rules + workspace rules set up. Also, my main money-making project is React/Vite/Refine.dev/antd/Supabase... known patterns.
My point is that given all that, I can now deploy amazing features that "just work," and have excellent ux in a single prompt. I still review all commits, but they are now 95% correct on front end, and ~75% correct on Postgres migrations.
Is it magic? Yes. What's worse is that I believe Dario. In a year or so, many people will just create their own Loom or Monday.com equivalent apps with a one-page request. Will it be production ready? No. Will it have all the features that everyone wants? No. But it will do what they want, which is 5% of most SaaS feature sets. That will kill at least 10% of basic SaaS.
If Sonnet 3.5 (~Nov 2024) to Opus 4.5 (Nov 2025) progress is a thing, then we are slightly fucked.
"May you live in interesting times" - turns out to be a curse. I had no idea. I really thought it was a blessing all this time.
kaydub|1 month ago
Like, why are you manually tidying and fixing things? The first pass is never perfect. Maybe the functionality is there but the code is spaghetti or untestable. Have another agent review and feed that review back into the original agent that built out the code. Keep iterating like that.
My usual workflow:
Agent 1 - Build feature
Agent 2 - Review these parts of the code, see if you find any code smells, bad architecture, scalability problems that will pop up, untestable code, or anything else falling outside of modern coding best practices
Agent 1 - Here's the code review for your changes, please fix
Agent 2 - Do another review
Agent 1 - Here's the code review for your changes, please fix
Repeat until testable, maybe throw in a full codebase review instead of just the feature.
Agent 1 - Code looks good, start writing unit tests, go step by step, let's walk through everything, etc. etc. etc.
Then update your .md directive files to tell the agents how to test.
Voila, you have an llm agent loop that will write decent code and get features out the door.
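A minimal sketch of what that loop can look like, assuming the Claude Code CLI's non-interactive print mode (`claude -p`); the prompts, round count, and feature name here are illustrative, not a prescribed setup:

```python
# Minimal sketch of a build/review agent loop. Assumes the Claude Code CLI's
# non-interactive print mode (`claude -p`); prompts and round count are illustrative.
import subprocess

def run_agent(prompt: str) -> str:
    """Run one non-interactive agent session against the current repo and return its output."""
    result = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True, check=True)
    return result.stdout

feature = "Add CSV export to the reports page"  # illustrative feature
run_agent(f"Build this feature: {feature}")

for _ in range(3):  # iterate until the review comes back clean (or we give up)
    review = run_agent(
        "Review the latest changes for code smells, bad architecture, "
        "scalability problems, untestable code, or anything else outside "
        "modern best practices. List concrete issues, or say 'no issues'."
    )
    if "no issues" in review.lower():
        break
    run_agent(f"Here's the code review for your changes, please fix:\n{review}")

run_agent("Code looks good. Write unit tests for the new feature, step by step.")
```

Each call here is a fresh session, which is the point: the "reviewer" only sees the code on disk, not the builder's chat history.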
pdntspa|1 month ago
I've written two separate moderately-sized codebases using agentic techniques (oftentimes being very lazy and just blanket approving changes), and I don't encounter logic or off-by-one errors very often, if at all. It seems quite good at the basic task of writing working code, but it sucks at architecture and you need occasional code review rounds to keep the codebase tidy and readable. My code reviews with the AI are like 50% DRY and separating concerns.
kaydub|1 month ago
Are you guys just trying to one shot stuff? Are you not using agents to iterate on things? Are you not putting agents against each other (have one code, one critique/test the code, and put them in a loop)?
I still look at the code that's produced, I'm not THAT far down the "vibe coding" path that I'm trusting everything being produced, but I get phenomenal results and I don't actually write any code any more.
So like, yeah, first pass the llm will create my feature and there's definitely some poorly written code, or duplicate code, or other code smells, but then I tell another agent to review and find all those problems. Then that review gets fed back into the agent that created the feature. Wham, bam, clean code.
I'm not using gastown or ralph wiggum ($$$) but reading the docs, looking over how things work, I can see how it all comes together and should work. They've been built out to automatically do the review + iteration loop that I do.
arrowleaf|1 month ago
You can't be too prescriptive or verbose when interacting with them; you have to interact with them a bit to start understanding how they think, and go from there to determine what information or context to provide. Same for understanding their programming styles: they will typically do what they're told, but sometimes they go off on a tangent.
You need to know how to communicate your expectations. Especially around testing and interaction with existing systems, performance standards, technology, the list goes on.
habinero|1 month ago
The problem is some 0.05X developers thought they were 0.5X and now they think they're 2X.
joshstrange|1 month ago
YES! I have been playing with vibe coding tools since they came out. "Playing" because only on rare occasions have I created something that is good enough to commit/keep/use. I keep playing with them because, well, I have a subscription, but also so I don't fall into the fuddy-duddy camp of "all AI is bad" and can legitimately speak on the value, or lack thereof, of these tools.
Claude Code is super cool, no doubt, and with _highly targeted_ and _well planned_ tasks it can produce valuable output. Period. But every attempt at full-vibe-coding I've done has gotten hung up at some point, and I have to step in and manually fix things. My experience is often:
1. First Prompt: Oh wow, this is amazing, this is the future
2. Second Prompt: Ok, let me just add/tweak a few things
10. 10th prompt: Ugh, every time I fix one thing, something else breaks
I'm not at all sure what I'm doing "wrong". Flogging the agents along doesn't work well for me, or maybe I'm just having trouble letting go of control and I'm not flogging enough?
But the bottom line is I am generally shocked that something like Gas Town was able to be vibe-coded. Maybe it's a case of the LLM overstating what it's accomplished (typical), and if you look under the hood it's doing 1% of what it says it is, but I really don't know. Clearly it's doing something, but then I sit over here trying to build a simple agent with some MCPs hooked up to it using an LLM agent framework, and it's falling over after a few iterations.
dceddia|1 month ago
One thing that stands out in your steps, and that I've noticed myself: yeah, by prompt 10, it starts to suck. If it ever hits “compaction”, that's past the point of no return.
I still find myself slipping into this trap sometimes because I’m just in the flow of getting good results (until it nosedives), but the better strategy is to do a small unit of work per session. It keeps the context small and that keeps the model smarter.
“Ralph” is one way to do this. (decent intro here: https://www.aihero.dev/getting-started-with-ralph)
Another way is “Write out what we did to PROGRESS.md” - then start new session - then “Read @PROGRESS.md and do X”
Just playing around with ways to split up the work into smaller tasks basically, and crucially, not doing all of those small tasks in one long chat.
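A rough sketch of that handoff pattern, again assuming the Claude Code CLI's `-p` print mode; the task list and the PROGRESS.md conventions here are illustrative:

```python
# Rough sketch of the "one small task per session, PROGRESS.md handoff" pattern.
# Assumes the Claude Code CLI's `-p` print mode; tasks and file name are illustrative.
import subprocess

tasks = [
    "Add the /export endpoint",
    "Wire an export button into the reports page",
    "Add an integration test for CSV export",
]

for task in tasks:
    # Fresh session per task keeps the context small and avoids compaction.
    subprocess.run(
        ["claude", "-p",
         f"Read @PROGRESS.md for what's already done. Then: {task}. "
         "When finished, append a short summary of what you did to PROGRESS.md."],
        check=True,
    )
```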
EFreethought|1 month ago
Maybe that is the time to start making changes by hand. I think this dream of humans never, ever writing any more code might be a step too far, and unnecessary.
kgwgk|1 month ago
The only promise is that you will get your face ripped off.
“WARNING DANGER CAUTION - GET THE F** OUT - YOU WILL DIE […] Gas Town is an industrialized coding factory manned by superintelligent robot chimps, and when they feel like it, they can wreck your shit in an instant. They will wreck the other chimps, the workstations, the customers. They’ll rip your face off if you aren’t already an experienced chimp-wrangler.”
kaydub|1 month ago
But I still haven't actually used Gastown. It looks cool. I think it probably works, at least somewhat. I get it. But it's just not what I need right now. It's bleeding edge and experimental.
The main thing holding me back from even tinkering with it is the cost. Otherwise I'd probably play with it a little, but it's not something I'd expect to use and ship production code right now. And I ship a ton of production code with claude.
skippyboxedhero|1 month ago
People from OpenAI were saying that GPT-2 had achieved AGI. There is a very clear incentive for that kind of statement to be made by people who are not using AI for anything productive.
Even as increasingly bombastic claims are made, it is obvious, if you are an actual user, that the best AI cannot one-shot everything. And the worst ones: I was using Gemini yesterday and it wouldn't stop outputting emojis; I was using Grok and it refused to give me a code snippet because it claimed its system prompt forbade it... what can you say?
I don't understand why anyone would want to work on a codebase they didn't understand either. What happens when something goes wrong?
Again though, there is massive financial incentive to make these claims, and some other people will go along with that because it is good for their career, etc. I have seen this in my own company, where senior people are shoehorning in stuff that they clearly do not actually use or understand (to be clear, this is engineering, not management... these are people who definitely should understand, but do not).
Great tool, but the 100% vibecoding without looking at the code, for something that you are actually expecting others to use, is a bad idea. Feels more like performance art than actual work. I like jokes, I like coding, room for both but don't confuse the two.
rozap|1 month ago
It's your coworker's problem. The one who actually understands the big picture and how the system fits into it. They'll deal with it.
turtlebits|1 month ago
Maybe it changes how we code or maybe it doesn't. Vibe coding has definitely helped me write throwaway tools that were useful.
johnmaguire|1 month ago
For example, he makes a comment to the effect that anyone using an IDE to look at code in 2026 is a "bad engineer."
lovich|1 month ago
No, he threw up a hyperbolic warning and then dove deep into how this is the future of all coding in the rest of his talks/writing.
It’s as good a warning as someone saying “I’m not {X} but {something blatantly showing I am X}”
furyofantares|1 month ago
It's an experiment to discover what the limits are. Maybe the experiment fails because it's scoped beyond the limits of LLMs. Maybe we learn something from exactly how far it gets. Maybe it changes as LLMs get better, or maybe it's a flawed approach to pushing the limits of these models.
gtowey|1 month ago
Compilers are deterministic. People who write them test that they will produce correct results. You can expect the same code to compile to the same assembly.
With LLMs, two people giving the exact same prompts can get wildly different results. That is not a tool you can use to blindly ship production code. Imagine if your compiler randomly threw in a syscall to delete your hard drive, or decided to pass credentials in plain text. LLMs can and will do those things.
conartist6|1 month ago
For me the difference is prognosis. Gas Town has no ratchet of quality: its fate has been written on the wall since the day Steve decided he didn't want to know what the code says. It will grow to a moderate but unimpressive size before it collapses under its own weight. Even if someone tried to prop it up with stable infra, Steve would surely vibe the stable infra out of existence, since he does not care about that.
crote|1 month ago
With LLMs all bets are off. Is your code going to import leftpad, call leftpad-as-a-service, write its own leftpad implementation, decide that padding isn't needed after all, use a close-enough rightpad instead? Who knows! It's just rolling dice, so have fun finding out!
3vidence|1 month ago
Vibecoding is literally just random probabilistic mapping between unknown inputs and outputs on an unknown domain.
Feels like saying that because I don't know how my engine works, my car could've just been vibe-engineered. People have put thousands of hours into making certain tools work to a given standard and spec, reviewed by many, many people.
"I don't know how something works" != "This wasn't thoughtfully designed"
Why do people compare these things?
jplusequalt|1 month ago
But as a programmer writing C code, you're still building out the software by hand. You're having to read and write a slightly higher level encoding of the software.
With vibe coding, you don't even deal with encodings. You just prompt and move on.
0xbadcafebee|1 month ago
Simple: you follow the directions, eat the food, and if it tastes good, it worked.
If cooks don't understand physics, chemistry, biology, etc, how do all the cooks in the world ensure they don't get people sick? They follow a set of practices and guidelines developed to ensure the food comes out okay. At scale, businesses develop even more practices (pasteurization, sanitization, refrigeration, etc) to ensure more food safety. None of the people involved understand it at a base level. There are no scientists directly involved in building the machines or day-to-day operations. Yet the entire world's food supply works just fine.
It's all just abstractions. You don't need to see the code for the code to work.
habinero|1 month ago
1. Chefs do learn the chemistry, at least enough to know why their techniques work.
2. Food scientist is a real job
3. The supply chain absolutely does have scientists involved in day to day operations lol.
A better analogy is just shoving the entire contents of the fridge into a pot, plastic containers and all, and assuming it'll be fine.
roberttod|1 month ago
This isn't about anthropomorphism, it's context engineering. By breaking things into more agents, you get more focused context windows.
I believe gas town has some review process built in, but my comment is more to address the idea that it's all slop.
As an aside, Opus 4.5 is the first model I've used that most of the time doesn't produce much slop, in case you haven't tried it. It still produces some slop, but not much human input is required for building things (it's mostly higher-level and architectural things it needs guidance on).
fragmede|1 month ago
Any examples you can share?
mactavish88|1 month ago
For some things, LLMs are great. For others, they're absolute dog shit.
It's still early days. Anyone who claims to know what they're talking about either doesn't or what they're saying will be out of date in a month's time (including me).
azan_|1 month ago
I have 100% vibecoded software that I now use instead of a commercial implementation that cost me almost 200 USD a month (a tool for radiology dictation and report generation).
asadm|1 month ago
You always have to review the overall diff, though, and go back to the agent with broader corrections to make.
_zoltan_|1 month ago
I don't know why people keep repeating this but it's wrong. It works.