On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
> On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
I don't think it will be; a vibe coder using Gas Town will easily spit out 300k LoC for an MVP TODO application. Can you imagine what it will spit out for anything non-trivial?
How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There's a middle ground here that you're not considering (at least in this short exchange). Vibe coders will spit out a lot of nonsense because they lack the skills to tweak the output of their agents, or choose not to. A well-seasoned developer using tools like Claude Code on such a codebase can, at this point, remediate much more quickly than someone not using any AI. The current best practices are akin to thinking like a mathematician with regard to calculator use, rather than like a student trying to just pass a class: working in small chunks and understanding the output at every step is often the best approach.
> How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There are cases where that will be the appropriate decision. That may not be every case, but it'll be enough cases that there's money to be made.
There will be other cases where just untangling the clusterfuck and coming up with any sense of direction at all, to be implemented however, will be the key deliverable.
I have had several projects that look like this already in the VoIP world, and it's been very gainful. However, my industry probably does not compare fairly to the common denominator of CRUD apps in common tech stacks; some of it is specialised enough that the LLMs drop to GPT-2 type levels of utility (and hallucination! -- that's been particularly lucrative).
Anyway, the problem to be solved in vibe coding remediation often has little to do with the code itself, which we can all agree can be generated in essentially infinite amounts at a pace that is, for all intents and purposes, almost instantaneous. If you are in need of vibe-coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
The general business problem to be solved is how to make this output consumable by the business as a whole, which still moves at the speed of humans. I am fond of a metaphor I heard somewhere: you can't just plug a firehose into your house's plumbing and expect a fire hydrant's worth of water pressure out of your kitchen faucet.
In the same way, removing the barriers to writing 300,000 lines isn't the same as removing the barriers to operationalising, adopting and owning 300,000 lines in a way that can be a realistic input into a real-world product or service. I'm not talking about the airy-fairy appeals to maintainability or reliability one sometimes hears (although those are very real concerns), but rather about how to get one's arms around the 300,000 lines from a product-direction perspective, without simply prompting one's way into even more slop.
I think that's where the challenges will be, and if you understand that challenge, especially in industry- and domain-specific ways (always critical for moats), I think there's a brisk livelihood to be made here in the foreseeable future. I make a living from adding deep specialist knowledge to projects executed by people who have no idea what they're doing, and LLMs haven't materially altered that reality in any way. Giving people who have no idea what they're doing a way to express that cluelessness in tremendous amounts of code, quickly, doesn't really solve the problem, although it certainly alters the texture of the problem.
Lastly, it's probably not a great time to be a very middling pure CRUD web app developer. However, has it ever been, outside of SV and certain very select, fortunate corners of the economy? The lack of moat around it was a problem long before LLMs. I, for example, can't imagine making a comfortable living in it outside of SV engineer inflation; it just doesn't pay remotely enough in most other places. Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating. Underappreciated specialist personalities will certainly see a return in a flight-to-quality environment.
For whatever I am vibe coding, my normal work process is: build a feature until it works, get decent test coverage, then ask Claude for a code critique and refactoring proposals. I review them and decide which to implement. It is token-heavy, but it produces good, elegant codebases at the scales I work on for my side projects. I do this for every completed feature, and I have it maintain design docs that record the software architecture choices made so far. It largely ignores them when vibing very interactively on a new feature, but they do help with the regular refactoring.
In my experience, it doubles the token costs per feature but otherwise it works fine.
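For concreteness, the per-feature loop above could be sketched as a small shell function. The `claude -p` (non-interactive print mode) invocation, the exact prompts, and the file names (`critique.md`, `docs/architecture.md`) are illustrative assumptions, not a prescribed setup:

```shell
#!/usr/bin/env sh
# Sketch of the per-feature loop: build, test, request a critique,
# apply accepted refactorings, re-test, then update the design docs.

run_tests() {
  # Substitute your project's test runner, e.g. `npm test` or `pytest`.
  "$@"
}

feature_loop() {
  # 1. The feature must already work: no critique until the tests pass.
  run_tests "$@" || return 1

  # 2. Ask for a critique and refactoring proposals; review them by hand
  #    and apply only the ones you agree with.
  claude -p "Critique the feature I just finished and propose refactorings. Do not edit any files." > critique.md

  # 3. Re-run the tests after applying the accepted refactorings.
  run_tests "$@" || return 1

  # 4. Keep the design docs current so later sessions can consult them.
  claude -p "Update docs/architecture.md with the design decisions made in this feature."
}
```

The point is the ordering: the tests gate both the critique request and the doc update, so the model is only ever asked to comment on working code.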
I have been programming since I was 7, 40 years ago, across all tech stacks, from bare-metal assembly through enterprise architecture for a large company. I thought I was a decently good coder, programmer and architect. Now I find the code Claude/Opus 4.5 generates for me to be, in general, of higher quality than anything I ever made myself.
Mainly because it does things I'd be too tired to do, or would never bother with: why expend energy on refactoring something that works perfectly and won't be developed further?
Btw, it's a good teaching tool. Load a codebase or build one, then have it describe the current software architecture, propose changes, explain their impact, and so on.
The amount of software needed and the amount being written are off by many orders of magnitude. It has been that way since software's inception and I don't see it changing anytime soon. AI tools are like having a junior dev to do your grunt work. Soon it will be like a senior dev, then like a dev team. I would love to have an entire dev team to do my work; it doesn't change the fact that I still have plenty of work for them to do. I'm not worried AI will take my job; I will just be doing bigger jobs.
> Do you not fear that future/advanced AI will be able to look at a vibe-coded codebase and make sensible refactors itself?
This is a possibility in very well-trodden areas of tech, where the stack and the application are both banal to the point of being infinitely well-represented in the training data.
As far as anything with any kind of moat whatsoever? Here, I'm not too concerned.
djeastm|1 month ago
That's my worry. Might be put off a few years, but still...