Funny how the article starts with someone using AI — to develop more AI stuff.
This reminds me of web3, where almost all projects were just web3 infrastructure or services, to the point that the purpose of most start-ups was completely inscrutable to outsiders.
Great point about web3. People always talk about selling shovels during a gold rush instead of digging for the gold, and that might be a reasonable strategy, but for there to be an actual gold rush, actual people need to be getting rich from the gold. If everyone is just selling shovels, it is probably a sign of a bubble.
As an experienced dev who's gotten his feet wet a little with AI, I feel like (and does anyone else feel this way?) I spend more time telling the AI what to do than it would take me to actually write it all out myself.
Yup, I entirely gave up on AI as a result, and it didn't help that reviewing and fixing AI code made me so incredibly bored that, if this somehow becomes the new norm of development, I'll have to find a new career: I want to actually build things, not manage AI.
Not a dev but I’ve had similar experiences testing AI to help me craft various documents.
I spend ages crafting the perfect prompt only to cross my fingers and hope that the output is what I want it to be when I could have just spent all that time actually crafting the document exactly as I want it to be in the first place. Often I then need to spend more time editing and tweaking the output from the AI anyway.
I’m honestly starting to feel a little crazy because I’m the only one at my workplace that sees it this way.
Absolutely. It's much harder to describe what I want the software to do in English than in a logical programming language, and then add to that the time taken to understand the code and make sure it does what was intended. Perhaps the worst part: it takes the joy out of it all.
It is nice as a tool to help solve some hairy stuff sometimes.
I find it depends. I just "vibe coded" (I hate this phrase) a simple mobile app for something I needed, without writing a line of code, and indeed without any knowledge of any web/mobile technology. There's no way I would realistically have spent a few days learning Flutter and then a few more writing my app.
I don't think this behavior has anything to do with AI, although AI seems to be used as an excuse to justify it. Everyone seems to be in a belt-tightening, risk-averse mode right now, and that means cutting junior positions and leaning on smaller teams of senior staff. You can see this behavior in more than just tech, and in more than just positions that can be replaced by AI. The job boards betray it as well: postings for junior staff have dried up.
As someone who spends his non work hours convincing a half baked quasi person to not do dumb things (a two year old), I have zero interest in convincing a half baked quasi person to not do dumb things during work hours (most coding agents).
I’ve had good results with Claude, it just takes too long. I also don’t think I can context switch fast enough to do something else while it’s churning away.
I think it’s allowed me to spend more time being an architect and thinking about processes, problem solving. To put it another way, I’m still a developer, possibly to a higher degree (because I can spend more time doing it), and less of a coder.
I have very few issues with Claude. If I just tell it what the goal is, it will make some sensible suggestions, and I can tell it to start coding towards it. It rarely messes up, and when it does I catch it in the act.
You don't necessarily want to completely tune out while you're using the AI. You want to know what it's up to, but you don't need to be at your highest level of attention to do it. This is what makes it satisfying for me, because often it eats up several minutes to hunt down trivial bugs. Normally when you have some small thing like that, you have to really concentrate to find it, and it's frustrating.
When the AI is on a multi-file edit that you understand, that's when you can tune out a bit. You know that it is implementing some edit across several instances of the same interface, so you can be confident that in a few minutes everything will build and you will get a notification.
It's as if I can suddenly make all the architecture level edits without paying the cost in time that I had previously.
I was going to point out that what you are describing is exactly what it is like to lead/direct people in most efforts, i.e., managing people, when it occurred to me that maybe what we are dealing with in this conflict and mud-slinging around AI is similar to the conflict of coders not wanting to become managers, as they are often not even very good at being managers. Devs work well together on shared problem solving (and even that often only sometimes), but it strikes me as the same problem as when devs are forced to become managers and they really don't like it; they hate it, even, sometimes leaving their company for that reason.
When you are working with AI, you are effectively working with a group of novice people, largely with basic competence, but lacking many deeper skills that are largely developed from experience. You kind of get what you put into it with proper planning, specificity in requests/tasks, proper organization based on smart structuring of skillsets and specializations, etc.
This may ruffle some feathers, but I feel like even though AI has its issues with coding in particular, this issue is really a leadership question; lead and mentor your AI correctly, adequately, and appropriately and you end up with decent, workable outcomes. GIGO
If you add an AGENTS.md, the AI agent will work more efficiently, and there will be far fewer problems like the ones you're facing. You can include sections such as security, coding style guidelines, writing unit tests, etc.
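A minimal sketch of what such a file might contain; the section names and rules here are purely illustrative, not a standard:

```markdown
# AGENTS.md

## Coding style
- TypeScript strict mode; no `any` without a comment explaining why.
- Prefer small, pure functions; match the existing module layout.

## Testing
- Every change must come with unit tests; run `npm test` before finishing.

## Security
- Never commit secrets or tokens; read configuration from env vars.
- Do not add new dependencies without flagging them in the PR description.
```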
How do the colleagues of people "vibe-coding" feel about that?
Does it end up like having colleagues who aren't doing or understanding or learning from their own work, and are working like they offshored their job to an overnight team of juniors, and then just try to patch up the poor quality before doing a pull request and calling their sprint done?
Or is it more like competent mechanical grunt work (e.g., "make a customer contact app with a Web form with these fields, which adds a row to the database"), that was always grunt work, and it's pretty hard to mess up, and nothing that normally the person you assigned it to would understand further anyway by doing it themself?
random example, may or may not come from a real situation that just happened:
- other team opens jira ticket requesting a new type of encabulator
- random guy who doesn't know anything of how the current encabulator system works picks up the ticket for whatever reason
- 10 minutes later opens a 2000 line vibe coded pr with the new encabulator type and plenty of unit tests
- assigns ticket to the person who designed the current encabulator system for review
- encabulator system designer finds out about this ticket for the first time this way
- "wait wtf what is this? why are we doing any of this? the current system is completely generic and it takes arbitrary parameters?"
- waste an hour talking to the guy and to the team that requested this
- explain they could just use the parameters, close pr and ticket
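For the sake of the punchline, a hypothetical sketch of what "the current system is completely generic and it takes arbitrary parameters" means; every name here is invented:

```python
from dataclasses import dataclass

@dataclass
class Encabulator:
    """A generic encabulator configured entirely by parameters."""
    bearing_count: int
    flux: float

    def describe(self) -> str:
        return f"encabulator(bearings={self.bearing_count}, flux={self.flux})"

# The "new encabulator type" from the ticket is just another
# parameterization, not a 2000-line vibe-coded subclass:
turbo = Encabulator(bearing_count=6, flux=2.5)
print(turbo.describe())
```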
> Does it end up like having colleagues who aren't doing or understanding or learning from their own work, and are working like they offshored their job to an overnight team of juniors, and then just try to patch up the poor quality before doing a pull request and calling their sprint done?
> working like they offshored their job to an overnight team of juniors
Reviewing a co-worker of 13 years now gives me the exact same unpleasant feeling as opening an MR from a junior where I know it's going to be garbage and I'm tired of struggling to let them down easy.
Had our junior dev working on a case actually run the code in the merge request and looked at the result, he would have seen it was obviously broken and unusable. Not even a wrong value; it spewed a bunch of HTML into the page.
Exactly this, if you're babysitting the AI, you are, by definition, _not_ vibe coding. Vibe coding means not reading the resulting code, and accepting that things will break down completely in four or five iterations.
The worst is when I have to babysit someone else's AI. It's so frustrating to get tagged to review a PR, open it up, and find 400 lines of obviously incorrect slop. Some try to excuse it by marking the PR [vibe], but what the hell, at least review your own goddamn AI code before asking me to look at it. Usually I want to insta-reject just for the disrespect of my time.
I've sort of gotten on the bandwagon. I initially used AI auto complete at my previous job and liked it a lot as a better intellisense, but wouldn't have used it to write PRs -- for one I tried it a couple times and it just wasn't good enough.
My new job pushes Cursor somewhat heavily and I gave it a try. It works pretty well for me, although it's definitely not something I would rely on. I like being able to ask it to do something, let it go off, and come back in a while to see how it did. For me, it makes it easier to start on changes by coming in to something that already exists (even if it might be wrong and bad); having something in a PR to start from is a nice mental hack.
If it did well enough on the initial attempt I'll try to stick with it and polish it up, but if it failed terribly I'll just write it by hand. Even when it fails, it's nice to see what it did as a jumping-off point. I do wish it were a bit less prone to "lying" (yada yada anthropomorphization, it's just tokens, etc.), though. Sometimes I'll ask it to do something in a particular way (e.g., add foo to bar and make sure you X, Y, and Z) and it'll conclude (rightly or not) that it can't do X, but then go on anyway and claim that it did X.
I wish it were easier to manage context switching in Cursor, though; as it is, juggling IDE windows between git repo clones is a pain (this is true for everything, so not unique to Cursor). I wish I could just keep things running on a git branch and come back to them without having to manage a bunch of different clones and windows. I think this is more of a pain point with Cursor, since in theory it would allow you to parallelize more tasks, but the tooling isn't really there.
edit: the starting point for this is probably worktrees. I remember reading about these a while ago and should probably check them out (heh), but that only solves the problem of having a bunch of clones sitting around; I'd still need to manage N windows.
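For reference, the worktree flow looks roughly like this (paths and branch names are illustrative); each worktree is a separate checkout backed by the same object store, so there is no second clone:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q main-repo
cd main-repo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git branch feature-x
# Check the branch out into a sibling directory instead of a second clone
git worktree add ../feature-x-wt feature-x
git worktree list   # shows both checkouts of the one repository
```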
Everyone suggesting maintaining an agent.md file hasn't met the majority of their coworkers, who refuse to document anything or read documentation. Perhaps train a model that can generate these agent files for them?
IMO the best possible outcome of this AI hype is better documentation.
It's well established that (agentic) LLMs work better and more efficiently when they have the proper context for the project (an AGENT(S).md), the documentation is accessible and up to date (a docs/ directory with proper, current markdown), and the tooling is smooth (git hooks that prevent secrets from being pushed, enforced style checking and linting).
...surprise! All of this makes things easier for actual humans too!
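The "git hooks that prevent secrets from being pushed" part can be sketched as a pre-commit hook; the patterns below are deliberately crude, and a real setup would use a dedicated scanner (gitleaks, detect-secrets, etc.):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init

# Install a hook that scans staged diffs for obvious secret patterns
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached -U0 | grep -qE 'AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY'; then
  echo "Refusing to commit: possible secret in staged changes." >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

# Demo: staging an AWS-style key gets the commit rejected
echo 'aws_key = "AKIAABCDEFGHIJKLMNOP"' > config.py
git add config.py
if git -c user.email=a@b -c user.name=a commit -q -m leak; then
  echo "commit went through (hook missed it)"
else
  echo "commit blocked"
fi
```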
What I’ve seen also happen is senior devs suddenly starting to put out garbage code and PRs. One senior dev in our project has become a menace and the quality of his work has dramatically dropped.
One improvement is to start using an AGENTS file; there are many example collections to crib from.
An advertisement, where the protagonists weigh the pros and cons and then come down on the side of "paying the innovation tax".
Fastly profits from "AI" just like Cloudflare (or should we say "Claudeflare").
This selection of developers does not seem representative at all. The new strategy is to acknowledge "AI" weaknesses but still adamantly claim that it is inevitable.
Well, you're now having to 'coach' an AI into doing what you want rather than a junior employee.
And yes, you need to develop your skills in asking for what you want, whilst still having to review the outputs similar to a junior employee. It just doesn't really learn or get better based on your feedback.
> This reminds me of web3, where almost all projects were just web3 infrastructure or services

I'm having lots more hope for AI, though.
It's just really hard to convert requirements to English when there are a bunch of land mines in there that you know how to avoid.
1. Replace junior developers with AI, reducing costs today.
2. Wish and hope that senior developers never retire in the future.
3. ?
2. ?
3. Profit!
https://www.youtube.com/watch?v=tO5sxLapAts
> Does it end up like having colleagues who aren't doing or understanding or learning from their own work

yes
AI can be a helpful assistant, but it is nowhere near ready to be let loose when the results matter.
Dunno how helpful that is, but they shouldn’t have to write it from scratch.
The repo that I personally use, thanks to Pontus Abrahamsson, using TypeScript:

https://github.com/pontusab/directories

If you prefer AGENTS.md files using markdown, I've extracted them into my own repo:

https://github.com/SixArm/ai-agents-tips
Here I put Claude Code in a while loop and had it clone itself: https://www.reddit.com/r/ClaudeCode/s/TY3SNp7eI9
After a week it was ready; you just need a good prompt.
People who say it cannot create anything beyond simple things are wrong. In my experience you can create anything, provided your plan is good.
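The loop pattern itself can be sketched generically; in real use the agent command would be something like `claude -p "fix the failing tests"`, but here both commands are parameters so the loop can be demonstrated with stand-ins:

```shell
# Re-run an "agent" command until a "tests" command succeeds, with a cap.
loop_until_green() {
  test_cmd=$1; agent_cmd=$2; max=$3; i=0
  until sh -c "$test_cmd"; do
    i=$((i + 1))
    if [ "$i" -gt "$max" ]; then echo "giving up after $max runs"; return 1; fi
    sh -c "$agent_cmd"
  done
  echo "green after $i agent runs"
}

# Demo with stand-ins: the "agent" appends a line each run; the "tests"
# pass once three lines exist.
rm -f progress.txt
loop_until_green '[ -f progress.txt ] && [ "$(grep -c step progress.txt)" -ge 3 ]' \
                 'echo step >> progress.txt' 10
```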