edanm|1 month ago
Here is a sample of (IMO) extremely talented and well-known developers who have expressed that agentic coding helps them: Antirez (creator of Reddit), DHH (creator of RoR), Linus (creator of Linux), Steve Yegge, Simon Willison. This is just randomly off the top of my head; you can find many more. None of them claim that agentic coding does a year's worth of work for them in an hour, of course.
In addition, pretty much every developer I know has used some form of GenAI or agentic coding over the last year, and they all say it gives them some form of speed-up, for most of them a significant one. The "AI doesn't help me" crowd is, as far as I can tell, an online-only phenomenon. In real life, everyone has used it to at least some degree and finds it very valuable.
trashb|1 month ago
I wonder if they have measured their results? I believe that the perceived speed-up of AI coding is often different from reality. The following paper backs this idea: https://arxiv.org/abs/2507.09089 . Can you provide data that contradicts this view, based on these (celebrity) developers or otherwise?
embedding-shape|1 month ago
My naive approach would be to just implement it twice, once together with an LLM and once without, but that has obvious flaws, most obviously that the order in which you do them impacts the results too much.
So how would I actually go about and be able to provide data for this?
edanm|1 month ago
This is a notoriously difficult thing to measure in a study. More relevantly though, IMO, it's not a small effect that might be difficult to notice - it's a huge, huge speedup.
How many developers have measured whether they are faster when programming in Python vs assembly? I doubt many have. And I doubt many have chosen Python over assembly because of any study that backs it up. But it's also not exactly a subtle difference - I'm fairly sure 99% of people will say that, in practice, it's obvious that Python is faster for programming than assembly.
I talked literally yesterday to a colleague who's a great senior dev, and he made a demo in an hour and a half that he says would've taken him two weeks without AI. This isn't a subtle, hard-to-measure difference. Of course, this is an area where AI coding shines (a new codebase for demo purposes) - but can we at least agree that for some things AI is clearly an order-of-magnitude speedup?
Adrig|1 month ago
As a designer I'm having a lot of success vibe-coding small use cases, like an alternative to Lovable for prototyping in my design system and sharing prototypes easily.
All the devs I work with use Cursor; one of them (front-end) told me most of his code is written by AI. In the real world, agentic coding is used massively.
margorczynski|1 month ago
The second part is something I think a lot about now after playing around with Claude Code, OpenCode, Antigravity and extrapolating where this is all going.
menaerus|1 month ago
Wild guess nr. 1: a large majority of software jobs will be complemented by (mostly replaced with) AI agents, reducing the need for as many people doing the same job.
Wild guess nr. 2: demand for creating software will increase, but demand for the software engineers creating that software will not follow the same multiplier.
Wild guess nr. 3: we will have the smallest teams ever, with only a few people on board, perhaps leading to more companies being instantiated than ever before.
Wild guess nr. 4: in the near future, the pool of software engineers as we know them today will be drastically downsized, and only those who can demonstrate that they bring substantial value over using the AI models will remain relevant.
Wild guess nr. 5: getting a job in software engineering will be harder than ever.
akoboldfrying|1 month ago
Though it is fun to imagine using Reddit as a key-value store :)
neomantra|1 month ago
https://github.com/ConAcademy/reddit-kv/blob/main/README.md
edanm|1 month ago
Thanks for the correction.
grayhatter|1 month ago
> Antirez
When I first read his recent article, I found the whole thing uncompelling: https://antirez.com/news/158 (don't buy into the anti-AI hype). But I gave it a second chance and re-read it. I'm going to have to resist going line by line, because I find some of it outright objectionable.
> Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career.
Setting aside the rhetorical/argumentative deficiencies, and the fact that this is just FUD (he next suggests that if you disagree, you should just keep trying it every few months, which suggests to me that even he knows it's BS): he writes that in the context of the ethical and moral objections he himself raises. So he's suggesting that the best way to advance in your career is to ignore the social and ethical concerns and just get on board?
Gross.
Individual careers aside, I'm not impressed by the correctness of the code emitted by AI and committed by most AI users. I'm unconvinced that AI will improve the industry, or its reputation as a whole.
But the topic is supposed to be specific examples of code, so let's do that. He mentions adding UTF-8 support to his toy terminal input project -> https://github.com/antirez/linenoise/commit/c12b66d25508bd70... It's a very useful feature to add, without a doubt! His library is better than it was before. But parsing UTF-8 - while very easy to implement carelessly or incompletely, i.e. very easy to trip over if you're not paying attention - is in its implementation specifics fairly described as a solved problem. It's been done so many times that, if you're willing to re-implement from another existing source, it wouldn't take very long to do this without AI. (And if you're not willing, why are you using AI? I'm ethically opposed to the laundered provenance of source material.) And it absolutely would take more time to verify that the AI's code is correct than it would if I had written it by hand. Everyone keeps telling me I have to ensure the AI hasn't made a mistake, so either I trust the vibes, or I'm still spending that time. Even Simon Willison agrees with me[1].
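To make the "easy to implement carelessly" point concrete: the usual traps are overlong encodings, stray continuation bytes, and surrogate code points. This is a minimal illustrative sketch in Python (not antirez's actual C implementation, and the function name is my own) of a strict decoder that rejects those cases:

```python
def decode_utf8(data: bytes) -> list[int]:
    """Decode UTF-8 bytes into code points, rejecting malformed sequences."""
    out = []
    i = 0
    while i < len(data):
        b = data[i]
        if b < 0x80:                      # 1-byte (ASCII)
            cp, n, lo = b, 0, 0
        elif 0xC2 <= b <= 0xDF:           # 2-byte lead (0xC0/0xC1 are always overlong)
            cp, n, lo = b & 0x1F, 1, 0x80
        elif 0xE0 <= b <= 0xEF:           # 3-byte lead
            cp, n, lo = b & 0x0F, 2, 0x800
        elif 0xF0 <= b <= 0xF4:           # 4-byte lead (0xF5+ exceeds U+10FFFF)
            cp, n, lo = b & 0x07, 3, 0x10000
        else:
            raise ValueError(f"invalid lead byte at offset {i}")
        for j in range(1, n + 1):
            # every continuation byte must match 10xxxxxx
            if i + j >= len(data) or data[i + j] & 0xC0 != 0x80:
                raise ValueError(f"bad continuation byte at offset {i + j}")
            cp = (cp << 6) | (data[i + j] & 0x3F)
        # reject overlong forms, UTF-16 surrogates, and out-of-range values
        if cp < lo or 0xD800 <= cp <= 0xDFFF or cp > 0x10FFFF:
            raise ValueError(f"overlong or out-of-range sequence at offset {i}")
        out.append(cp)
        i += n + 1
    return out
```

A careless decoder that skips the `lo` and lead-byte range checks still "works" on valid text, which is exactly why the bugs survive review - they only show up on hostile input like the overlong `b'\xc0\xaf'` encoding of `/`.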
> Simon Willison
Is another one suggested, so he's perfect to go next. I would normally exclude someone who's clearly best known as an AI influencer, but he's without a doubt an engineer too, so fair game. Especially given that he answered a similar question just recently: https://news.ycombinator.com/item?id=46582192 I've been searching for a counterpoint to my personal anti-AI hype, so I was eager to see what the experts are making... and it's all boilerplate. I don't mean to say there's nothing valuable or useful there. Only that the vast majority of the code in these repos is boilerplate that has no use out of context. The real value is just a few lines of code, something I believe would take only 30 minutes to write without AI for a project you were already working on. It'd take me a few hours to make any of this myself (assuming I'm even good enough to figure it out).
And I do admit, 10 minutes on BART vs 3-4 hours on a weekend is a very significant time delta. But also, I like writing code. So what was I really going to do with that time? Make shareholder value go up, no doubt!
> Linus Torvalds
I can't find a single source where he's an advocate for AI. I've seen the commit, and while some of the GitHub comments are gold, I wasn't able to draw any meaningful conclusions from the commit in isolation. Especially not when, the last I read about it, he used AI because he doesn't write Python. So I don't know what conclusions I can pull from this commit, other than that AI can emit code. I knew that.
I don't have enough context to comment on the opinions of Steve Yegge or his AI-generated output. I simply don't know enough, and after a quick search nothing other than "AI influencer" jumped out at me.
Beyond that, I try to care about who I give my time and attention to, and who I associate with, so this is the end of the list.
I contrast these examples with all the hype that's been proven, over and over, to be a miscommunication if I'm being charitable, or an outright lie if I'm not. I also think it's important to consider the incentives behind these "miscommunications" when evaluating how much good faith you assign them.
On top of that, there are the countless examples of AI confidently lying to me about something. Explaining my fundamental, concrete objection to being lied to would take another hour I shouldn't spend on an HN comment.
What specific examples of impressive things/projects/commits/code am I missing? What output makes all the downsides of AI a worthwhile trade-off?
> In addition, pretty much every developer I know has used some form of GenAI or agentic coding over the last year, and they all say it gives them some form of speed up
I remember reading that, when tested, people are not actually faster. Any source on this other than vibes?
[1]: https://simonwillison.net/2025/Dec/18/code-proven-to-work/