jdlshore | 11 days ago
(It used a clever and rigorous technique for measuring productivity differences, BTW, for anyone as skeptical of productivity measures as I am.)
keeda|11 days ago
However, because these threads always go the same way whenever I post this, I'll link to a previous thread in hopes of preempting the same comments and advancing the discussion! https://news.ycombinator.com/item?id=46559254
Also, DX (whose CTO was giving the presentation) actually collects telemetry-based metrics (PRs, etc.) as well: https://getdx.com/uploads/ai-measurement-framework.pdf
It's not clear from TFA if these savings are self-reported or from DX metrics.
samuelknight|11 days ago
That info is from mid-2025 and covers models released in October 2024 and February 2025. It predates tools like Claude Code and Codex, and Lovable was at a third of its current ARR. The findings might still hold, but we desperately need newer data.
lunar_mycroft|11 days ago
(Also, Anthropic released Claude Code in February of 2025, which was near the start of the period the study ran.)
monkaiju|11 days ago
JohnBooty|10 days ago
In industry, I think we spend ~10% of our time writing code and ~90% maintaining it and building upon it.
The real metric is not "how long did that PR take" but "how much additional work will this PR create or save in the long run?" That is, did it create tech debt, or did it actually save us a bunch of effort down the road?
My experience with ChatGPT these last few years is that, used "conscientiously," it lets me ship much higher-quality code because it has been very good at finding edge cases and suggesting optimizations. Viewed over the long haul, I'm quite certain it has been at least a 2x productivity gain, possibly much more, because all the edge cases and perf issues it solved in the initial PR represent hours of future work that will never have to be done.
It is of course possible to use AI coding assistants in other ways, producing AI slop that passes tests but is poorly structured and poorly understood.
williamcotton|11 days ago
lunar_mycroft|11 days ago
[0] https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
jdlshore|11 days ago