the_mitsuhiko|6 days ago
> The challenge is to develop new personal and organizational habits that respond to the affordances and opportunities of agentic engineering.
I don't think it's the habits that need to change, it's everything. From how accountability works, to how code needs to be structured, to how languages should work. If we want to keep shipping at this speed, no stone can be left unturned.
[1]: https://lucumr.pocoo.org/2026/2/13/the-final-bottleneck/
fmbb|6 days ago
If agentic AI is a good idea and if it increases productivity, we should expect to see some startup blowing everyone out of the water. I think we should be seeing it now if it makes you, say, ten times more productive. A lot of startups have had a year of agentic AI now to help them beat their competitors.
ej88|6 days ago
Imo the wave of top-down 'AI mandates' from incumbent companies is a direct result of the competitive pressure, although it probably won't work as well as the execs think it will.
That being said, even Dario claims only a 5-20% speedup from coding agents. 10x productivity only exists in microcosm prototypes, or for someone so unskilled that one-shotting a localhost web app really is a 10x for them.
simonw|6 days ago
(Whether you think OpenClaw is good software is kind of beside the point.)
candiddevmike|6 days ago
I don't see a bunch of small agents in the future, instead just one per device or user. Maybe there will be a fleeting moment for GUI/local apps to tie into some local, OS LLM library (or some kind of WebLLM spec) to leverage this local agent in your app.
coldtea|6 days ago
Why? Why do we need to "write code so much faster and quicker" to the point that we saturate systems downstream? I understand that we can, but just because we can doesn't mean we should.
falcor84|6 days ago
But that's the point of TFA, no? Now that writing code is no longer the bottleneck, the upstream and downstream processes have become the new bottlenecks, and we need to figure out how to widen them.
As I see it, the end goal for all of this is generating software at the speed of thought, or at least at the speed of speech. I want the digital butler to whom I could just say "I'm not happy with the way things happened today, please change it so that from here on, it'll be like x" - and it'll just respond with "As you wish", and I'll have confidence that it knows me well enough and is capable enough to have actually implemented the best possible interpretation of what I asked for, and that the few miscommunications that do occur would be easy to fix.
We're obviously not close to that yet, but why shouldn't we build towards it?
alexhans|5 days ago
I'm very focused on their minimalistic building experience as a way to make me and other traditional developers not the bottleneck, and to empower them end to end.
I think AI evals [1] are a big part of that route, and I hope that different disciplines can finally have probable product design stories [2] instead of there being big gaps of understanding between them.
[1] https://alexhans.github.io/posts/series/evals/measure-first-...
[2] https://ai-evals.io
username223|5 days ago
Do we? Spewing features like explosive diarrhea is not something I want.
salty_frog|5 days ago
The linked blog post draws comparisons to the industrial revolution; however, in the industrial revolution the speed-up drove innovation upstream, not downstream.
The first innovation was mechanical weaving. The bottleneck was then yarn. This was automated so the bottleneck became cotton production, which was then mechanised.
So perhaps the real bottleneck of being able to write code faster is upstream.
Can requirements of what to build keep up with pace to deliver it?
simonw|6 days ago
I'm not ready to write about how radically, though, because I don't know myself!
SignalStackDev|6 days ago
The thing I'd add from running agents in actual production (not demos, but workflows executing unattended for weeks): the hard part isn't code volume or token cost. It's state continuity.
Agents hallucinate their own history. Past ~50-60 turns in a long-running loop, even with large context windows, they start underweighting earlier information and re-solving already-solved problems. File-based memory with explicit retrieval ends up being more reliable than in-context stuffing - less elegant but more predictable across longer runs.
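The file-based approach can be sketched in a few lines. This is a minimal illustration of the idea, not any real framework's API; `MemoryStore`, `remember`, and `recall` are hypothetical names:

```python
# Sketch: append-only note file with explicit keyword retrieval.
# Old facts never get diluted by a growing context window, because
# the agent pulls in only what matches the current query.
import json
import tempfile
from pathlib import Path

class MemoryStore:
    def __init__(self, path):
        self.path = Path(path)

    def remember(self, topic, note):
        # Append one JSON record per fact the agent should retain.
        with self.path.open("a") as f:
            f.write(json.dumps({"topic": topic, "note": note}) + "\n")

    def recall(self, query):
        # Explicit retrieval: substring match on topic, nothing fuzzy.
        if not self.path.exists():
            return []
        hits = []
        for line in self.path.read_text().splitlines():
            entry = json.loads(line)
            if query.lower() in entry["topic"].lower():
                hits.append(entry["note"])
        return hits

store = MemoryStore(Path(tempfile.mkdtemp()) / "agent_memory.jsonl")
store.remember("db schema", "users table was migrated to UUID keys")
# Before each turn, retrieve only the relevant notes instead of
# stuffing the whole history into the prompt:
relevant = store.recall("db")
```

The predictability comes from the retrieval being a plain lookup: the same query returns the same notes on turn 5 and turn 500.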
Second hard part: failure isolation. If an agent workflow errors at step 7 of 12, you want to resume from step 6, not restart from zero. Most frameworks treat this as an afterthought. Checkpoint-and-resume with idempotent steps is dramatically more operationally stable.
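A checkpoint-and-resume loop is simple if steps are idempotent. The sketch below is illustrative (the step functions and `run_workflow` are made up for the example), but it shows the mechanic: record the index of the last completed step, and skip up to it on restart.

```python
# Sketch: resumable workflow runner. Each step must be idempotent,
# since a step that succeeds but crashes before the checkpoint write
# will be re-run on resume.
import json
import tempfile
from pathlib import Path

def run_workflow(steps, checkpoint_file):
    ckpt = Path(checkpoint_file)
    done = json.loads(ckpt.read_text()) if ckpt.exists() else 0
    for i, step in enumerate(steps):
        if i < done:
            continue            # completed in a previous run; skip
        step()                  # if this raises, the checkpoint still
        ckpt.write_text(json.dumps(i + 1))  # points at the last success
    ckpt.unlink()               # finished; clear the checkpoint

# Usage: step 3 fails on its first attempt, then the rerun resumes
# from the failed step instead of restarting from zero.
log = []
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("transient failure")
    log.append("flaky")

steps = [lambda: log.append(0), lambda: log.append(1), flaky]
ckpt_path = Path(tempfile.mkdtemp()) / "checkpoint.json"
try:
    run_workflow(steps, ckpt_path)
except RuntimeError:
    pass
run_workflow(steps, ckpt_path)  # steps 0 and 1 are not re-executed
```

After both runs, `log` contains each step's effect exactly once, which is the "resume from step 6, not zero" behavior described above.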
Agree it's not just habits - the infrastructure mental model has to change too. You're not writing programs so much as engineering reliability scaffolding around code that gets regenerated anyway.