top | item 46446664


sak84 | 2 months ago

I saw a tweet from Andrej Karpathy that's been sitting with me. He's never felt this behind as a programmer. I've been thinking about this through the marshmallow challenge, where kindergartners beat MBAs. The kids just build and iterate. Most of us are the MBAs right now with AI tools.


PaulHoule | 2 months ago

Some of it is that the radical acceleration in productivity isn't real. See Brooks's "No Silver Bullet". You certainly have those moments where you describe a bug, ask if the model understands it, and get an answer in two minutes, but when you consider everything that goes into the "definition of done", 10x just isn't realistic.

My take at work is that I'm not running much faster, but I am getting better quality. Some of it is my attitude, but with AI I am more likely to go back and forth and ask things until I really understand what is going on, write tests even when it's a hassle, ask the IDE questions about the dependencies I use so I can really understand how they work, try two or three possible solutions and pick the best, etc.

When it comes to things like that memory leak, it's very hit and miss. If you give it a try it might solve it, it might not. It's worth trying, but you can't count on something like that working all the time.

sak84 | 2 months ago

I think you're right that 10x isn't realistic for most work, and Brooks is still mostly correct. The "No Silver Bullet" argument holds because most of software development isn't typing code faster.

But you're describing exactly the shift that matters. You're not running faster, you're getting better quality. You're more likely to understand dependencies, write tests, try multiple solutions. That's the actual productivity gain.

The marshmallow challenge point isn't about whether AI makes you 10x faster. It's about the mindset shift. The MBAs didn't lose because they were slower. They lost because they spent their time planning the perfect approach instead of iterating.

The memory leak example from Boris Cherny isn't about AI being reliable. It's about his coworker not having the baggage of "this is how you debug memory leaks." They just tried asking Claude first. Sometimes it works, sometimes it doesn't. But the willingness to try it first is what creates the gap.