top | item 47080356

hintymad | 10 days ago

In the latest interview with Claude Code's author (https://podcasts.apple.com/us/podcast/lennys-podcast-product...), Boris said that writing code is a solved problem. This brings me to a hypothetical question: what if engineers stopped contributing to open source? Would AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?

e40|10 days ago

> Boris said that writing code is a solved problem

That's just so dumb to say. I don't think we can trust anything that comes out of the mouths of the authors of these tools. They have a conflict of interest, and conflicts of interest are such a huge problem in society today.

shimman|10 days ago

There are bloggers who can't even acknowledge that they're only invited to big tech events because they'll glaze the hosts up to high heaven.

Reminds me of that famous exchange, by noted friend of Jeffrey Epstein, Noam Chomsky: "I’m not saying you’re self-censoring. I’m sure you believe everything you say. But what I’m saying is if you believed something different you wouldn’t be sitting where you’re sitting."

timacles|10 days ago

It's all basically a sensationalist take to shock you and get attention.

chrisjj|9 days ago

> That's just so dumb to say

Depends. It's true of dumb code and dumb coders. Another reason why, yes, smart people should not trust it.

fhub|10 days ago

He is likely working on a very clean codebase where all the context is already reachable or indexed. There are probably strong feedback loops via tests. Some areas I contribute to have these characteristics, and the experience is very similar to his. But in areas where they don’t exist, writing code isn’t a solved problem until you can restructure the codebase to be more friendly to agents.

Even with full context, writing CSS in a project where vanilla CSS is scattered around and wasn’t well thought out originally is challenging. Coding agents struggle there too, just not as much as humans, even with feedback loops through browser automation.

pseudosavant|10 days ago

It's funny that "restructure the codebase to be more friendly to agents" aligns really well with what we were "supposed" to have been doing already, but many teams slack on: quality tests that are easy to run, and great documentation. Context and verifiability.

The easier your codebase is to hack on for a human, the easier it is for an LLM generally.

swordsith|10 days ago

Truth. I've had a much easier time grappling with codebases I keep clean and compartmentalized with AI; over-stuffing the context is one of the main killers of its output quality.

michaelbuckbee|9 days ago

On a few long-neglected projects I've picked up in the past year, AI has been tremendous at rapidly shipping quality-of-life dev improvements: much better test suites, documentation of existing behavior, upgrades to newer framework versions, etc.

I've really found it's a flywheel once you get going.

jimbokun|9 days ago

All those people who thought clean, well-architected code wasn't important… now, with LLMs modifying code, it's even more important.

chrisjj|9 days ago

> He is likely working on

... a laundry list phone app.

yourapostasy|10 days ago

Even as the field evolves, the phoning home telemetry of closed models creates a centralized intelligence monopoly. If open source atrophies, we lose the public square of architectural and design reasoning, the decision graph that is often just as important as the code. The labs won't just pick up new patterns; they will define them, effectively becoming the high priests of a new closed-loop ecosystem.

However, the risk isn't just a loss of "truth," but model collapse. Without the divergent, creative, and often weird contributions of open-source humans, AI risks stagnating into a linear combination of its own previous outputs. In the long run, killing the commons doesn't just make the labs powerful. It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.

Humans will likely continue to drive consensus building around standards. The governance and reliability benefits of open source should grow in value in an AI-codes-it-first world.

hintymad|10 days ago

> It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.

My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns of the future. For instance, an implementation of low-level networking code could be a combination of patterns from ZeroMQ. The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just have them command the AI instead.

layer8|10 days ago

I think you mean software engineering, not computer science. And no, I don’t think there is reason for software engineering (and certainly not for computer science) to be plateauing. Unless we let it plateau, which I don’t think we will. Also, writing code isn’t a solved problem, whatever that’s supposed to mean. Furthermore, since the patterns we use often aren’t orthogonal, it’s certainly not a linear combination.

hintymad|10 days ago

I assume that new business scenarios will drive new workflows, which will require new software engineering work. In the meantime, I assume that computer science will drive paradigm shifts, which will in turn drive truly different software engineering practices. If we don't have advances in algorithms, systems, etc., I'd assume that people can slowly abstract away all the hard parts, enabling AI to do most of our jobs.

biztos|10 days ago

Or does the field become plateaued because engineers treat "writing code" as a "solved problem?"

We could argue that writing poetry is a solved problem in much the same way, and while I don't think we especially need 50,000 people writing poems at Google, we do still need poets.

hintymad|10 days ago

> while I don't think we especially need 50,000 people writing poems at Google, we do still need poets

I'd assume that an implied concern of most engineers is how many software engineers the world will need in the future. If it's a situation like the world needing poets, then the field is only for a lucky few, and most people would be out of a job.

stephencoyner|10 days ago

I saw Boris give a live demo today. He had a swarm of Claude agents one-shot the most-upvoted open issue on Excalidraw while he explained Claude Code for about 20 minutes.

No lines of code written by him at all. The agent used Claude for Chrome to test the fix in front of us all, and it worked. I think he may be right, or close to it.

mattmanser|9 days ago

Did he pick Excalidraw as the project to work on, or did the audience?

It's easy to be conned if you're not looking for the sleight of hand. You need to start channelling your inner Randi whenever AI demos are done, there's a lot of money at stake and a lot of money to prep a polished show.

To be honest, even if the audience "picked" that project, it could have been a plant shouting out the project.

I'm not saying they prepped the answer, I'm saying they prepped picking a project it could definitely work on. An AI solvable problem.

groby_b|10 days ago

That is the same team whose app uses React for a TUI, uses gigabytes of memory for a scrollback buffer, and had text scrolling so slow you could get a coffee in between.

And that then had the gall to claim writing a TUI is as hard as a video game. (It clearly must be harder, given that most dev consoles or text interfaces in video games consistently use less than ~5% CPU, which at that point was completely out of reach for CC)

He works for a company that crowed about an AI-generated C compiler that was so overfitted, it couldn't compile "hello world"

So if he tells me that "software engineering is solved", I take that with rather large grains of salt. It is far from solved. I say that as somebody who's extremely positive on AI usefulness. I see massive acceleration for the things I do with AI. But I also know where I need to override/steer/step in.

The constant hypefest is just vomit inducing.

mccoyb|10 days ago

I wanted to write the same comment. These people are fucking hucksters. Don’t listen to their words, look at their software … says all you need to know.

GeoAtreides|10 days ago

>writing code is a solved problem

sure is news to the models tripping over my thousands-of-LOC jQuery legacy app...

nake89|9 days ago

Could the LLM rewrite it from scratch?

gip|10 days ago

My prediction: soon (i.e. within a few years) agents will be the ones doing the exploration and building better ways to write code, building frameworks, ... replacing open source. That being said, software engineers will still be in the loop. But there will be far fewer of them.

Just to add: this is only the prediction of someone who has a decent amount of information, not an expert or insider

overgard|10 days ago

I really doubt it. So far these things are good at remixing old ideas, not coming up with new ones.

giancarlostoro|10 days ago

There are so many timeless books on how to write software: design patterns, lessons learned from production issues. I don't think AI will stop being used for open source. In fact, with the increasing number of projects adjusting their contributor policies to account for AI, I would argue that what we'll see is both: people who love to hand-craft their own code, and people who use AI to build their own open source tooling and solutions.

We will also see an explosion in the need for specs. If you give a model a well-defined spec, it will follow it. I get better results the more specific I get about how I want things built and which libraries I want used.
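To illustrate what "specific" can mean in practice, here is a hypothetical spec fragment of the kind an agent might be handed; every name, route, and library choice below is invented for the example:

```markdown
## Task: password-reset request endpoint
- Stack: TypeScript, Express 4, Prisma (assumed for this example)
- Route: POST /auth/reset-request with body { email: string }
- Always return 202, even for unknown emails (no account enumeration)
- Token: 32 random bytes, hex-encoded, expiring after 30 minutes
- Tests: cover the happy path, an unknown email, and an expired token
```

The point is less the details than the constraint surface: the narrower the model's choices, the closer the output lands to what you wanted.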

cheema33|10 days ago

> is the field of computer science plateaued to the point that most of what we do is linear combination of well established patterns?

Computer science is different from writing business software to solve business problems. I think Boris was talking about the second, not the first. And I personally think he is mostly correct, at least for my organization. It is very rare for us to write any code by hand anymore. Once you have a solid testing harness and a peer-review system run by multiple different LLMs, you are in pretty good shape for agentic software development. Not everybody has these bits figured out. They stumble around and then blame the tools for their failures.
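A minimal sketch of the test-gated loop described above. Everything here is hypothetical: `call_model` is a stub standing in for a real LLM API call, and the "harness" is a toy assertion, not a real test suite:

```python
# Hypothetical sketch of a test-gated agent loop. `call_model` is a stub
# standing in for a real LLM API call; the harness is a toy example.

def call_model(prompt: str) -> str:
    # Stub: a real implementation would call a model here and return code.
    return "def add(a, b):\n    return a + b\n"

def run_tests(code: str) -> list:
    # Minimal harness: exec the candidate code and assert on its behavior.
    ns = {}
    failures = []
    try:
        exec(code, ns)
        if ns["add"](2, 3) != 5:
            failures.append("add(2, 3) != 5")
    except Exception as e:
        failures.append(repr(e))
    return failures

def agent_loop(task: str, max_rounds: int = 3):
    # Generate, test, and feed failures back until the harness is green.
    prompt = task
    for _ in range(max_rounds):
        code = call_model(prompt)
        failures = run_tests(code)
        if not failures:
            return code  # accept only when the tests pass
        prompt = task + "\nPrevious attempt failed: " + ", ".join(failures)
    return None
```

The key design point is that the model never gets to declare success; only the harness does.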

paulryanrogers|10 days ago

> Not everybody has these bits figured out. They stumble around and then blame the tools for their failures.

Possible. Yet that's a pretty broad brush. It could also be that some businesses are more heavily represented in the training set. Or some combo of all of the above.

stuaxo|10 days ago

"Writing code is a solved problem" disagree.

Yes, there are common parts to everything we do. At the same time, I've been doing this for 25 years, and most of the projects have had some new part to them.

danielbln|9 days ago

Novel problems are usually a composite of simpler and/or older problems that have been solved before. Decomposition means you can rip most novel problems apart and solve the chunks. LLMs do just fine with that.

ochronus|9 days ago

The creator of the hammer says driving nails into wood planks is a solved problem. Carpenters are now obsolete.

jacquesm|9 days ago

Prediction: open source will stop.

Sure, people did it for the fun and the credits, but the fun quickly goes out of it when the credits go to the IP laundromat and the fun is had by the people ripping off your code. Why would anybody contribute their works for free in an environment like that?

pu_pe|9 days ago

I believe the exact opposite. We will see open source contributions skyrocket now. There are a ton of people who want to help and share their work, but technical ability was a major filter. If the barrier to entry is now lowered, expect to see many more people sharing stuff.

orangecoffee|9 days ago

Many did it for liberty - a philosophical position on freedom in software. They're supercharged with AI.

therealpygon|10 days ago

I don't believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI. I also have to agree: lately I find myself laughing more and more at just how many resources we waste creating exactly the same things over and over in software. I don't mean generally, like languages; I mean specifically. How many trillions of times has a form with username and password fields been designed, developed, had meetings held over it, been tested, debugged, transmitted, and processed, only to ultimately be rewritten months later?

I wonder what all we might build instead, if all that time could be saved.

hintymad|10 days ago

> I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI.

Yeah, hence my question can only be hypothetical.

> I wonder what all we might build instead, if all that time could be saved

If we subscribe to the broken-window fallacy from economics, then the investment in such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing about a new chapter of the tech revolution. Or so I hope.

sensanaty|9 days ago

> Boris said that writing code is a solved problem.

No way, the person selling a tool that writes code says said tool can now write code? Color me shocked at this revelation.

Let's check in on Claude Code's open issues for a sec and see how "solved" all of them are. Or my favorite: their shitty React TUI that pegs modern CPUs and consumes all the memory on the system is apparently harder to get right than video games! Truly the masters of software engineering, these Anthropic folks.

overgard|10 days ago

Even if you like them, I don't think there's any reason to believe what people from these companies say. They have every reason to exaggerate or outright lie, and the hype cycle moves so quickly that there are zero consequences for doing so.