berdon | 2 months ago
The massive increase in slop code and loss of innovation in code will establish an unavoidable limit on LLMs.
cmrdporcupine|2 months ago
I mean, in general not only do they have all of the crappy PHP code in existence in their corpus, but they also have Principia Mathematica and probably The Art of Computer Programming. And it has become increasingly clear to me that the models have bridged the gap from "autocomplete based on code I've seen" to some sort of distillation of first-order logic, based on just reading a lot of language, along with some fuzzy attempt at reasoning that came out of it.
Plus the agentic tools driving them are increasingly ruthless at wringing out good results.
That said -- I think there is a natural cap on what they can achieve as pure coding machines, and they're pretty much there, IMHO. The results are usually exactly what I asked for, almost 100% of the time, and they tend to "just do the right thing."
I think the next step is actually to make it scale and make it profitable, but also to fix the tools -- they're not what I want as an engineer. They try to take over, they don't put me in control, and they create a very difficult review and maintenance problem. Not because they make bad code, but because they make code that nobody feels responsible for.
9dev|2 months ago
Also, training data isn't just crawled text from the internet anymore; it's also sourced from the interactions of millions of developers with coding agents, manually provided sample sessions, deliberately generated code, and more. There is a massive amount of money and research involved here, so that's another bet I wouldn't be willing to make.