Yes. Agents are good at solving densely represented (embarrassingly solved) problems, and a surprising and disturbing number of the problems we have are, at least at the decomposed level, well represented. They can even compose them in new ways. But for the same reason they would be unable to derive general relativity, they cannot use insight to reformulate problems. I base this statement on my experience trying to get them to implement Flying Edges, a parallel isosurface extraction algorithm. It reformulates marching cubes, a serial algorithm that iterates over voxels, to work over edges instead. If they're not shown known-good code, models will try to implement marching cubes superficially shaped like Flying Edges.
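To make the reformulation concrete, here is a minimal sketch of the structural difference, not an implementation of either real algorithm. The cell-centric loop is a stand-in for marching cubes (the 256-entry case table is replaced by a simple inside-corner count), and the second function only sketches the edge-centric first pass of a Flying-Edges-style approach; all names and the toy classification are illustrative assumptions.

```python
import numpy as np

def marching_cubes_style(scalars, iso):
    """Serial, cell-centric sketch: visit every cell and classify its 8
    corners. Each corner value is re-read by up to 8 neighboring cells."""
    nz, ny, nx = scalars.shape
    intersected = 0
    for k in range(nz - 1):
        for j in range(ny - 1):
            for i in range(nx - 1):
                corners = scalars[k:k + 2, j:j + 2, i:i + 2]
                case = int(np.sum(corners >= iso))  # stand-in for the 256-entry case table
                if 0 < case < 8:                    # cell straddles the isovalue
                    intersected += 1
    return intersected

def flying_edges_style_pass1(scalars, iso):
    """Edge-centric sketch of a Flying-Edges-like first pass: classify each
    sample once, then test each axis-aligned x-edge exactly once. Rows are
    independent, so this pass is embarrassingly parallel."""
    inside = scalars >= iso                             # classify every sample once
    x_edge_cut = inside[:, :, :-1] != inside[:, :, 1:]  # does each x-edge cross iso?
    return x_edge_cut.sum(axis=2)                       # cut-edge count per (z, y) row
```

The point of the contrast: the first loop reasons per cell and redundantly re-reads shared corners; the second reasons per edge, touching each sample once and producing per-row counts that later (omitted) passes would use to allocate output and trim empty spans. A model that has only seen the first pattern tends to reproduce it even when asked for the second.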
You are still necessary to push the frontier forward. Though, given the way some models will catch themselves making a conceptual error and correct themselves in real time, we should be nervous.
I've had the same experience. I do a lot of automation of two engineering software packages through Python and Java APIs which are not terribly well documented, and existing discussion of them on the wider web is practically nonexistent.
They are completely, 100% useless, no matter what I do. Add on another layer of abstraction like "give me a function to calculate <engineering value>" and they get even worse. I had a small amount of luck getting them to refactor some really terrible code I wrote while under the gun, but they made tons of errors I had to go back and fix. Luckily I had a pretty comprehensive test suite by that point, so finding the mistakes wasn't too hard.
(I've tried all of the "just point them at the documentation" replies I'm sure are coming. It doesn't help)
If you regard a CS degree as vocational training to "code" then perhaps not - but I don't think that's really how people should be regarding a CS degree?
Most "CS" students don't work in aviation; statistically, the majority work on yet another SaaS CRUD app, a problem that has been solved millions of times already.
Depends on whether one wants to be a software engineer or a mere LLM operator.
To be fair to the parent poster, many people do seem to aspire only to be LLM operators, who will be a dime-a-dozen commodities accorded even less respect and pay than the average developer is today.
Computer science and coding are as related as physics and writing. If your thesis is that LLMs can replace all of science, then you have more faith in them than I do. If anything, the LLM accelerates computer science and frees it from the perception that it is coding.