wiremine|5 months ago
I also strongly agree with Lamport, but I'm curious why you don't think AI can help in the "theory building" process, both for the original team and for a team taking over a project? I.e., understanding a code base, the algorithms, etc.? I agree this doesn't replace all the knowledge, but it can bridge a gap.
bunderbunder|5 months ago
In other words, they can help you identify what fairly isolated pieces of code are doing. That's helpful, but it's also the single easiest part of understanding legacy code. The real challenges are things like identifying and mapping out any instances of temporal coupling, understanding implicit business rules, and inferring undocumented contracts and invariants. And LLM coding assistants are still pretty shit at those tasks.
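Temporal coupling, for example, is an ordering constraint that no single function's signature reveals, which is why pattern-matching on isolated snippets misses it. A toy illustration (not from the thread; the `Connection` class and its methods are invented for this sketch):

```python
# Toy illustration of temporal coupling: send() silently depends on
# open() having been called first. Nothing in send()'s signature
# advertises the required call order -- it is an implicit contract
# that only surfaces at runtime, often far from the cause.
class Connection:
    def __init__(self):
        self._socket = None

    def open(self):
        self._socket = object()  # stand-in for a real socket handle

    def send(self, data):
        if self._socket is None:
            # The undocumented invariant, enforced only at runtime.
            raise RuntimeError("open() must be called before send()")
        return len(data)

conn = Connection()
conn.open()          # omit this line and send() fails mysteriously
conn.send(b"hello")
```

Mapping out every such ordering constraint across a large legacy codebase is exactly the kind of whole-system inference that snippet-level explanation doesn't give you.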
manishsharan|5 months ago
You could paste your entire repo into Gemini and it could map your forest and also identify the "trees".
Assuming your codebase is smaller than Gemini's context window. Sometimes it makes sense to upload a package's code into Gemini and have it summarize and identify the key ideas and functions, then repeat this for every package in the repository and combine the results. It sounds tedious, but a rather small Python program does this for me.
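The per-package loop described above can be sketched as follows. This is a minimal sketch, not the commenter's actual program: `collect_packages` and `summarize` are invented names, and the `summarize` body is a placeholder where you would call your LLM client (e.g. the Gemini SDK):

```python
# Sketch: group a repo's .py files by package, summarize each package,
# then combine the per-package summaries into one report.
from pathlib import Path

def collect_packages(repo_root):
    """Group .py files by their containing directory ("package")."""
    packages = {}
    for path in Path(repo_root).rglob("*.py"):
        pkg = str(path.parent.relative_to(repo_root))
        packages.setdefault(pkg, []).append(path)
    return packages

def summarize(pkg_name, sources):
    # Placeholder: send the concatenated sources to your model and
    # return its summary. Here we just report sizes instead.
    return f"{pkg_name}: {len(sources)} files, {sum(map(len, sources))} chars"

def summarize_repo(repo_root):
    results = []
    for pkg, files in sorted(collect_packages(repo_root).items()):
        sources = [f.read_text(errors="ignore") for f in files]
        results.append(summarize(pkg, sources))
    return "\n".join(results)  # combine the per-package summaries
```

The chunk-per-package shape keeps each request well under the context window and makes the combining step trivial, at the cost of losing cross-package context in any single summary.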
prmph|5 months ago
The client loved him, for obvious reasons, but it's hard to wrap my head around such an approach to software construction.
Another time, I almost took on a gig, but when I took one look at the code I was supposed to take over, I bailed. Probably a decade would still not be sufficient for untangling and cleaning up the code.
True vibe coding is the worst thing. It may be suitable for one-off shell scripts and <100-line utilities and such; anything more than that and you are simply asking for trouble.
N70Phone|5 months ago
The big problem is that LLMs do not *understand* the code you tell them to "explain". They just take probabilistic guesses about both function and design.
Even if "that's how humans do it too", this is only the first part of building an understanding of the code. You still need to verify the guess.
There are a few limitations to using LLMs for such first-guessing. In humans, the built-up understanding feeds back into the guessing: as you understand the codebase more, you can intuit function and design better, and you start to recognize its patterns and conventions. The LLM will always guess from zero understanding, relying only on averaged-out training data.
A related effect is the one bunderbunder points out in their reply: while LLMs are good at identifying algorithms (mere pattern recognition), they are exceptionally bad at modelling the surrounding environment the program was written in and the high-level goals it was meant to accomplish, especially when that information lives outside the code. A human can run a git-blame and ask what team the original author was on; an LLM cannot and will not.
This makes them less useful for the task, especially in any case where you intend to write new code. Sure, it's great that the LLM can give basic explanations of a programming language or framework you don't know, but if you're going to be writing code in it, you'd be better off taking the opportunity to learn it.
wiremine|5 months ago
To clarify my question: based on my experience (I'm a VP for a software department), LLMs can be useful in helping a team build a theory. They aren't, in and of themselves, enough to build that theory: that requires hands-on practice. But they seem to greatly accelerate the process.
panarky|5 months ago
But people still think and reason just fine, now at a higher level that gives them greater power and leverage.
Do you feel like you're missing something when you cook for yourself but you didn't plant and harvest the vegetables, raise and butcher the protein, forge the oven, or generate the gas or electricity that heats it?
You also didn’t write the CPU microcode or the compiler that turns your code into machine language.
When you cook or code, you're already operating on top of a very tall stack of abstractions.