(no title)
mkvoid|1 year ago
This is fascinating. The idea of leveling off in the learning curve is one that I hadn't considered before, although with hindsight it seems obvious. Based on your recollection (and without revealing too many personal details), do you recall any specific areas that caused the struggle? For example, was it a lack of understanding of the program architecture? Was it an issue of not understanding data structures? (or whatever) Thanks for your comment; it opened up a new set of questions for me.
sho_hn|1 year ago
The overall "flow" of the code didn't exist in his head, because he was basically taking small chunks of code in and out of ChatGPT, iterating locally wherever he happened to be, and the project just sort of grew organically that way. This is likely also what made the ChatGPT outputs themselves less useful over time: he wasn't aware of enough context to prompt the model with, so it didn't have much to work with. There wasn't much emergent intelligence along the lines of "provide what the client needs, not what they think they need."
These days tools like aider transparently prompt the model with a repo map and similar context in the background, but in 2023/24 that infrastructure didn't exist yet, and the models' context windows at the time were also much smaller.
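For readers curious what "repo map" prompting means in practice, here's a rough Python sketch of the idea. This is illustrative only, not aider's actual implementation, and the function names are made up: the tool walks the repository, summarizes each file's top-level symbols, and prepends that summary to the user's request so the model sees project-wide context it would otherwise never get.

    # Illustrative sketch only -- not aider's real code.
    import ast
    import pathlib

    def build_repo_map(root: str) -> str:
        """Summarize each Python file by its top-level functions and classes."""
        lines = []
        for path in sorted(pathlib.Path(root).rglob("*.py")):
            try:
                tree = ast.parse(path.read_text())
            except (SyntaxError, UnicodeDecodeError):
                continue
            symbols = [
                node.name for node in tree.body
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
            ]
            lines.append(f"{path}: {', '.join(symbols) or '(no top-level symbols)'}")
        return "\n".join(lines)

    def build_prompt(root: str, user_request: str) -> str:
        """Prepend the repo map to the user's request before sending it to the model."""
        return (
            "You are editing the following repository.\n"
            "Repo map (files and their top-level symbols):\n"
            f"{build_repo_map(root)}\n\n"
            f"Task: {user_request}"
        )

The point is just that the tool, not the human, supplies the surrounding context, which is exactly what was missing from the copy-paste-into-ChatGPT workflow described above.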
In other words, the evolving nature of these tools might lead to different results today. On the other hand, if that infrastructure had existed back then, chances are he'd have become even more reliant on the tools. The open question is whether there's a threshold where it just stops mattering - if the results are always good, does it matter that the human doesn't understand them? Naturally I find that prospect a bit frightening and creepy, but I assume some slice of the work will start looking like that.