debois | 2 months ago
> I don’t recall what happened next. I think I slipped into a malaise of models. 4-way split-paned worktrees, experiments with cloud agents, competing model runs and combative prompting.
You’re trying to have the LLM solve some problem that you don’t really know how to solve yourself, and then you devolve into semi-random prompting in the hope that it’ll succeed. This approach has two problems:
1. It’s not systematic. There’s no way to tell if you’re getting any closer to success. You’re just trying to get the magic to work.
2. When you eventually give up after however many hours, you haven’t succeeded, you haven’t got anything to build on, and you haven’t learned anything. Those hours were completely wasted.
Contrast this with starting to do the work yourself. You might still give up, but you'd understand the codebase better, perhaps the relationship between Perl and TypeScript, and perhaps you'd have some basics ported over that you could build on later.
gyomu | 2 months ago
This feels like the LLM-enabled version of this behavior (except that in the classroom case, students quickly realize that what they're doing is pointless and ask a peer or teacher for help, whereas the LLM may be a little too good at hijacking that instinct and making its user feel like things are still on track).
The most important thing to teach is how to build an internal model of what is happening, identify which of your assumptions are most likely faulty or improperly captured by that model, and work out what experiments to carry out to test them…
In essence, what we call an “engineering mindset” and what good education should strive to teach.
im_down_w_otp | 2 months ago
That sounds like a lot of people I’ve known, except they weren’t students. More like “senior engineers”.