> Cursor is a nervous intern! They don’t want to admit they don’t know; we need to help by providing context.
With an intern, I give them some search terms and let them go learn. I don't have to do the searching for them. It's actually more important to help them learn how to evaluate the different results. It's not even that they "don't want to admit they don't know" (which is anthropomorphization); they are not designed or trained to ask for clarification. The chat-based interaction is an "afterthought" (a round of fine-tuning after initial training).
The big issues I see in this paradigm are:
(1) you have to know a lot of things already to do this;
(2) if we automate all the low-hanging fruit, how will we develop humans to the level of understanding needed to do this?
(3) with a human, I can delegate; with an AI, I have to handhold. As much as people want to call it "pair programming", it is often more like having to teach, except it never truly learns, so I never get my lost time back.
I think there are some updates I can make to this to give some guidance around point (1), that you have to know a lot of things already. You can use AI assistance to help learn more. But ultimately, I don't believe we are at the point where the AI makes you "smarter"; it _can_, however, make you more productive.
I disagree with (3) based on my experience. That feeling happens when I am not providing enough context. I rarely have experiences where I step back to provide more context and still end up in a dumb loop. Highly recommend providing lots of context + breaking down the problem more.
I would love to dig deeper into an example you have where you feel you "never get your time back", because in general I'm saving a lot of time from how much less typing I have to do.
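To make the "provide lots of context" advice concrete, here's a sketch of the difference I mean (the file names and details below are hypothetical, purely for illustration):

```
# Vague (tends to produce the dumb loop):
"Fix the login bug."

# Context-rich (what I actually send):
"In auth/session.py, refresh_token() returns None when the upstream
OAuth provider responds with a 401. The caller in middleware.py
assumes it always gets a Token. Change refresh_token() to raise
TokenExpired instead, and update that one caller. Don't touch the
retry logic."
```

Small, well-scoped asks like the second one are where it stops feeling like handholding.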
From the tiny amount of experience I've had with LLM coding assistants, the biggest problem was that they don't include what can be inferred but isn't explicitly asked for, because(?) it's sort of a decompression process over a lossy copy of the Internet. You can't just ask for a cheeseburger; it has to be a hamburger with the patty topped with sliced cheese and standard condiments.
Conversely, it didn't seem to care much about roleplay, other than occasional compliments as reinforcement. If anything, old HN-style blunt interactions seemed to help.
> The fundamental issue is treating AI as a magic code generator rather than a collaborative team member.
The fundamental problem is that the tools are marketed and onboarded that way. I don't blame people for getting frustrated.
You can't expect everyone to simply understand the limitations of LLMs and how the tool might be implemented. There is a user discovery issue at play here.
I noticed a lot of people complaining about Cursor agent being bad. And my initial experience was bad. After working with it for a while, I found a workflow that works for me; I hope it's helpful for you too!