(no title)
aconsult1|2 years ago
Recently I was trying to manipulate data in Google Sheets and was using ChatGPT to help. In the beginning it was fantastic; I was very productive because I didn't have to stop and think about formulas, read crappy documentation, or analyze the data transformations myself. ChatGPT just gave me the right answer in a split second, as long as I kept asking the right questions.
Then I stumbled upon a particular issue that wasn't really all that complicated, but ChatGPT could not give me a correct answer. Before I realized it, I had spent 3 days trying to fix problems with its solution, and every time the answer came back slightly wrong, and not in subtle ways.
I have 20 years of experience as a software engineer, and still I kept wasting time in this loop. After 3 days I decided to apply my own engineering skills and solved the matter in 30 minutes. Now I know the solution, and it was far simpler than I had thought.
What surprised me was how dumb the whole process was. My questions certainly weren't the problem: however bad they may have been, the suggested solution was never too far off. Not only had ChatGPT become a crutch, it had put me in a situation that no human tutor would ever put me in.
So taking the friction out of quick tutor-style interactions has benefits, but the technology is not quite ready to be trusted as real guidance in forming (or even helping) someone's understanding of a subject.
I've been using ChatGPT a lot for general language tasks, but when something requires logical thinking it falls pretty flat.
cmcaleer|2 years ago
I've been using GPT-4 as a tool to interrogate lesson transcripts for a language I'm learning. In the prompt I tell it to focus specifically on things mentioned in the transcript; if something isn't in the transcript, to check the helper script I keep updating as I work through the course (which sadly takes up more and more of the context window) and figure out whether my answer is in one of those previous lessons; and not to guess. Hallucinations are quite rare. I can't name an egregious instance across the 25 lessons I've done, each roughly 20 minutes long, though I'm sure it's happened.
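If you wanted to set something like this up through the API rather than the chat UI, the structure would look roughly like the sketch below. The file names, instructions, and example question are placeholders, not my actual prompt, which is longer and evolves as I add lessons.

    # Rough sketch of the transcript-grounded tutoring prompt using the OpenAI Python client.
    # File names, model string, and wording are placeholders, not an exact reproduction.
    from openai import OpenAI

    client = OpenAI()

    transcript = open("lesson_25_transcript.txt").read()    # today's lesson transcript
    helper_notes = open("helper_notes.txt").read()          # running summary of earlier lessons

    system_prompt = (
        "You are a tutor for this language course. Answer questions using only the "
        "lesson transcript provided. If the answer is not in the transcript, check the "
        "helper notes covering previous lessons. If it is in neither, say so. Do not guess."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"HELPER NOTES:\n{helper_notes}\n\nTRANSCRIPT:\n{transcript}"},
            {"role": "user", "content": "What does the particle they kept repeating in the dialogue actually do?"},
        ],
    )
    print(response.choices[0].message.content)

The important part isn't the code, it's the explicit "only the transcript, then the notes, otherwise say so" instruction, which is what seems to keep hallucinations rare.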
It's also pretty good at suggesting drills based on the contents of the lesson; there are probably a whole bunch of lesson plans in the training data.
The end result has been progress at a pace I could only dream of previously, and it doesn't matter if a question is too basic because I'm asking a computer. There is zero concern about any question being embarrassing, because it's only between me, GPT-4, and the OAI engineer who happens across the conversation.
aconsult1|2 years ago
But when the task at hand involves logical thinking, that's where I believe the LLMs of today are still very much a work in progress.
So I'm skeptical of trying to use them as tutors for now. I'm sure things will evolve quite quickly from here.
PoignardAzur|2 years ago
"I spent days trying to solve X by doing Y, then it turned out I could have solved it by doing Z instead" is an experience I've had countless times before LLMs were a thing. Sometimes you really do need these three days of stumbling before you can build up the confidence to do the easy solution.
(Then again, I don't know the specifics of your case.)
danielbln|2 years ago
aconsult1|2 years ago
I have no evidence other than hundreds of hours using ChatGPT.