The issue would be with students who just want a certain grade. That's where the dopamine hit is. Maybe AI can write you a paper at home, but it can't fill out a blue book in a classroom. Maybe there needs to be an adjustment to the types of assignments or how they're graded, but in-class exams have always carried more weight anyway.
Say what you will about Oreos and other processed foods, but they do actually contain calories. They are legitimately food.
It's a good one! I'm a lifelong fan of the leveling-up techniques you're talking about and I found they're essential when working with AI agents, especially.
I feel like the article isn't disciplined about maintaining its definitions of education versus learning, but there's some interesting stuff. I've found (I think!) LLMs to be hyper-useful for enquiry-based learning: lots of "well, does that mean that...", "isn't that the same as...", "but you said earlier that...", and "could you use shorter answers and we'll do this step by step, please".
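As a concrete illustration of that enquiry-based pattern (my own sketch, not from the thread or the article; the helper name and prompt wording are hypothetical), it amounts to a system instruction that withholds finished answers plus a series of short Socratic follow-up turns, in the role/content message format common to most chat LLM APIs:

```python
# Sketch of an enquiry-based tutoring conversation. The function and
# prompt text are illustrative assumptions, not an actual tool's API.

def build_tutor_messages(question, follow_ups):
    """Assemble a message list that nudges the model to tutor, not answer."""
    messages = [
        {"role": "system", "content": (
            "You are a tutor. Keep answers short, go step by step, "
            "and never hand over the final answer outright."
        )},
        {"role": "user", "content": question},
    ]
    # Each follow-up ("does that mean...", "but you said earlier...")
    # makes the student iterate instead of copying a finished answer.
    for turn in follow_ups:
        messages.append({"role": "user", "content": turn})
    return messages

msgs = build_tutor_messages(
    "Why does gradient descent need a learning rate?",
    ["Does that mean a big rate can overshoot?",
     "Isn't that the same as step size?"],
)
print(len(msgs))  # prints 4: system + question + 2 follow-ups
```

The point of structuring it this way is that the friction lives in the conversation itself: the student supplies each follow-up turn, so the iteration (and the frustration) can't be skipped.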
I re-read the abstract; they tried two different modes of ChatGPT-4, a "base mode" and a "tutor mode". The tutor mode helped students more, but the paper cautioned at the end:
> Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.
phillipcarter|10 months ago|reply
It's adjusted to not just give answers but (perhaps frustratingly for the student) to force them to iterate through something to get an answer.
Like anything, it's likely also jailbreakable, but as we've learned with all software, the defaults matter.
ffdixon1|10 months ago|reply
IMHO, feeling frustration is the whole point -- it's how our brains rewire themselves, it's how we know we are learning, and it's how we build up true grit to solve harder problems.
As we want to "feel the burn" in the gym, we want to "feel the frustration" when learning.
dfxm12|10 months ago|reply
Just like we see posts here about how AI (at the very least, AI on its own) is ineffective at coding a product, these students eventually learn what the Wharton study showed: that AI is not effective at getting them the grade they want.
I know I'm lazy. I've tried shortcuts: AI now, copying Wikipedia before that, and hoping that just punching numbers into a TI-86 would solve my problems for me. They simply don't.
ffdixon1|10 months ago|reply
Quick dopamine hits. Immediate satisfaction. Long-term learning deficits.
How to break this cycle? I wrote this article to try to answer this question.
hackyhacky|10 months ago|reply
Here's my experience as a professional educator: AI tools are used not as shortcuts in the learning process but to avoid the learning process entirely. The analogy is therefore not to junk food but to GLP-1: something you take instead of eating.
Students can easily use AI tools to write a programming project or an essay. It's basically impossible to detect. And they can pass classes and graduate without ever having had to attempt to learn any of the material. AI is already as capable as a university student.
The only solution is also hundreds of years old: in-person, proctored exams. On paper. And, moreover, a willingness to fail those students who don't keep up their end of the bargain.
dtagames|10 months ago|reply
I had the epiphany that all of the "AI's problems" were problems with my code or my understanding. This is my article[0] on that.
[0] https://levelup.gitconnected.com/mission-impossible-managing...
petesergeant|10 months ago|reply
I am curious to dig into "Generative AI Can Harm Learning"[0], referenced in the article. I think the summary in the article skips over some of the subtleties in the abstract though.
0: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486
ffdixon1|10 months ago|reply
> Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.
I think the caution is about using AI to short-circuit real learning, even when the AI is in a tutor mode, and thereby never building up true grit.
Ultimately, in writing this article, my hope was that a student would read it and get angry: angry that overuse of AI, using it as a crutch, is actually having a negative impact on their learning, and that they would resolve to use it only for efficiency and effectiveness, not as a substitute for true learning.
I was thinking of Richard Feynman’s approach to learning when writing this article. He was a genius, so I worried the analogy might be unrelatable. However, he really enjoyed understanding things from first principles, and that enjoyment gave him such a solid foundation. He put in the necessary hours to learn, and what a remarkable life he enjoyed because of it.
fithisux|10 months ago|reply
I also agree with the title and its implications.
But hype is hype, and humans like to ride the hype.