> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.
I built a popular product that helps teachers with this problem.
Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (never mind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers, but because she would rotate the underlying numbers, their answers were always wrong in a way that provably indicated cheating. This made up 5-8% of her students.
Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this but it's definitely more than 30%). When she asks students to recreate how they got to these pretty wild answers they never have any ability to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
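The rotation scheme is worth spelling out, because it generalizes: if each student's numbers are derived deterministically from their ID, a copied answer is not just wrong but provably keyed to someone else's variant. A minimal sketch in Python (the problem and field names here are invented for illustration, not from her actual course):

```python
import hashlib
import random

def variant_params(student_id: str, problem: str) -> dict:
    # Seed an RNG from the student ID so each student gets a stable,
    # unique set of numbers for the same underlying problem.
    seed = int(hashlib.sha256(f"{student_id}:{problem}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return {"units_sold": rng.randint(100, 999), "unit_price": rng.randint(10, 99)}

def expected_answer(params: dict) -> int:
    # Toy stand-in for the real accounting question: total revenue.
    return params["units_sold"] * params["unit_price"]

def matches_someone_else(student_id: str, problem: str, submitted: int,
                         roster: list[str]) -> list[str]:
    # A submission that exactly matches another student's correct answer
    # is evidence it was copied rather than computed from one's own numbers.
    return [other for other in roster
            if other != student_id
            and expected_answer(variant_params(other, problem)) == submitted]
```

This works against leaked solution manuals because those are keyed to one fixed set of numbers, which is presumably why the failure mode she sees has shifted from provably-copied answers to the inarticulate wild answers LLMs produce.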
When modern search became widely available, a lot of people said there's no point in rote memorization when you can just do a Google search. That's more or less accepted today.
Whenever we have a new technology there's a response "why do I need to learn X if I can always do Y", and more or less, it has proven true, although not immediately.
For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), or read a map to get around (GPS), etc.
Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography.
I believe LLMs are different (I am still stuck in the moral panic phase), but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection). So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me?"
IMO it's so easy to ChatGPT your homework that the whole education model needs to flip on its head. Some teachers already do something like this; it's called the "flipped classroom" approach.
Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.
I've also talked with a bunch of teachers and a couple of admins about this. They agree it's a huge problem. At the same time, they are using AI to create their lesson plans and assignments! Not fully, of course; they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.
The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.
I’m a physicist. I can align and maximize ANY laser. I don’t even think when doing this task. Long hours of struggle, 50 years ago. Without struggle there is nothing. You can bullshit your way in. But you will be ejected.
After reading the whole article I still came away with the suspicion that this is a PR piece designed to head off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning, or, in short, thinking. The prompt-chains where students are asking "show your work" and "explain" can be interpreted as the kind of back-and-forth that you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but instead offloading the task to an LLM, then we're going to have a serious competency issue: nobody ever actually learns anything.
Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. Anthropic would be more responsible to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away learning essentially nothing).
P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.
In the article, I guess this would be buried in
> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.
"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.
(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)
Exactly. There's a big difference between a student having a back-and-forth dialogue with Claude about "the extent to which feudalism was one of the causes of the French Revolution," versus another student using their smartphone to take a snapshot of the actual homework assignment, pasting it into Claude, and calling it a day.
Most of their categories have straightforward interpretations in terms of students using the tool to cheat. They don't seem to want to/care to analyze that further and determine which are really cheating and which are more productive uses.
I think that's a bit telling about their motivations (esp. given their recent large institutional deals with universities).
The only thing I care about is the ratio between those two things, and you decide to group them together in your report? Fuck that.
> feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them
You're right.
Quite incredibly, they also do the opposite, in that they hype-up / inflate the capability of their LLMs. For instance, they've categorised "summarisation" as "high-order thinking" ("Create", per Bloom's Taxonomy). It patently isn't. Comical they'd not only think so, but also publicly blog about it.
> Students primarily use AI systems for creating (using information to learn something new)
this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say
> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.
and later they report
> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:
> - Provide answers to machine learning multiple-choice questions
> - Provide direct answers to English language test questions
> - Rewrite marketing and business texts to avoid plagiarism detection
kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something
I should say, disable you: the tone did not reflect that it can happen to anyone, and that it can be a wedge not only between people but also (and only by virtue of being so) between personal trajectories, depending on the way one uses it
The writing is irrelevant. Who cares if students don't learn how to do it? Or if the magazines are all mostly generated a decade from now? All of that labor spent on writing wasn't really making economic sense.
The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.
You'll soon notice the downsides of not-thinking (at scale!) if you have a generation of students who weren't taught to exercise their thinking by writing.
I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.
A little self-promo: I'm building a tool to help students and writers create proof that they have written something the good ol' fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!
How does your product prevent a person from simply retyping something that ChatGPT wrote?
I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable: in-class discussions, in-person writing (with pen and paper or locked down computers), way less weight given to remote assignments on Canvas or other software. Attributing authorship from text alone (or keystroke patterns) is not possible.
In my opinion this is not true. Writing is a form of communicating ideas. Structuring and communicating ideas with others is really important, not just in written contexts, and it needs to be trained.
Maybe the way universities do it is not great, but writing in itself is important.
https://www.paulgraham.com/writes.html
All those have, at the base of them, the experience of being human, something an LLM does not and will never have.
Students will work in a world where they have to use AI to do their jobs. This is not going to be optional. Learning to use AIs effectively is an important skill and should be part of their education.
And it's an opportunity for educators to raise the ambition level quite a bit. It indeed obsoletes some of the tests they've been using to evaluate students. But they too now have the AI tools to do a better job and come up with more effective tests.
Think of all the time freed up from having to actually read all those submitted papers. I can tell you from experience (I taught a few classes as a postdoc way back): not fun. At minimum you can just instantly fail the ones that are obviously poorly written, are full of grammatical errors, and feature lots of flawed reasoning. Most decent LLMs do a good job of spotting that. Is using an LLM for that cheating if a teacher does it? I think that should just be expected at this point. And if it is OK for the teacher, it should be OK for the student.
If you expect LLMs to be used, it raises the bar for the acceptable quality level of submitted papers. They should be readable, well structured, well researched, etc. There really is no excuse for those papers not being like that. The student needs to be able to tell the difference. That actually takes skill to ask for the right things. And you can grill them on knowledge of their own work. A little 10 minute conversation maybe. Which should be about the amount of time a teacher would have otherwise spent on evaluating the paper manually and is definitely more fun (I used to do that; give people an opportunity to defend their work).
And if you really want to test writing skills, put students in a room with pen and paper. That's how we did things in the eighties and nineties. Most people did not have PCs and printers then. Poor teachers had to actually sit down and try to decipher my handwriting, which wasn't great even before that skill had a few decades to atrophy.
LLMs will force change in education one way or another. Most of that change will be good. People trying to cheat is a constant. We just need to force them to be smarter about it. Which at a meta level isn't that bad of a skill to learn when you are educating people.
Writing is not necessary for thinking. You can learn to think without writing. I've never had a brilliant thought while writing.
In fact, I've done a lot more thinking and had a lot more insights from talking than from writing.
Writing can be a useful tool to help with rigorous thinking. In my opinion, it is mostly about augmenting the author's effective memory to be larger and more precise.
I'm sure the same effect could be achieved by having AI transcribe a conversation.
How can I, as a student, avoid hindering my learning with language models?
I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.
In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple choice questions on quizzes that are on the computer. The professors somehow found out students would close their laptops without submitting, go out into the hallway, and use a LM on their phone to answer the questions. I’ve been doing worse in the class and chalked it up to it being grad level, but now I think it’s the cheating.
I would never cheat like that, but when I'm stuck and use Claude for a hint on the HW, am I losing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I did it on my own first and developed some intuition, but would I have learned more if I had submitted it as-is and felt the pain of losing points?
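To make that exercise concrete: "breaking down a binary packet" means unpacking fixed-width fields from raw bytes, the kind of thing that is easy to verify mechanically with Python's struct module. A toy sketch, with a header layout invented purely for illustration:

```python
import struct

def parse_header(packet: bytes) -> dict:
    # Invented layout: version (1 byte), flags (1 byte),
    # payload length (2 bytes, big-endian), sequence number (4 bytes, big-endian).
    version, flags, length, seq = struct.unpack("!BBHI", packet[:8])
    return {"version": version, "flags": flags, "length": length, "seq": seq}

print(parse_header(bytes.fromhex("0401002a0000000f")))
# -> {'version': 4, 'flags': 1, 'length': 42, 'seq': 15}
```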
Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills. I could easily see conversations that they outline as "Collaborative" primarily being a user walking Claude through multi-part problems or asking it to produce justifications for answers that students add to assignments.
I've used AI for one of the best studying experiences I've had in a long time:
1. Dump the whole textbook into Gemini, along with various syllabi/learning goals.
2. (Carefully) Prompt it to create Anki flashcards to meet each goal.
3. Use Anki (duh).
4. Dump the day's flashcards into a ChatGPT session, turn on voice mode, and ask it to quiz me.
Then I can go about my day answering questions. The best part is that if I don't understand something, or am having a hard time retaining some information, I can immediately ask it to explain - I can start a whole side tangent conversation deepening my understanding of the knowledge unit in the card, and then go right back to quizzing on the next card when I'm ready. It feels like a learning superpower.
I would double check every card at the start though, to make sure it didn't hallucinate anything that you then cram into your brain.
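That check is easy to make part of the workflow. Anki imports plain tab-separated files, so the generated cards can be staged in a text file you review before importing. A minimal sketch of step 2's output stage (the card contents and filename are made up, not from the original workflow):

```python
import csv

# Hypothetical cards, in the shape you might ask an LLM to emit:
# one question/answer pair per learning goal from the syllabus.
cards = [
    ("What does TCP's three-way handshake establish?",
     "Initial sequence numbers, and that both ends can send and receive."),
    ("Define amortized time complexity.",
     "The average cost per operation over a worst-case sequence of operations."),
]

# Anki imports tab-separated files directly (File > Import), so staging
# cards here gives you a review step before anything is memorized.
with open("generated_cards.tsv", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(cards)
```

Reading the file top to bottom before File > Import is the natural place to catch a hallucinated card before it gets drilled into memory.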
I think most people miss the bigger picture on the impact of AI on the learning process, especially in engineering disciplines.
Doing things that could in principle be automated by AI is still fundamentally valuable, because it brings two massive benefits:
- *Understanding what happens under the hood*: if you want to be an effective software engineer, you need to understand the whole stack. This is true of any engineering discipline, really. Civil engineers take classes in fluid dynamics and materials science although they will mostly apply pre-defined recipes on the job. You wouldn't be comfortable if the engineer who signed off on the blueprints of a dam upstream of your house had no idea about the physics of concrete, hydrodynamic scour, etc.
- *Having fun*: there is nothing like the joy of discovering how things work, even when a perfectly fine abstraction hides those details underneath. It is a huge part of the motivation for becoming an engineer. Even assuming that vibe coding could develop into something that works, it would be a very tedious job.
When students use AI to do the hard work on their behalf, they miss out on both. We need to be extremely careful with this, as we might hurt a whole generation of students, both in terms of their performance and their love of technology.
I agree with you, but I hope schools also take the opportunity to reflect on what they teach and how. I used to think I hated writing, but it turns out I just hated English class. (I got a STEM degree because I hated English class so much, so maybe I have my high school English teacher to thank for it.)
Torturing students with five-paragraph essays, which is what "learning" looks like for most American kids, is not that great and isn't actually teaching critical thinking, which is what's most valuable. I don't know of any form of writing outside school that looks like that.
Reading “themes” into books that your teacher is convinced are there. Looking for 3 quotes to support your thesis (which must come in the intro paragraph, but not before the “hook” which must be exciting and grab the reader’s attention!).
Most of us here got our education before AI. Students trying to avoid having to do work is a constant, as old as the notion of school itself. Changing/improving the tools just means teachers have to escalate the countermeasures. For example, by raising the ambition level in terms of the quality and amount of work expected.
And teachers should use AIs too. Evaluating papers is not that hard for an LLM.
"Your a teacher. Given this assignment (paste /attach the file and the student's paper), does this paper meet the criteria. Identify flaws and grammatical errors. Compose a list of ten questions to grill the student on based on their own work and their understanding of the background material."
A prompt like that sounds like it would do the job. Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
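As a sketch of what that might look like wired up, here is the same prompt batch-run over a folder of submissions with Anthropic's Python SDK (the file layout is invented, and the model string may need updating):

```python
import pathlib
import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY in the environment

client = anthropic.Anthropic()

PROMPT = """You're a teacher. Given the assignment and the student's paper below,
does the paper meet the criteria? Identify flaws and grammatical errors. Compose
a list of ten questions to grill the student on, based on their own work and
their understanding of the background material.

ASSIGNMENT:
{assignment}

PAPER:
{paper}"""

assignment = pathlib.Path("assignment.txt").read_text()
for paper in sorted(pathlib.Path("submissions").glob("*.txt")):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # swap in whatever model is current
        max_tokens=2000,
        messages=[{"role": "user", "content": PROMPT.format(
            assignment=assignment, paper=paper.read_text())}],
    )
    # Save the critique and the ten grilling questions next to each submission.
    paper.with_suffix(".review.txt").write_text(response.content[0].text)
```

The ten saved questions then feed the short defense conversation described above.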
This has been my observation about the internet. Growing up in a small town without access to advanced classes, having access to Wikipedia felt like the greatest equalizer in the world. 20 years post-internet, seeing the most common outcome be that people learn less as a result of unlimited access to information would be depressing if it did not result in my own personal gain.
My wife works at a European engineering university with students from all over the world and is often a thesis advisor for Masters students. She says that up until 2 years ago a lot of her time was spent on just proofreading and correcting the students' English. Now everybody writes 'perfect' English and all sound exactly the same in an obvious ChatGPT sort of way. It is also obvious that they use AI when she asks them why they used a certain 'big' word or complicated sentence structure, and they just stare blankly and cannot answer.
To be clear, the students almost certainly aren't using ChatGPT to write their thesis for them from scratch, but rather to edit and improve their bad first drafts.
No one seems to be talking about the fact that we need to change the definition of cheating.
People's careers are going to be filled with AI. College needs to prepare them for that reality, not to get jobs that are now extinct.
If they are never going to have to program without AI, what's the point in teaching them to do it? It's like expecting them to do arithmetic by hand. No one does.
For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class? Goals that they will still need, in a world with AI".
It says STEM undergrad students are the primary beneficiaries of LLMs but Wolfram Alpha was already able to do the lion's share of most undergrad STEM homework 15 years ago.
I loved asking questions as a kid. To the point of annoying adults. I would have loved to sit and ask these AI questions about all kinds of interests when I was young.
Kids who get that will clearly recognize other kids who did not have an AI to talk with at the stage when curiosity really blossoms.
I'm looking forward to the next installment on this subject from Anthropic, namely "How University Teachers Use Claude".
How many teachers are offloading their teaching duties onto LLMs? Are they reading essays and annotating them by hand? If everything is submitted electronically, why not just dump 30 or 50 papers into a LLM queue for analysis, suggested comments for improvement, etc. while the instructor gets back to the research they care about? Is this 'cheating' too?
Then there's the use of LLMs to generate problem sets, test those problem sets for accuracy, come up with interesting essay questions and so on.
I think the only real solution will be to go back to in-person instruction with handwritten problem-solving and essay-writing in class with no electronic devices allowed. This is much more demanding of both the teachers and the students, but if the goal is quality educational programs, then that's what it will take.
Alternatively, let's throw out our outmoded ideas and all get excited for an AI-based future in which professors let AI grade the essays students generate with AI.
Just think of the time everybody will save! Instead of wasting effort learning or teaching, we'll be free to spend our time doing... uh... something! Generative AI will clearly be a real 10x or even 100x multiplier! We'll spiral into cultural and intellectual oblivion so much faster than we ever thought possible!
This topic is also interesting to me because I have small children.
Currently, I view LLMs as huge enablers. They helped me create a side-project alongside my primary job, and they make development and almost anything related to knowledge work more interesting. I don't think they made me think less; rather, they made me think a lot more, work more, and absorb significantly more information. But I am a senior, motivated, curious, and skilled engineer with 15+ years of IT, Enterprise Networking, and Development experience.
There are a number of ways one can use this technology. You can use it as an enabler, or you can use it for cheating. The education system needs to adapt rapidly to address the challenges that are coming, which is often a significant issue (particularly in countries like Hungary). For example, consider an exam where you are allowed to use AI (similar to open-book exams), but the exam is designed in such a way that it is sufficiently difficult, so you can only solve it (even with AI assistance) if you possess deep and broad knowledge of the domain or topic. This is doable. Maybe the scoring system will be different, focusing not just on whether the solution works, but also on how elegant it is. Or, in the Creator domain, perhaps the focus will be on whether the output is sufficiently personal, stylish, or unique.
I tend to think current LLMs are more like tools and enablers. I believe that every area of the world will now experience a boom effect and accelerate exponentially.
When superintelligence arrives—and let's say it isn't sentient but just an expert system—humans will still need to chart the path forward and hopefully control it in such a way that it remains a tool, much like current LLMs.
So yes, education, broad knowledge, and experience are very important. We must teach our children to use this technology responsibly. Because of this acceleration, I don't think the age of AI will require less intelligent people. On the contrary, everything will likely become much more complex and abstract, because every knowledge worker (who wants to participate) will be empowered to do more, build more, and imagine more.
I am surprised that business students are relatively low adopters: LLMs seem perfect for helping with presentations, etc, and business students are stereotypically practical-minded rather than motivated by love of the subject.
Perhaps Claude is disproportionately marketed to the STEM crowd, and the business students are doing the same stuff using ChatGPT.
While recognizing the material downsides of education in the time of AI, I envy serious students who now have access to these systems. As an engineering undergrad at a research-focused institution a couple decades ago, I had a few classes taught by professors who appeared entirely uninterested in whether their students were comprehending the material or not. I would have given a lot for the ability to ask a modern frontier LLM to explain a concept to me in a different way when the original breezed-through, "obvious" approach didn't connect with me.
They use an LLM to summarize the chats, which IMO makes the results as fundamentally unreliable as LLMs are. Maybe for an aggregate statistical analysis (for the purpose of...vibe-based product direction?) this is good enough, but if you were to use this to try to inform impactful policies, caveat emptor.
For example, it's fashionable in math education these days to ask students to generate problems as a different mode of probing understanding of a topic. And from the article: "We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, ..." That last part smells fishy, and even if you saw a prompt like "design a practice question..." you wouldn't be able to know if they were cheating, given the context mentioned above.