I focus 100% of my anti-cheating attention on encouraging students to not want to cheat.
It's multifaceted, but part of it is scaring them straight--you're going to go up against an interviewer and not only do you have to convince them you know what you're talking about, you have to do that better than any other candidates who are interviewing.
And--hey--if you want to be $x0,000 in debt with no job prospects, there are faster and easier ways to do that. Why bother coming to class? Just buy an expensive sports car. At least you'll have a nice car to show for it until it gets repossessed.
Another part of it is lesson and project optimization. Overwhelmed students are more likely to cheat. So... is it possible to teach the same topic just as effectively with less time or mental effort requirements? And yes, it often is. Less can be more.
Yet another part is maintaining student engagement by being there for them. I'm lucky enough to teach at a university where class sizes rarely exceed 30, which means I can answer questions in their medium of choice very frequently throughout the day. I try to let them know I'm on their team, and we'll slay this dragon together.
Can ChatGPT solve their programming projects for them? Hell yes. And if not this year, very likely next year. And I know I won't be able to tell the difference between a ChatGPT solution and the solution of a capable student.
The only sensible option is to get them to not want to use it.
Edit: get them not to want to use it to cheat, in particular. It's a pretty powerful tool for figuring things out.
>Can ChatGPT solve their programming projects for them? Hell yes. And if not this year, very likely next year. And I know I won't be able to tell the difference between a ChatGPT solution and the solution of a capable student.
How about having the students store the solutions in a git repo and require 3-5 commits as they work through the problem?
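A minimal sketch of that check (illustrative only: it assumes `git` is on the PATH, and the three-commit floor comes from the comment above):

```python
import subprocess

def commit_count(repo_dir: str) -> int:
    """Count commits on the current branch of a student's git repository."""
    out = subprocess.run(
        ["git", "rev-list", "--count", "HEAD"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

def shows_incremental_work(repo_dir: str, min_commits: int = 3) -> bool:
    """Flag repos with too few commits for a closer manual look."""
    return commit_count(repo_dir) >= min_commits
```

Of course, a commit history is easy to stage after the fact, so this raises the effort required to cheat rather than proving authorship.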
I’ve learned to embrace the technology as a prof. Even though in its current state it can be spotty, it’s a given it’s going to continuously improve.
AI “detectors” are not going to work in the long run. Students can easily use tech to rewrite.
Graded work will need to change. I believe a shift towards oral defense of a research topic is going to be needed, or in-person testing, or hands-on assignments where permitted. Is the graded essay dead? Possibly.
> I believe a shift towards oral defense of a research topic is going to be needed

It is done at the professional research level (thesis defenses for postgraduate studies), but doing it at that small scale is much easier than making it the default at all levels. It would require far more resources than education systems currently have available.
As a university student, I’m going to play the other side.
I’ve found it incredibly useful as a bespoke tutor. The professor explains something complicated, or something you don’t already know? Ask it mid-lecture for a simpler explanation. I’ve used it to analyze papers, explain concepts, and grade my own work. It’s a fantastic learning tool.
I suppose this is just more of a case of ease of use.
Personally I think the only realistic / sure fire testing method would be a student sitting down with a professor and just discussing the material. Q&A, "show me what you know" or even just discussion would work... heck the student might learn more... maybe that's even a path to a better place?
That's assuming we actually want to know / care if the student is learning anything.
I think you’re underestimating ChatGPT’s utility for students. It’s not just a “do the work for me” tool. It’s a “hey, how can I improve this paragraph?”, “why is this true?”, “who first created X?”, “if you were a mean TA, what would you say about this?” tool.

It wouldn’t surprise me if C students suddenly looked like A students. They’re getting proper and timely feedback for the first time.
Problem is, this can’t scale. Private tuition, yeah, but there’s huge classes in college environments that would require gobs of proctors. Students wishing to learn will learn. Those wishing scores will get scores. The trick will be detecting the devalued grades.
ChatGPT becomes the test taking interface. Go to a proctored environment and get quizzed by TA-GPT in a socratic test about the subject. Humans review the scores and that's your test result.
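A rough sketch of how that flow could look. Everything here is hypothetical: `ask_model` is a stub standing in for whatever LLM API the school would actually use, and grading stays with human reviewers as the comment suggests.

```python
def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., a chat-completion endpoint)."""
    return "Can you explain why quicksort's worst case is O(n^2)?"

def socratic_quiz(topic: str, get_answer, rounds: int = 3) -> list[dict]:
    """Run a short question/answer session and log a transcript.

    The transcript goes to a human reviewer -- the model asks, it doesn't grade.
    """
    transcript = []
    for i in range(rounds):
        question = ask_model(f"Ask a probing question about {topic}, round {i + 1}.")
        answer = get_answer(question)  # in a proctored room, the student answers here
        transcript.append({"question": question, "answer": answer})
    return transcript
```

The design choice worth noting: the model only generates questions and records answers; scoring is left to humans, which sidesteps the problem of trusting the model's judgment.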
ChatGPT is like the pocket calculator. It was "cheating" when it first showed up, but is regardless here to stay. You can either go to great lengths trying to ensure that students don't use it at home or in the classroom, or you can just hand one to them and go "now do your best".
These students are going to be entering a workplace where language models and all other iterations of AI will be prevalent and expected to be used in day to day work. There is zero reason students shouldn't also get familiar with the technology instead of pretending it doesn't exist.
My 7th grade algebra teacher many decades ago allowed calculators in the classroom, including during exams. His reason, "A calculator will only help you get the wrong answer faster."
ChatGPT still spits out stuff that doesn't work, generates code that requires libraries that don't exist, and more. Used well, it can be a force multiplier, like the aforementioned calculator. Used without understanding, it will only help you make a mess faster. "I shaved 50 yaks instead of 1."
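The nonexistent-library failure mode is at least cheap to screen for. A sketch, assuming the generated code is Python: parse it and check whether each imported top-level module actually resolves in the current environment.

```python
import ast
import importlib.util

def missing_imports(source: str) -> list[str]:
    """Return top-level modules imported by `source` that aren't installed."""
    tree = ast.parse(source)
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    return sorted(m for m in modules if importlib.util.find_spec(m) is None)
```

For example, `missing_imports("import os\nimport frobnicate_ai")` would report `frobnicate_ai` (a made-up name here) if no such package is installed. This catches hallucinated libraries, though not hallucinated functions inside real ones.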
Learning how to prompt engineer for real deliverables is a trade that students will actually need.
Many of us with no real incentive to stay relevant won't do much more than read a headline or have a funny conversation with ChatGPT.
But these students have to play the cat and mouse game from day one. Saving time where possible and actually learning at their own discretion, not getting exposed or expelled.
AI will quickly obviate the need for prompt engineering, since it can just mimic the best prompt engineer in the world and interrogate the user about what they really want.
You can use it to do the assignments for you or you can use it to help you understand what you're solving. In my computer science exams I had to write correct code with paper and pencil. ChatGPT won't be able to help you in the exam, will it?
While it might be good for a non-technical essay, for technical matters it has a bad habit of spewing nonsense, both in answers and in citations. Professor Alex Wellerstein (of Nukemap fame) gives two anecdotes highlighting these issues.
An anecdote: I was recently asked to review the essay of a student I had not taught. I became highly suspicious it had been generated by ChatGPT, because it had the "feel" of its output. The clincher was that it had an entire page of references... all of which were fake. They all looked plausible, and even had URLs. But not one of them was accurate, all of the URLs were dead, and investigation made it clear the references had never existed. I was somewhat amazed, both at the gall of a chatbot inventing fake references, and at the student, who clearly did not click on even one of the generated links, yet had still asked for an essay re-grade!
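The dead-URL tell in that anecdote is easy to automate. A rough screen, using only the standard library (the regex is deliberately simplistic, and a live URL doesn't prove a citation is real, only that a dead one deserves scrutiny):

```python
import re
import urllib.request

def extract_urls(text: str) -> list[str]:
    """Pull http(s) URLs out of a references section."""
    return re.findall(r"https?://[^\s)>\],]+", text)

def dead_urls(text: str, timeout: float = 5.0) -> list[str]:
    """Return URLs that fail to answer. Network-dependent; a quick
    screen for fabricated citations, not proof of one."""
    dead = []
    for url in extract_urls(text):
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                if resp.status >= 400:
                    dead.append(url)
        except Exception:
            dead.append(url)
    return dead
```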
One experiment I ran with it recently was to ask it about the RIPPLE, a nuclear weapon design that was tested in the 1960s. The details of the RIPPLE are not public, but the fact of its existence, who invented it, and its testing are, as well as some very broad pieces of information about it. Anyway, I repeatedly asked ChatGPT how the RIPPLE worked, and why it was called the RIPPLE, and every time it gave me a totally new and contradictory answer, freely making it up each time. After giving me maybe six different answers in a row, it noticed it was giving me contradictions, and from that point onward claimed that the most recent answer was correct. I was impressed at how inconsistent it was: you could just ask it the same thing over and over again and it would make new things up each time. The only consistency it gave me was wrong: it repeatedly emphasized that the design was entirely hypothetical and never tested, which is false (it was tested at least four times).
In a separate exchange, I asked it to ask me a question, and when (for whatever reason) I told it I was interested in nuclear weapons, it began to lecture me on how this was a topic that should be left to experts. I then told it I was an expert, and it then started lecturing me on how an expert on this topic ought to behave and think. It almost seemed defensive. I thought it was pretty rich — an impressive mansplaining simulator, indeed.
1. https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_...

2. https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_...

3. The full discussion outlined in (2): https://old.reddit.com/r/nuclearweapons/comments/117hssn/cha...

I like the characterization of ChatGPT in this comment in the second link:
> [–]righthandofdog 46 points 2 days ago
> Mansplaining as a service is the best description of GPT.
> The reason it CAN be right about more general info is because people trained it away from lies. No one has trained out the lies on more rarified knowledge, so it makes shit up.
> Trusting its answers to be correct is flatly stupid when it is literally designed to make shit up that sounds good instead of saying "I don't know".
I asked GPT to summarize a Wikipedia page on the Willow pipeline project in Alaska and it summarized something completely different, so this is believable.
Why is that described as bad or surprising? It doesn't know, so these various answers seem about equally likely to it.