enjo|10 months ago
My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers, but because she rotated the underlying numbers, their answers were wrong in a way that provably indicated cheating. This was 5-8% of her students.
Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this, but it's definitely more than 30%). When she asks students to recreate how they arrived at these pretty wild answers, they are never able to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
andai|10 months ago
I'm running into similar issues trying to use LLMs for logic and reasoning.
They can do it (surprisingly well, once you disable the friendliness that prevents it), but you get a different random subset of correct answers every time.
I don't know if setting temperature to 0 would help. You'd get the same output every time, but it would be the same incomplete or wrong output.
Probably a better solution is a multi-phase approach, where you generate a bunch of outputs and then collect and filter them.
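The multi-phase idea above can be sketched as a simple self-consistency / majority-vote filter. This is just an illustration: `generate_candidates` is a hypothetical stand-in for repeated high-temperature model calls, not a real API.

```python
import random
from collections import Counter

def generate_candidates(prompt, n=5):
    """Hypothetical stand-in for n sampled LLM calls.
    A real implementation would query a model API here."""
    # Toy distribution: mostly-correct answers with occasional errors.
    return [random.choice(["42", "42", "42", "41", "43"]) for _ in range(n)]

def filtered_answer(prompt, n=5):
    """Generate several candidate answers, then keep the most common
    one -- a minimal generate-then-filter (majority vote) phase."""
    candidates = generate_candidates(prompt, n)
    answer, count = Counter(candidates).most_common(1)[0]
    return answer, count / n  # answer plus a rough agreement score

random.seed(0)
ans, agreement = filtered_answer("What is 6 * 7?")
```

The agreement score gives a crude signal of when the model's samples disagree, which is exactly the "different random subset of correct answers every time" problem described above.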
Suppafly|10 months ago
They really should modify it to take out that whole loop where it apologizes, claims to recognize its mistake, and then goes on to make the very mistake it claimed to recognize.
samuel|10 months ago
I'm more worried about those who will learn to solve the problems with the help of an LLM but can't do anything without one. Those will go under the radar, unnoticed. The question is: how bad is that, actually? I'd say very, but then I realize I'm a pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.
Stubbs|10 months ago
I don't see the former as that much of a problem.
DSingularity|10 months ago
Target the cheaters with pop quizzes. The prof can randomly choose 3 questions from the assignments; students who can't score enough marks on 2 of the 3 are dealt a huge penalty. Students who actually worked through the problems will have no trouble clearing that bar. Students who lean irresponsibly on LLMs will lose their marks.
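The scheme above is easy to sketch. The function and parameter names here are illustrative (nothing from any real grading system): sample 3 questions from the assignment, require 2 correct to keep the assignment marks.

```python
import random

def pop_quiz(question_ids, is_correct, k=3, passing=2):
    """Randomly draw k of the assignment's questions and require at
    least `passing` correct answers to keep the assignment marks.
    `is_correct` is a per-question grading callback."""
    chosen = random.sample(question_ids, k)
    correct = sum(1 for q in chosen if is_correct(q))
    return correct >= passing

# A student who genuinely worked every problem passes on any draw:
assert pop_quiz(list(range(10)), lambda q: True)
# A student who can answer nothing fails on any draw:
assert not pop_quiz(list(range(10)), lambda q: False)
```

Because the draw is random, a student who only copied LLM output can't predict which problems they'll have to reproduce, which is the point of the penalty.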
cellularmitosis|10 months ago
Homework would still be assigned as a learning tool, but has no impact on your grade.
anon35|10 months ago
We’re either handicapping our brightest or boosting our dumbest. One part is concerning, the other encouraging.
pc86|10 months ago
I'm in a 100%-online grad school, but they proctor major exams through local testing centers, and every class is at least 50% based on one or more major exams. It's a good setup: it lets people use LLMs where they're available (trying to stop that is a fool's errand) while still requiring them to understand the underlying concepts in order to pass.
Suppafly|10 months ago
Not really. While doing something to ensure that students are actually learning is important, plenty of the smartest people still don't test well. End-of-semester exams also aren't a great way to tell whether people were learning along the way and then fell off partway through for whatever reason.