There's an even cooler thing you can do with automatic grading. I saw this idea in another HN comment once. The computer can figure out the answer key without having it programmed in, or even find mistakes in the real answer key.
The idea is that answers are highly correlated. Better students are more likely to get all answers correct, worse students are more likely to make mistakes. So if you do something like PCA on all the students' answers, the first dimension will represent the quality of the student. And the weights will represent which answers are correct.
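Here's a toy sketch of that idea on simulated data (NumPy only; the student model, counts, and all names here are hypothetical, not from the article): one-hot encode the raw answers, take the first principal component, and read the inferred key off its loadings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_questions, n_options = 300, 25, 4

key = rng.integers(0, n_options, size=n_questions)   # hidden true answer key
ability = rng.uniform(0.3, 0.95, size=n_students)    # each student's P(correct)

# Simulate answer sheets: correct with prob = ability, else a random wrong option.
answers = np.empty((n_students, n_questions), dtype=int)
for i in range(n_students):
    correct = rng.random(n_questions) < ability[i]
    wrong = (key + rng.integers(1, n_options, size=n_questions)) % n_options
    answers[i] = np.where(correct, key, wrong)

# One-hot encode to a (students) x (questions * options) matrix, center, take PC1.
onehot = np.zeros((n_students, n_questions * n_options))
rows = np.repeat(np.arange(n_students), n_questions)
cols = (np.arange(n_questions)[None, :] * n_options + answers).ravel()
onehot[rows, cols] = 1
X = onehot - onehot.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = vt[0].reshape(n_questions, n_options)          # loading per (question, option)

# PCA leaves the sign ambiguous; assume the plurality answer is right more often
# than not, and pick whichever sign agrees with it on more questions.
plurality = np.array([np.bincount(answers[:, q], minlength=n_options).argmax()
                      for q in range(n_questions)])
guess_pos, guess_neg = pc1.argmax(axis=1), pc1.argmin(axis=1)
inferred = (guess_pos if (guess_pos == plurality).sum()
            >= (guess_neg == plurality).sum() else guess_neg)

print("questions recovered:", (inferred == key).sum(), "of", n_questions)
```

With enough students the correct option for each question carries a large positive loading on the "ability" component, so the per-question argmax recovers the key without it ever being programmed in.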
> Better students are more likely to get all answers correct, worse students are more likely to make mistakes.
Yes! Until you brainfuck them by putting the correct answer in column B for the first 10 questions. Once the doubt is there, it's not gonna go anywhere.
If automatic grading works well, could you also automatically assess the effectiveness of the questions/answers themselves? For instance, a teacher creates a series of 10 different quizzes over the years that assess the same knowledge/skill. Rather than simply re-using a static document, could you automatically generate a set of quizzes that assess students' knowledge/skills at "basic", "intermediate", and "advanced" levels (distributed automatically based on [potentially] preparatory work scores)?
I would love to see an automated "quiz creation" engine (perhaps with some minimal instructor curation) that would both quiz/score students - but also allow teachers to integrate more assessment, more often at lower stakes.
By significantly reducing the labor involved with assessment, teachers can more effectively create learning opportunities for students (teaching - with all it contains - is time constrained). Certainly, self-leveling assessments already exist and are in common use with cognitive tutors/online education; however, the knowledge/skills they assess must either be very general or specific to the content of the online course. If teachers were tagging the content of their courses - which likely teach knowledge/content that is being taught in hundreds/thousands of other schools across the country - then such a self-generating assessment engine would be very useful if it could self-modify to accurately assess whatever is being taught in the course.
Modifying learning materials and tasks for different student levels is a major pain point for teachers, but is very much a stated goal in almost all K-12 schools. It is done poorly, if ever - and mostly when legally required by IEP/504 plans. The trend is very much in the direction of increased data/personalization in the classroom, but currently the big investments in data are being made on the administrative side of the K-12 industry. There is so much potential for improving the classroom backend to support instructor workflow rather than constraining it with pre-conceived notions of how we work/what our goals are. I hope to see it happen.
Classic multiple choice grading machines will print the number of incorrect answers next to each question (where the checkmark normally goes) on a final summary sheet passed through. A lot simpler than PCA, but still pretty effective at finding mistakes, or gaps in course material.
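Once the answers are digitized, that per-question tally is a one-liner. A small sketch with hypothetical data (the `answers` matrix and `key` array are made up for illustration):

```python
import numpy as np

# Hypothetical data: one row per student, one column per question;
# entries are the chosen option index, and `key` is the answer key.
answers = np.array([[0, 2, 2, 3],
                    [0, 2, 3, 3],
                    [1, 2, 1, 0],
                    [0, 1, 0, 3]])
key = np.array([0, 2, 1, 3])

# Number of students who missed each question; a spike flags a bad
# question or a gap in the course material.
misses = (answers != key).sum(axis=0)
print(misses)  # -> [1 1 3 1]  (question 3 tripped up most of the class)
```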
Interesting, but how would PCA work with discretely valued, unordered, metric-less data like multiple choice answers? (Actual question, not snark, I only know the basics of PCA.)
You'll have trouble with the hard questions, though, the ones toward the end where half the high quality students get it right and the other half fall for the trap answer.
What's especially great - and differs from most bloggers who are shamelessly shilling their wares - is that he provides a ton of excellent content on his blog.
His book takes it to another level, but isn't necessary to get your feet wet.
So back in the Bronze Age my teacher would collect all the scantron cards -- which by the way can already be automatically graded; that is the entire point of the scantron -- and put them on the overhead projector with the correct answers masked out. Overhead projector was more than bright enough to shine through the paper card. The teacher could easily grade the entire class in a minute or two, no cameras or computers needed.
This is pretty cool. I am getting ready to embark on camera projects with a Raspberry Pi 3 and saw this project. Since I will be using python and OpenCV anyway this is a motivator.
Combining the Raspberry Pi + Python + OpenCV is a lot of fun. If you feel like sharing the details on your project I might be able to point you in the right direction.
Processing well-known forms (AP, PSAT, SAT, etc.) would be sweet. The one for AP is 4 pages in color, but one might have a greyscale printer or decide to print only one page of it.
http://www.icml.cc/2012/papers/597.pdf
Note: I'm not challenging you - I don't see why it shouldn't be the case - I just don't follow your reasoning (but I'm quite interested in the claim).
Is it because the largest determinant of which answer a student picks is the quality of the student?
Disclaimer: I bought his eBook bundle.