elbci|1 month ago
Rare here: well written and insightful. I would take this course. I'm curious about why he penalized chatbot mistakes more; at first glance it sounds like just discouraging their use, but the whole setup indicates a genuine desire to let it be a possibility. In my mind the rule should be "same penalty, and extra super cookies for catching chatbot mistakes."
jcattle|1 month ago
I thought this part about penalizing mistakes made with the help of LLMs more heavily was quite ingenious.
If you have this great resource available to you (an LLM), you had better show that you read and checked its output. If there's something in the LLM output you do not understand or cannot check to be true, you had better remove it.
If you do not use LLMs and just misunderstood something, you will have a (flawed) justification for why you wrote it. If there's something flawed in an LLM answer, the likelihood that you have no justification except "the LLM said so" is quite high, and it should thus be penalized more heavily.
One shows a misunderstanding, the other doesn't necessarily show any understanding at all.
qwertytyyuu|1 month ago