kuhewa | 1 year ago
And we have plenty of evidence of hallmarks of LLM use, we can even replicate the LLM resume generation process if we wanted. There is plenty of useful "training data" available even if you don't have a validated set of resumes submitted for this type of role at this type of company from this demographic of applicants.
Basically, what you are trying to argue is that you can't have confidence that the animals you see people walking down the street on leashes are dogs unless you ask the owners whether they are dogs or not... AND that it doesn't matter that dogs are highly distinct from other domestic pets, AND that we've seen many verified dogs before in other contexts, AND that we have even bred different varieties of dogs ourselves.
I highly doubt that you maintain that standard for inductive inference across the board in your own practice. Life would be very difficult if you refused to make inferences about novel things (with any confidence) based on generalised patterns derived from other, similar cases.
exe34 | 1 year ago
this seems at odds with the ongoing issues at universities where professors blindly trust AI detectors that label the professors' own work as AI-generated. you'd think if the hallmarks were that obvious, the detectors would have high specificity, i.e. they would rarely flag human-written text.
with your dog example, children absolutely have to learn by checking with their parents - you show them a dog and teach them the word; they will apply it to cats and goats, and you have to correct them.
you are like a child pointing at every animal and calling it a dog, but refusing to shift your position when your elders tell you no, that's a goat.
auggierose | 1 year ago