In the sense of just completely making up references out of nowhere? Very low. Humans make mistakes, of course, but they tend not to be so egregious. Grounding a submission on a made-up reference is quite likely to be fatal to your case in a way that most human errors aren't.
This is the important bit that's always missing from discussions about LLM/AI applications to existing industries: what is the rate of mistakes for a human worker?