VSerge | 5 months ago
The problem here is a child coming to believe this system is reliable when it is not. For now, the unreliability is obvious, since ChatGPT hallucinates on a very regular basis. But it will become much harder to notice if/when ChatGPT becomes almost reliable while still stating wrong things with complete confidence. If such models could reliably say when they don't know something, that would be a big step toward addressing this specific objection, but it still wouldn't solve the other problems I mentioned.