top | item 35714683


p4coder | 2 years ago

After seeing ChatGPT and others hallucinate and make mistakes, I was wondering if self-driving AI is inherently susceptible to similar behavior, like hallucinating objects on the road or ignoring signs. If we cannot fully understand this kind of behavior in LLMs at a fundamental level, is self-driving AI any different?

Of course, it may be that the LLM architecture makes them susceptible to this while self-driving AI is not. But the question still remains: do we understand enough about it to trust human lives to the AI?

No comments yet.