avikalp | 1 year ago
- More than being "good enough", it is about taking responsibility.
- A human can make more mistakes than an AI and still be the more appropriate choice, because humans can be held responsible for their actions. AI, by its very nature, cannot be 'held responsible' -- a point established by years of research in the field of "Responsible AI".
- To completely automate anything using AI, you need a way to trivially verify whether it did the right thing or not. If the output cannot be verified trivially, you are just changing the nature of the job, and it is still a job for a human being (like the staff you mentioned who remotely control Waymos when something goes wrong).
- If an action is not trivially verifiable and requires the AI's output to reach the end-user directly, without a human in the loop, then the creator is taking a massive risk -- which usually doesn't make sense for a business when it comes to mission-critical activities.
In Waymo's case, they are taking massive risks because of Google's backing. But it is not about being 'good enough'; it is about the results of the AI being trivially verifiable -- which, in the case of driving, is true. You just need four yes/no answers: Did the customer reach where they wanted? Are they safe? Did they arrive on time? Are they happy with the experience?
Herring | 1 year ago