
avikalp | 1 year ago

I have had a similar discussion with a fellow On-Deck Founder, and here is where we reached:

- More than being "good enough", it is about taking responsibility. A human can make more mistakes than an AI and still be the more appropriate choice, because humans can be held responsible for their actions. AI, by its very nature, cannot be 'held responsible' -- a point that years of research in the field of "Responsible AI" have converged on.
- To completely automate anything with AI, you need a way to trivially verify whether it did the right thing or not. If the output cannot be verified trivially, you are just changing the nature of the job, and it is still a job for a human being (like the staff you mentioned who remotely control Waymos when something goes wrong).
- If an action is not trivially verifiable and the AI's output reaches the end-user directly, without a human-in-the-loop, then the creator is taking a massive risk -- which usually doesn't make sense for a business when it comes to mission-critical activities.

In Waymo's case, they are taking massive risks because of Google's backing. But it is not about being 'good enough'. It is about the results of the AI being trivially verifiable -- which, in the case of driving, is true. You just need four yes/no answers: Did the customer reach where they wanted? Are they safe? Did they arrive on time? Are they happy with the experience?
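To make the "trivially verifiable" idea concrete, here is a minimal sketch of the pattern being described: accept the AI's output only if every yes/no check passes, otherwise route it to a human. All names here are hypothetical illustrations, not anything Waymo actually runs:

```python
from dataclasses import dataclass

@dataclass
class RideOutcome:
    reached_destination: bool  # did the customer get where they wanted?
    arrived_safely: bool       # no incidents en route?
    on_time: bool              # within the promised ETA?
    customer_satisfied: bool   # e.g. post-ride rating above a threshold

def verify(outcome: RideOutcome) -> bool:
    """Trivial verification: every check is a plain yes/no."""
    return all((
        outcome.reached_destination,
        outcome.arrived_safely,
        outcome.on_time,
        outcome.customer_satisfied,
    ))

def handle_ride(outcome: RideOutcome) -> str:
    # If the trivial checks pass, the loop closes with no human effort.
    if verify(outcome):
        return "close ticket"
    # Otherwise the job hasn't been automated away -- it has moved to a
    # human-in-the-loop, like the remote operators mentioned above.
    return "escalate to human operator"

if __name__ == "__main__":
    good = RideOutcome(True, True, True, True)
    late = RideOutcome(True, True, False, True)
    print(handle_ride(good))  # close ticket
    print(handle_ride(late))  # escalate to human operator
```

The point of the sketch is the shape of the check, not the checks themselves: when `verify` is a handful of booleans, automation scales; when it requires judgment, you have just renamed the human's job.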


Herring | 1 year ago

I'd be really hesitant to call anything involving humans and human judgement under uncertainty trivial. What if the customer wants the car to drive aggressively, maybe speed a little where it "seems" safe? Should the car stop for an object that might be a plastic bag or might be a child's backpack? Even human drivers are difficult to "verify", because accidents and traffic violations depend on interpretations of events, which is why we so often end up in court.