AI will almost certainly make executive decisions; it already makes lower-level ones. The company that runs the AI can be held accountable (meaning less likely OpenAI or the foundation-model provider, and more likely the company calling LLMs to make decisions on car insurance, etc...)
chasing|1 year ago
owlbite|1 year ago
You end up with "Computer says shoot" and so many cooks involved in the software chain that no one can feasibly be held accountable except maybe the chief of staff or the president.
themanmaran|1 year ago
The person who clicks the "Approve" / "Deny" button is likely an underwriter looking at info on their screen.
The info they're looking at gets aggregated from a lot of sources. They have the insurance contract. Maybe one part is an AI summary of the police report. Another part is a repair estimate that gets synced over from the dealership. A list of prior claims this person has. Probably a dozen other sources.
Now what happens if this person makes a totally correct decision based on their data, but that data was wrong because the _syncFromMazdaRepairShopSFTP_ service got the quote data wrong? Who is liable? The person denying the claim, the engineer who wrote the code, AWS?
In reality, it's "the company," insofar as fault can be proven. The underlying service providers they use don't really factor into that decision. AI is just another tool in that process that (like other tools) can break.
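One way to make the "whose data was wrong?" question answerable is to keep provenance on every field the underwriter sees. A minimal sketch (names like `dealership_sftp_sync` are invented for illustration, not from any real system):

```python
from dataclasses import dataclass

# Tag each piece of claim data with the service (or AI summary) that
# produced it, so an audited decision can be traced back to its inputs.

@dataclass
class ClaimField:
    name: str
    value: object
    source: str  # e.g. "police_report_ai_summary", "dealership_sftp_sync"

def build_claim_file(fields):
    """Aggregate fields into one claim file, keeping per-field provenance."""
    return {f.name: f for f in fields}

claim = build_claim_file([
    ClaimField("repair_estimate", 4200, "dealership_sftp_sync"),
    ClaimField("incident_summary", "rear-end collision", "police_report_ai_summary"),
    ClaimField("prior_claims", 2, "internal_claims_db"),
])

# If the repair estimate turns out to be wrong, the audit trail points at
# the syncing service, not at the underwriter who trusted it.
print(claim["repair_estimate"].source)  # dealership_sftp_sync
```

With that trail in place, "fault can be proven" stops meaning blaming whoever clicked the button and starts meaning identifying which upstream input was bad.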
almosthere|1 year ago
Just because an automated decision system exists does not mean an out-of-band (OOB) correctional measure shouldn't also exist.
In other words: if AI handles 99% of cases correctly but fails on 1%, let the half of that 1% who are angry enough to email staff get a second, human decision. That fallback system still saves the company millions per year.
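A back-of-envelope version of that claim (all numbers here are invented for illustration; the comment only gives the 99% / 1% / 50% split):

```python
# Hypothetical arithmetic for the automation-plus-appeals fallback.
total_claims = 1_000_000
failure_share = 0.01            # cases the AI gets wrong
appeal_share = 0.50            # wronged customers who actually email staff
cost_per_manual_review = 25.0  # assumed dollars per human-handled case

# Without automation: every claim gets a manual review.
baseline_cost = total_claims * cost_per_manual_review

# With automation: only appealed failures get a second, human decision.
appeals = total_claims * failure_share * appeal_share
automated_cost = appeals * cost_per_manual_review

print(f"appeals handled by humans: {appeals:,.0f}")
print(f"savings: ${baseline_cost - automated_cost:,.0f}")
```

Under these assumed numbers the company pays for 5,000 human reviews instead of a million, which is where the "millions per year" comes from; the real figure obviously depends on claim volume and review cost.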