zugi | 1 month ago
They said ethics demand that any AI that is going to pass judgment on humans must be able to explain its reasoning. An if-then rule, or even a statistical correlation between A and B, would qualify. Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it must give you an explanation you can challenge, fix, or at least understand.
LLMs may be able to provide that, but it would have to be carefully built into the system.
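As a minimal sketch of the rule-based version, here is what a challengeable denial might look like; the field names and thresholds are invented for illustration, not from any real underwriting system:

```python
# Sketch of an explainable if-then denial: every rule that fires is
# recorded, so the applicant gets concrete reasons they can dispute or fix.
# All fields and thresholds are hypothetical.

def evaluate_loan(applicant: dict) -> tuple[bool, list[str]]:
    reasons = []
    if applicant["credit_score"] < 640:
        reasons.append("credit score below 640")
    if applicant["debt_to_income"] > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    if applicant["months_employed"] < 12:
        reasons.append("less than 12 months at current employer")
    approved = not reasons
    return approved, reasons

approved, reasons = evaluate_loan(
    {"credit_score": 610, "debt_to_income": 0.35, "months_employed": 24}
)
print("approved" if approved else "denied: " + "; ".join(reasons))
# -> denied: credit score below 640
```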
nemomarx | 1 month ago
zugi | 1 month ago
That's a great point: funny, sad, and true.
My AI class predated LLMs. The implicit assumption was that the explanation had to be correct and verifiable, which may not be achievable with LLMs.
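For contrast with a free-text LLM rationale, here is a rough sketch (weights invented) of that pre-LLM framing: a linear scorer whose explanation is its own arithmetic, so the stated reasons are correct and verifiable by anyone holding the weights:

```python
# Sketch of an explanation that is verifiable by construction: the stated
# reasons ARE the model's computation, so checking them is arithmetic,
# not trust in a generated rationale. Weights and features are invented.

WEIGHTS = {"credit_score": 0.01, "debt_to_income": -5.0, "months_employed": 0.02}
BIAS = -7.0
THRESHOLD = 0.0

def score_with_explanation(applicant: dict):
    # Each feature's exact contribution to the score is the explanation.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values()) + BIAS
    decision = "approved" if score >= THRESHOLD else "denied"
    return decision, score, contributions

decision, score, contributions = score_with_explanation(
    {"credit_score": 610, "debt_to_income": 0.5, "months_employed": 6}
)
print(decision, round(score, 2))  # -> denied -3.28
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {c:+.2f}")
```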
rilindo | 1 month ago
That could get interesting, as most companies will not provide feedback if you are denied employment.