gsuuon | 1 year ago
This makes sense; effective post-mortems don't focus on assigning blame. They try to identify and fix the problem. The issue with AI is that it's such a black box right now that this process is likely not feasible. If an AI makes a decision that turns out to be wrong, there's no reliable way to identify and fix the root cause. You can prompt-engineer, fine-tune, or re-train the model, but in the end you can only hope the issue has been fixed. I think the black-box nature of AI makes it different from other systems we use.