top | item 21527208

(no title)

Doingmything123 | 6 years ago

I admit that it's unfortunate that AI can't write out their decision logic but I would argue that is because there hasn't been enough resources put into explainable AI. Considering the increasing use of these algorithms, I don't know if that is even a high priority.

I tend to think that people are not as logical as they like to think they are, myself included. Not to say there isn't good reasoning, just that much of our decision making is emotional and habitual over some pure sense of logic.

Systems like BERT seem perfectly rational to me. Are they not just following a set of rules on a given input to modify a state?(In the most simplistic sense of computation). I think the confusion is more over what the goal of these programs are and how do we encode that. This reminds me of the ai system that would pause the game of tetris so that it could never lose. Not we it's programmers intended but still accomplished it's "goal".

discuss

order

bluGill|6 years ago

While people are not as logical as we like to be, we all are logical. I can teach someone the rules of math - my method of teaching might be (probably is) bad, but if the student tries he will learn those rules. Latter on when given a test the student can show his work and it will be much the same as every other student trying to explain his reasoning: operations like "complete the squares" have been well described and reasoned out.

Likewise chess masters can explain their thought process while looking at the next move and other chess masters will agree the lines of thought are good (they will probably ask why not some other equally good line...). We know this explanation is good because students can watch the experts explain their thought process and replicate it in games to a small extent and to better.