top | item 36238176

congoe | 2 years ago

How could AlphaZero possibly play better chess than humans when it doesn’t even understand the history of chess theory?

RL doesn’t stop at human levels

Maken | 2 years ago

Because the entire history of chess theory is really a set of heuristics to optimize a tree search.
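The claim above can be made concrete with a classical sketch (this is not AlphaZero's actual search, which uses Monte Carlo tree search guided by a learned network): in pre-neural engines, hand-crafted heuristics play the role that opening theory and positional rules play for humans, by scoring positions and pruning an otherwise intractable tree. The toy game tree and scores below are hypothetical.

```python
# Alpha-beta minimax over an explicit toy tree. heuristic() scores
# leaf positions; the alpha/beta bounds prune branches that cannot
# affect the result -- "theory" compressed into an evaluation function.

def alphabeta(node, depth, alpha, beta, maximizing, children, heuristic):
    kids = children(node)
    if depth == 0 or not kids:
        return heuristic(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, heuristic))
            alpha = max(alpha, value)
            if alpha >= beta:  # cutoff: opponent already has a better line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, heuristic))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Hypothetical game tree: internal nodes map to children, leaves to scores.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: TREE.get(n, []), lambda n: SCORES.get(n, 0))
# best == 3: the maximizer picks branch "a", whose worst case is 3.
```

Note that the search never needs to evaluate "b2": once "b1" scores 2, branch "b" is already worse than branch "a", which is exactly the kind of pruning human heuristics approximate.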

lupire | 2 years ago

So is computer science.

wsdookadr | 2 years ago

Even if AlphaZero does play better chess, it can do absolutely nothing to explain why it played that way; its explainability is zero. Humans have to explain what they do, to themselves and to others. That is key to understanding what's happening, to communicating it, to human decision-making, and to judging what works, what doesn't, and how well or how badly it works.

Returning to the original DeepMind press release: it misinforms the public about the alleged progress. No fundamental progress was made; DeepMind did not come up with an entirely new sorting algorithm, and the improvement was marginal at best.

I maintain my opinion that AlphaDev does not understand any of the existing sorting algorithms at all.
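For context on what "marginal" refers to: AlphaDev's reported gains were in small fixed-size sorting routines (sorting networks) used inside a larger library sort, not in a new general-purpose sorting algorithm. The sketch below is a standard 3-element compare-exchange network for illustration; it is not AlphaDev's discovered instruction sequence.

```python
# A fixed sorting network for exactly three elements: the same
# comparisons happen in the same order regardless of the input,
# which is why such routines compile to short branch-light code.

def compare_exchange(a, i, j):
    """Swap a[i] and a[j] if they are out of order -- one network 'gate'."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def sort3(a):
    """Sort a 3-element list with the fixed network (0,1), (1,2), (0,1)."""
    compare_exchange(a, 0, 1)
    compare_exchange(a, 1, 2)
    compare_exchange(a, 0, 1)
    return a
```

Shaving even one instruction off a routine like this matters only because it runs enormously often inside library sorts, which is consistent with the "marginal at best" characterization above.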

Even if AI comes up with a marginal improvement to something, it is incapable of explaining what it has done. Humans (unless they're politicians or dictators) always have to explain their decisions and how they got there; they have to argue for their decisions and their thought process.

elcomet | 2 years ago

It cannot explain because (1) explaining is not necessary to become good at the game, and (2) it wasn't explicitly trained to explain.

But it's reasonable to imagine a later model trained to explain its moves. The catch is that some positions may not be explainable: they involve too much branching and too many edge cases, so the explanation would not be understandable by a human.