top | item 18719936

kahoon | 7 years ago

I don't understand this fixation on symbolic reasoning. Do any other animals practice this? If the answer is no, then it is probably not the most important milestone on the way to AGI, or at least not the one we should currently be aiming for. Right now we cannot replicate the cognition of a mouse. Feels like we want to go to Mars before figuring out how to build a rocket.

evrydayhustling|7 years ago

Seconded. Even if animals do symbolic reasoning, they do it on top of hardware based on continuous physical dynamics, more similar to DNNs... So why not build on that platform?

I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence... But I don't see where there is evidence that symbolic reasoning is either necessary or sufficient for AGI, except people describing how they think their brain works.

Related: there are a lot of statements that symbolic or rule-based systems do better / as well as / almost as well as neural methods. Citation please, I'd love a map of which ML problems are still best solved with symbolic systems. (Sincerely - it's not that I expect there aren't any.)

kahoon|7 years ago

> I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence...

Good point, we wouldn't have AlphaZero now if we only relied on biological inspiration. Nature hardly ever performs Monte Carlo Tree Search (though I'm not sure this is entirely true, see slime mold searching for food: https://thumbs.gfycat.com/IdealisticThirdCalf-size_restricte...).
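For anyone unfamiliar, the heart of MCTS is a statistical selection rule, not anything symbolic. A minimal sketch of the UCB1 rule that drives the search, in its flat single-decision form (the arms and their payoff distributions here are made up purely for illustration):

```python
import math
import random

def ucb1(total_value, visits, parent_visits, c=1.4):
    """UCB1 score: exploitation (mean payoff) plus an exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited arms are always tried first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def mcts_choose(arms, n_iterations=2000, seed=0):
    """Flat Monte Carlo search with UCB1 selection.

    `arms` maps a move name to a sampler of its (noisy) payoff.
    Returns the most-visited move, the standard MCTS decision rule.
    """
    rng = random.Random(seed)
    stats = {a: [0.0, 0] for a in arms}  # arm -> [total_value, visits]
    for t in range(1, n_iterations + 1):
        # Selection: pick the arm maximising the UCB1 score.
        arm = max(stats, key=lambda a: ucb1(stats[a][0], stats[a][1], t))
        # Simulation: one random rollout of that arm.
        reward = arms[arm](rng)
        # Backpropagation: update the arm's statistics.
        stats[arm][0] += reward
        stats[arm][1] += 1
    return max(stats, key=lambda a: stats[a][1])

# Two noisy moves; "b" pays off more on average, so the search
# should concentrate its visits there.
arms = {
    "a": lambda rng: rng.gauss(0.4, 0.2),
    "b": lambda rng: rng.gauss(0.6, 0.2),
}
best = mcts_choose(arms)
```

In full MCTS this selection/backpropagation loop runs at every node of a growing game tree rather than over a flat set of arms, but the UCB1 trade-off is the same.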

taneq|7 years ago

The thing is, whatever the hell it is that human brains actually do in the background to produce our 'understanding' of the world and our ability to synthesize new ways to manipulate it, we're also very good at back-fitting explanations based on symbolic reasoning. So it looks like machines need symbolic reasoning to replicate human abilities, whereas I'd bet a dollar that actually, we're doing something quite different (and messy and Bayesian and statistical) in the background and then, using the same process, coming up with a story to explain our outcome semantically. It's not insight so much as parallel construction.

barrkel|7 years ago

I fully agree, as I wrote in my other comment in here. Logical symbolic reasoning is usually post-hoc rationalisation built constructively to come to an already held conclusion that "feels right". It's rare that someone changes their mind due to logic, especially if the topic isn't abstract and has real-world consequences and emotional engagement.

Gibbon1|7 years ago

I think the fixation on symbolic reasoning comes from ignorance of how hard classification is versus how hard pure mechanical symbolic operations are for humans. It's easy to make the mistake of thinking that since a computer can rapidly multiply two numbers together (hard for humans), it must be operating at a higher level than human brains.

Turns out this is wrong. Human brains are very efficient.

mehh|7 years ago

> Human brains are very efficient.

At some things, not all.

Subsymbolic systems, such as ANNs, are clearly good at some things, and symbolic systems are better at others.

It is argued that symbolic reasoning is required for what we might call higher levels of intelligence (let's assume this is correct).

Symbolic systems have struggled with grounding a symbol to something in the physical world, because it's messy and complex, i.e. the area where subsymbolic systems play best.

If we assume that ANNs are approximately akin to natural brains, then can we take it that they are examples of a subsymbolic system able to, with the correct architecture, produce (perhaps the wrong word) a symbolic reasoning system?

Perhaps this emergence on top of the subsymbolic processing is what humans (and others, to varying degrees) possess. Perhaps GOFAI suffered in the past because it was going top down, and never went down to the subsymbolic level to ground its symbols.

Perhaps ANNs struggle because they don't go up to symbolic reasoning.

And perhaps ANNs (or organic brains) evolved where reaction/perception gave the critical survival advantage, and symbolic thought only became possible and beneficial much later, on hardware that wasn't necessarily developed for it in the most efficient way.

Having believed for 20+ years that ANNs are sufficient for AGI, and possibly offer an elegant solution, I currently think they are not the most efficient route at this time (nor plausible with current compute/hardware, perhaps not for many years, probably not in my lifetime). Practical progress, imho, is likely to come from hybridisation of ANNs and logic (though I'm not referring to hand-baked rules), and I'd even propose that mixed hardware might supersede a pure ANN, or what evolution has provided in the brain.
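To make the hybridisation idea concrete, here is a toy sketch (every function name, rule, and threshold is invented for illustration, and the "perception" step is a stand-in for a real ANN): a subsymbolic scorer grounds raw input into discrete symbols, and a forward-chaining rule layer then reasons over those symbols.

```python
def perceive(pixels):
    """Stand-in for an ANN: map raw input to symbol confidences."""
    brightness = sum(pixels) / len(pixels)
    return {"light": brightness, "dark": 1.0 - brightness}

def ground(confidences, threshold=0.5):
    """Symbol grounding: keep only symbols perception is confident in."""
    return {s for s, p in confidences.items() if p >= threshold}

# (premises, conclusion): if all premises hold, infer the conclusion.
# In a real hybrid system these rules might themselves be learned.
RULES = [
    ({"light"}, "daytime"),
    ({"daytime"}, "lights_off"),
]

def infer(facts):
    """Forward chaining over the symbolic layer until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Bright input -> grounded symbol "light" -> chained inferences.
facts = infer(ground(perceive([0.9, 0.8, 0.7])))
```

The division of labour mirrors the argument above: the messy, continuous part of the problem stays subsymbolic, and the discrete inference happens only after grounding.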

mindgam3|7 years ago

100% agree. I am terrible at mental arithmetic, but I am exceedingly good at performing symbolic operations playing bullet chess. It's primarily a visual or geometric calculation, not purely abstract like math.

I think most people don't realize that our brains have this ability. But all you need to do is spend a few months learning chess and you'll see for yourself.