I am skeptical of any paper with this result, because several very plausible events would likely prove by example that computers can respond as a human would. It requires accepting certain assumptions, though.
1: Physicalism is true. Nothing exists that is not part of the physical world.
2: The physical world obeys mathematical laws, and those laws can be learned in time.
2.1: The physical contents of the human body can eventually be learned with arbitrary/sufficient fidelity.
3: Any mathematical rule can be computed by a sufficiently advanced computer. (Edit: or maybe a better assumption: the mathematical laws that underlie the universe are all computable.)
4: Computational power will continue to increase.
Subject to these assumptions, we will eventually gain the ability to simulate full physical human beings within computers. Perhaps with some amount of slowdown, but in the end, these simulated humans would be able to converse with entities outside the computer. In all likelihood, computers will pass the Turing test long before this. But even if they don't, simulated humans seem not just possible but probable, and therefore the result of this paper is likely incorrect.
I too was of the opinion that general AI might be reached in the future, but now I will take the time to read this paper. I'm not as skeptical about it as you are, because we know your assumption number 3 to be false: there are mathematical functions that aren't computable. For a proof, see [1] or google it.
[1] https://www.hse.ru/mirror/pubs/share/198271819
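The non-computability point can be made concrete with the classic diagonalization sketch: any purported halting decider can be handed a program built to contradict it. A minimal Python illustration (the decider passed in is hypothetical; that it cannot exist is the whole point):

```python
def make_counterexample(halts):
    """Given any candidate halting decider halts(f) -> bool,
    build a program the decider must get wrong."""
    def trouble():
        if halts(trouble):
            while True:   # decider said "halts", so loop forever
                pass
        return "halted"   # decider said "loops", so halt immediately
    return trouble

# Any concrete decider is wrong on its own counterexample. A decider
# that claims everything loops forever is refuted by trouble() halting:
t = make_counterexample(lambda f: False)
print(t())  # prints "halted", contradicting the decider
```

Since this construction works against any decider whatsoever, no Turing machine can decide halting in general.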
Wait, you forgot one interesting question. This paper claims that a Turing machine cannot be a general AI, but it says nothing about other models of computation. The notion that (quantum) Turing machines are the most general/powerful model of computation is only a conjecture, although a very plausible one (usually called the (quantum) extended Church–Turing thesis).
But even with that in mind, this paper does not seem convincing, including for the reasons you already mentioned. Hopefully it will spark a conversation though.
I think your premises are fair, but assumption #3 ("Any mathematical rule can be computed by a sufficiently advanced computer") is effectively ruled out by Gödel's incompleteness theorem[1] and/or the Church-Turing thesis[2].
The problem then becomes finding an approach to general AI that avoids hitting incompleteness/undecidability[3] issues. My feeling is that this would be difficult. One way to try to avoid these issues is to avoid notions of self-reference, since self-reference spawns a lot of undecidable stuff (eg, "this statement is false" is neither true nor false). It seems to me, though, that the notions of the self and self-awareness are central to human consciousness, and so unavoidable when developing a complete simulation of human consciousness. The self is probably not computable.
Obviously there could be approaches that avoid these pitfalls, but every year that goes by without much progress towards general AI makes me feel more confident in this intuition. I do think there will be lots of useful progress in specialized AIs, but I see this as analogous to developing algorithms to decide the halting problem for special classes of algorithms. General AI is a whole different beast.
But if general AI is physically impossible, how does the human brain "compute" general intelligence at all? It could be that your assumption #1 ("Physicalism is true. Nothing exists that is not part of the physical world.") is not correct. Maybe reality has "layers" and our world is some kind of simulation in another layer. Or maybe there is only one consciousness like many spiritual people and Boltzmann[4] suggest. Or maybe the human experience could be a process of trying to solve an undecidable problem and failing...
How many particles make up a human, and how many transistors does it take to simulate the behavior and interactions of all those particles? How much energy does it take to power that many transistors? How does that compare to the Sun’s output? Those are a few of the questions we need to answer before confidently predicting that we will ever be able to make a physics-based human simulator.
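For a rough sense of scale, here is a back-of-envelope estimate in Python. Every constant is an order-of-magnitude assumption, not a measured value, and the energy floor used is the Landauer limit, which real transistors miss by roughly a factor of a million:

```python
# Back-of-envelope cost of a particle-level human simulation.
ATOMS_IN_HUMAN = 7e27        # commonly cited rough estimate
LANDAUER_J_PER_OP = 2.9e-21  # kT * ln 2 at 300 K, thermodynamic floor
OPS_PER_ATOM_PER_STEP = 1    # wildly optimistic: one op per atom
STEPS_PER_SECOND = 1e15      # femtosecond timesteps for chemistry
SUN_OUTPUT_W = 3.8e26        # solar luminosity

watts = (ATOMS_IN_HUMAN * OPS_PER_ATOM_PER_STEP
         * STEPS_PER_SECOND * LANDAUER_J_PER_OP)
print(f"Power at the Landauer limit: {watts:.1e} W")                # ~2e22 W
print(f"Fraction of the Sun's output: {watts / SUN_OUTPUT_W:.1e}")  # ~5e-5
```

Even at the thermodynamic floor, a single real-time simulated human would draw roughly a billion times humanity's entire power consumption, and real hardware sits far above that floor.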
This would be a very good paper if it were titled, "What makes general AI hard", and it didn't try to make any claims about uncomputability.
Beyond the somewhat useful collection of some of the prickly points of whatever it is that humans do that we call Intelligence, this particular discussion isn't bringing much to the table in support of its incredibly strong claims. It is functionally an extended application of Searle's Chinese Room argument to these hard points, usually built on question-begging premises (for example, regarding "biography" as a component of dialogue, quote, "Because machines lack an inner mental life – as we do not know how to engineer such a thing – they also lack those capabilities".)
The paper addresses the traditional response to Searle thus: "How, then, do humans pass the Turing test? By using language, as humans do. Language is a unique human ability that evolved over millions of years of evolutionary selection pressure... machines cannot use language in this sense because they lack any framework of intentions". This is even blunter than Searle's actual counter, that there's something specific about biological machinery that makes it more capable in this regard than digital machinery. Instead, we're simply told that language is a special Human thing, Humans are not Turing-computable, and thus it's probably something computers can't do.
I am a big proponent of anti-hype in AI technology and of the idea that language cannot be separated from the general human experience of Intelligence. I'm very frustrated when people assume we've solved a given problem in AI because we've been able to tackle some toy examples. And I'm a big fan of proving what can't be done. But this is not a particularly valuable exercise in any of those things, perhaps beyond prodding some of the hubris of the current cult of "we're almost there".
Part 2 of the article is really good. It’s a shame if people are put off by the premise of the article. The authors have already pre-judged the outcome though.
“Because machines lack an inner mental life...”
Right, well that’s it then. Case closed. No point in researching general AI any more, might as well put all those researchers on unemployment benefits.
The authors do say we don’t know how to engineer a machine with an inner mental life and consciousness, and this is true. It’s why, like you (dmreedy), I’m a skeptic of claims that general AI is just around the corner. It isn’t, and the Singularity is a good long way off. Our current efforts in AI are pitifully primitive, at best many orders of magnitude dumber than a fruit fly. That doesn’t lead me to believe, however, that we will never learn to solve this problem, or that this problem is in principle not solvable.
The paper seems to basically say, as I read it, "the current approaches for modeling human behavior are unlikely to be perfect enough so no approach will ever work." I find that to be filled with a lot of unsupported strong assumptions. Specifically, it talks about modeling language with machine learning based on input-output pairs.
But, for example, if you took a human brain and deconstructed its physics down to individual chemical reactions, then you're no longer trying to predict a black box from input-output pairs. You literally have a copy of the black box in mathematical terms.
Like most of these papers, it basically boils down to positing one solution to a problem as the only solution, then claiming that solution doesn't work, so no solution would work.
> if you took a human brain, deconstructed its physics down to individual chemical reactions
Frankly, the idea that we can mathematically model 10^21 simultaneous [unobserved] chemical reactions in an individual's head in real time sufficiently well to result in a generalised model of cognition which can be applied to other uses seems more of a stretch than modelling language with ML...
Skimmed a bit and found some snippets, from which I can't take this paper seriously, as it dismisses unsupervised learning / language models over large datasets. Yes, sec 4.3.4 briefly discusses recent work in this area, but only briefly, and dismisses it by cherry-picking the least positive result of many.
"Only if we have a sufficiently large collection of input-output tuples, in which the outputs have been appropriately tagged, can we use the data to train a machine so that it is able, given new inputs sufficiently similar to those in the training data, to predict corresponding outputs"
This ignores recent work with large language models that do generalize, zero-shot, to novel tasks.
"supervised learning with core technology end-to-end sequence-to-sequence deep networks using LSTM (section 4.2.5) with several extensions and variations, including use of GANs"
This reads like something generated from an LM (e.g. GPT-2):
* Where is any mention of attention or Transformer?
* GANs? Have any recent works used GANs successfully for text? There are a few, e.g. CycleGAN, but not widespread afaict.
Which work is that?

"Turing machines can only compute what can be modelled mathematically, and since we cannot model human dialogues mathematically, it follows that Turing machines cannot pass the Turing test."
It feels more like an argument that chatbots will never exhibit general AI.
Which ought not to be controversial. The point of the Turing test isn't to provide a blueprint (just optimize human dialogue and you'll eventually get general AI) but a test to see that your general AI works. You might build a general AI that fails the Turing test, but you won't be able to pass the test without a general AI. That's the idea.
Unfortunately people have taken the wrong idea from the Turing Test and decided to attack the "faking human communication" thing directly. Which is fun! But anyone who in 2019 thinks that better chatbots will eventually develop general AI is delusional. I don't know if anyone with more than a passing interest actually does believe this, so it feels like this paper is arguing against a straw man.
"Passing the strong form of the test would indeed be clear evidence of general Artificial Intelligence. But this will not happen in the short- or mid-term."
To me this is a more realistic claim, but undermines the rest of the paper. The title and abstract claim that Turing machines cannot pass the Turing test (with the implication being that they can /never/ pass the Turing test), while that quote says that computers cannot pass the Turing test now or in the near future. The latter is a much weaker claim, but seems to actually be supported by the paper. As a disclaimer I only skimmed the paper.
And that is why I find the paper very unconvincing. We do not have a good model right now, but it is shortsighted to claim we will never have one. The whole statement is completely circular and tautological.
How does that sentence apply to playing chess, go or StarCraft II?
We can easily simulate humans using an extended version of Lattice QCD [1] that considers the other forces, and get an accurate simulation of a human that can talk. It is discrete, so it is easy to model. The only problem is the scale [2], so we can model humans mathematically as well as we can model playing chess, go or StarCraft II.
[2] I'm not sure about the state of the art here, but I guess the biggest models have a few dozen particles. For a human you need something like 10^28 particles, and a human with a home needs more [3]. And the complexity of the calculation grows exponentially, so the run time is like e^(10^28) times bigger than the current calculations, but mathematically it doesn't matter.
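To see how "mathematically it doesn't matter" collides with practice, a couple of lines of Python counting the decimal digits of that e^(10^28) slowdown factor (using the rough 10^28 particle guess from above):

```python
import math

PARTICLES = 1e28  # the rough particle count guessed above

# Number of decimal digits in e^N is about N * log10(e).
digits = PARTICLES * math.log10(math.e)
print(f"e^(10^28) has about {digits:.1e} decimal digits")  # ~4.3e27
```

The slowdown factor itself has on the order of 10^27 digits, about as many digits as there are atoms in a person; it cannot even be written down, let alone waited out.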
"We don't know how to do it, therefore it is impossible" is silly.
A real result would be "We prove that human-equivalent intelligence is impossible", which would be quite a shocker since we have the existence proof of actual humans.
Since 1950, when Alan Turing proposed what has since come to be called the Turing test, the ability of a machine to pass this test has established itself as the primary hallmark of general AI.
That's... not true. I mean, to the general public at large, sure, they think "the Turing test is the hallmark of AI." But I don't think any serious AI researchers actually agree with that sentiment. And for good reason: among others, the fact that "programming" a machine to pass the Turing test is basically programming it to lie effectively. A useful skill to have in some contexts, perhaps, but not exactly the defining trait of intelligence. Beyond that, the "Turing Test" (or "Imitation Game") as originally specified, if memory serves correctly, was fairly under-specified with regards to rules, constraints, time, etc.
This whole thing also blurs the distinction between "human level intelligence" and "human like intelligence". It seems reasonable to think that we could build a computer with intelligence every bit as general as that of a human being, and which would still fail the Turing Test miserably. Why? Because it wouldn't actually have human experiences and therefore - unless trained to lie - would never be able to answer questions about human experiences. "Have you ever fallen down and busted your face?" "Did it hurt like hell?" "Did you ever really like somebody and then they blew you off and you felt really depressed for like a week?", "have you ever been really pissed off when you caught a friend lying to you?" etc. An honest computer with "human level" intelligence would be easily distinguishable as a computer when faced with questions like that, but it might still be just as intelligent as you or I.
The paper does not have any redeeming qualities and the title and abstract do not even align with the content.
In my opinion, conversation and other high level skills are sort of the icing on the cake of general intelligence. I believe that the key abilities that enable general intelligence are those that humans share with many other animals.
So I think that a research goal of animal-like intelligence will give the most progress as long as the abilities of more intelligent animals like mammals are the goal.
I think that people who have worked closely with animals or had a pet will more easily recognize that.
Animals adapt to complex environments. They take in high bandwidth data of multiple types. They have a way to automatically create representations that allow them to understand, model and predict their environment. They learn in an online manner.
No software approaches true emulation of the subtleties of behavior and abilities of an animal like a cat or a dog.
Obviously it's another step to say that leads to human intelligence. I'm not trying to prove it, but will just say that it seems mainly to be a matter of degree rather than quality. If cats and dogs are not convincing for you, look at the complexity of chimpanzee behavior.
So this is just a half-baked comment on a thread, and I would not try to publish it, but I don't think the paper is actually much more rigorous, and yet we are supposed to take it seriously.
arxiv is amazing and we should not change it, but you have to keep in mind that there is literally zero barrier for entry, and anyone's garbage essay can get on there with the trappings of real academic work. So you just have to read carefully and judge on the merit or total lack thereof.
Unless this work disproves the Church–Turing thesis, I suppose it can safely be disregarded.
Well, unless you 1) want to ascribe supernatural powers to the human brain, or 2) assert that human intelligence is not general. The little cynic in me is gleefully considering option 2 right now...
By this argument, prop planes, jets, helicopters and rockets don't fly because they don't flap their wings like general flying creatures do.
To me, it seems the question of general AI is bordering on semantic word games. We'll always come up with new reasons something isn't generally intelligent this way.
It is. It is a thorough study of the necessary components of real human dialogue, and a well-defended claim that there is no model, nor even any existing TYPE of model, which can model human dialogue. Human dialogue, the paper says, is a temporal process, and the two mathematical models for such processes--differential and stochastic--are insufficient.
From the paper: "For example, it is not conceivable that we could create a mathematical model that would enable the computation of the appropriate interpretation of interrupted statements, or of statements made by people who are talking over each other, or of the appropriate length of a pause in a conversation, which may depend on context (remembrance dinner or cocktail party), on emotional loading of the situation, on knowledge of the other person’s social standing or dialogue history, or on what the other person is doing – perhaps looking at his phone – when the conversation pauses."
Optimists in the comments here have hope for advances in mathematics that would give us a new method for modeling that could be applied. Maybe their hope isn't unfounded. I'm just a dude who read an academic paper. But I did enjoy it.
I only skimmed through it, but I cannot take it seriously. They talk about general mathematical proofs, but it feels more like a few very arbitrarily chosen definitions, without addressing the most standard argument ("what is so difficult about simulating a human brain in principle, aside from the (not fundamental) problem of needing a big computer and precise classical measurements?").
Hasn't the Turing test been discredited as a test of general AI?
Overall from reading the abstract it seems like a pretty obvious conclusion. Same applies to robotics where the only successful cases are very constrained.
It has recently occurred to me to ask why we have decided to try to solve autonomous driving before solving other seemingly easier robotics problems. Other than robo-vacuums, which aren't particularly complex, we have jumped straight to trying to solve one of the hardest unconstrained robotics and AI challenges in automotive environments.
edit: Getting downvoted; if you disagree, could you reply with why?
Self-driving R&D is rewarded in stock valuations. First, because you can create great-looking demos and hide the real difficulties. Second, because you can say you will be selling trips, and have network effects, which seems to be a much more profitable proposition than selling robots.
Can you build a similar story for domestic robots? It would be much harder. And without that, you can't invest long term.
>Hasn't the Turing test been discredited as a test of general AI?
I'm not sure it's been discredited so much as 1) Turing didn't introduce the test as an explicit test of intelligence, and 2) there have always been criticisms of trying to use the test for this purpose.
>It has recently occurred to me to ask why we have decided to try to solve autonomous driving before solving other seemingly easier robotics problems. Other than robo-vacuums, which aren't particularly complex, we have jumped straight to trying to solve one of the hardest unconstrained robotics and AI challenges in automotive environments.
I'd say it's because other problems are actually difficult in hidden ways or lack much of a value proposition given existing mechanical aids. Cars are in some ways simple because they have plenty of space for electronics and have an easy to automate set of controls. That's not even getting into industrial robotics which is very popular.
>> It has recently occurred to me to ask why we have decided to try to solve autonomous driving before solving other seemingly easier robotics problems.
As far as I can tell, a few years ago, Google decided it was a good idea and then everyone else followed suit, because it's Google so they must be on to something.
Mind, there was earlier work on self-driving cars that was just as impressive as modern efforts, but you rarely hear about it.
Apart from the obvious value of solving the problem, part of the point of taking moonshots is that you invent a whole lot of valuable stuff along the way, even if you fail.
And in this case I mean moonshots literally, because NASA was one.
[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
[2] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
[3] https://en.wikipedia.org/wiki/Undecidable_problem
[4] https://en.wikipedia.org/wiki/Boltzmann_brain
[1] https://en.wikipedia.org/wiki/Lattice_QCD
[3] Do you think that's air you're breathing now?
[+] [-] 13415|6 years ago|reply
[+] [-] dsr_|6 years ago|reply
A real result would be "We prove that human-equivalent intelligence is impossible", which would be quite a shocker since we have the existence proof of actual humans.
[+] [-] Conjoiner|6 years ago|reply
[+] [-] mindcrime|6 years ago|reply
That's... not true. I mean, to the general public at large, sure, they think "the Turning test is the hallmark of AI." But I don't think any serious AI researchers actually agree with that sentiment. And for good reason: among others, the fact that "programming" a machine to pass the Turing test is basically programming it to lie effectively. A useful skill to have in some contexts, perhaps, but not exactly the defining trait of intelligence. Beyond that, the "Turing Test" (or "Imitation Game") as originally specified, if memory serves correctly, was fairly under-specified with regards to rules, constraints, time, etc.
This whole thing also blurs the distinction between "human level intelligence" and "human like intelligence". It seems reasonable to think that we could build a computer with intelligence every bit as general as that of a human being, and which would still fail the Turing Test miserably. Why? Because it wouldn't actually have human experiences and therefore - unless trained to lie - would never be able to answer questions about human experiences. "Have you ever fallen down and busted your face?" "Did it hurt like hell?" "Did you ever really like somebody and then they blew you off and you felt really depressed for like a week?", "have you ever been really pissed off when you caught a friend lying to you?" etc. A honest computer with "human level" intelligence would be easily distinguishable as a computer when faced with questions like that, but it might still be just as intelligent as you or I.
[+] [-] ilaksh|6 years ago|reply
In my opinion, conversation and other high level skills are sort of the icing on the cake of general intelligence. I believe that the key abilities that enable general intelligence are those that humans share with many other animals.
So I think that a research goal of animal-like intelligence will give the most progress as long as the abilities of more intelligent animals like mammals are the goal.
I think that people who have worked closely with animals or had a pet will more easily recognize that.
Animals adapt to complex environments. They take in high bandwidth data of multiple types. They have a way to automatically create representations that allow them to understand, model and predict their environment. They learn in an online manner.
No software approaches true emulation of the subtleties of behavior and abilities of an animal like a cat or a dog.
Obviously it's another step to say that leads to human intelligence. I'm not trying to prove it, but will just say that it seems mainly to be a matter of degree rather than kind. If cats and dogs are not convincing for you, look at the complexity of chimpanzee behavior.
So this is just a half baked comment on a thread, and I would not try to publish it, but I don't think that the paper is actually much more rigorous and yet we are supposed to take it seriously.
arXiv is amazing and we should not change it, but you have to keep in mind that there is literally zero barrier to entry, and anyone's garbage essay can get on there with the trappings of real academic work. So you just have to read carefully and judge on the merit or total lack thereof.
[+] [-] _0ffh|6 years ago|reply
Well, unless you 1) want to ascribe supernatural powers to the human brain, or 2) assert that human intelligence is not general. The little cynic in me is gleefully considering option 2 right now...
[+] [-] ksaj|6 years ago|reply
To me, it seems the question of general AI is bordering on semantic word games. We'll always come up with new reasons something isn't generally intelligent this way.
[+] [-] czr|6 years ago|reply
[+] [-] Bootvis|6 years ago|reply
[+] [-] johnfactorial|6 years ago|reply
From the paper: "For example, it is not conceivable that we could create a mathematical model that would enable the computation of the appropriate interpretation of interrupted statements, or of statements made by people who are talking over each other, or of the appropriate length of a pause in a conversation, which may depend on context (remembrance dinner or cocktail party), on emotional loading of the situation, on knowledge of the other person’s social standing or dialogue history, or on what the other person is doing – perhaps looking at his phone – when the conversation pauses."
Optimists in the comments here have hope for advances in mathematics that would give us a new method for modeling that could be applied. Maybe their hope isn't unfounded. I'm just a dude who read an academic paper. But I did enjoy it.
[+] [-] krastanov|6 years ago|reply
[+] [-] dooglius|6 years ago|reply
[+] [-] dooglius|6 years ago|reply
[+] [-] sctb|6 years ago|reply
https://news.ycombinator.com/newsguidelines.html
[+] [-] hellllllllooo|6 years ago|reply
Overall, from reading the abstract, it seems like a pretty obvious conclusion. The same applies to robotics, where the only successful cases are very constrained.
It has recently occurred to me to ask why we have decided to try to solve autonomous driving before solving any other seemingly easier robotics problems. Other than robo-vacuums, which aren't particularly complex, we have jumped straight to trying to solve one of the hardest unconstrained robotics and AI challenges in automotive environments.
edit: I'm getting downvoted; if you disagree, could you reply with why?
[+] [-] petra|6 years ago|reply
Second, because you can say you will be selling trips, and have network effects, which seems to be a much more profitable proposition than selling robots.
Can you build a similar story for domestic robots? It would be much harder. And without that, you can't invest long term.
So robotics is advancing incrementally.
[+] [-] ghaff|6 years ago|reply
I'm not sure discredited so much as 1.) Turing didn't introduce the test as an explicit test of intelligence and 2.) there have always been criticisms of trying to use the test for this purpose.
Wikipedia has a pretty good run-down: https://en.wikipedia.org/wiki/Turing_test#Weaknesses
[+] [-] marcinzm|6 years ago|reply
I'd say it's because other problems are actually difficult in hidden ways or lack much of a value proposition given existing mechanical aids. Cars are in some ways simple because they have plenty of space for electronics and have an easy to automate set of controls. That's not even getting into industrial robotics which is very popular.
[+] [-] YeGoblynQueenne|6 years ago|reply
As far as I can tell, a few years ago, Google decided it was a good idea and then everyone else followed suit, because it's Google so they must be on to something.
Mind, there was earlier work on self-driving cars that was just as impressive as modern efforts, but you rarely hear about it.
Here: You-Again Schmidhuber got the works:
http://people.idsia.ch/~juergen/robotcars.html
[+] [-] munchbunny|6 years ago|reply
And in this case I mean moonshots literally, because NASA was one.