And no one should be surprised by this. The recent NN advances do nothing to address human-style symbolic reasoning. All we have is a much more powerful function approximator with drastically increased capacity (very deep networks with billions of parameters) and a scalable training scheme (SGD and its variants).
Such architectures work great for differentiable data, such as images and audio, but the improvements on natural language tasks are only incremental.
I was thinking maybe DeepMind's RL+DL is the path to AGI, since it does offer an elegant and complete framework. But it seems even DeepMind has had trouble getting it to work in more realistic scenarios, so maybe our modelling of intelligence is still hopelessly romantic.
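For what it's worth, the "function approximator + SGD" recipe can be sketched in a few lines of numpy (a toy illustration only, not any particular system): a small one-hidden-layer network fit to sin(x) by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: approximate y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)

# One hidden layer of 32 tanh units.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(3000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # linear output
    err = pred - y                      # error signal for squared loss
    # Backward pass: gradients of the mean squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    # Gradient descent updates.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(mse)  # small after training
```

That really is the whole recipe; the scale (billions of parameters) changes, the mechanism doesn't.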
Maaaaybe. I tend to think that symbolic reasoning is a learning tool, rather than a goalpost for general intelligence. For example, we use symbolic reasoning quite extensively when learning to read a new language, but once fluent we can rely on something closer to raw processing - no more reading and sounding out character sequences. Similarly with chess - eventually we have good mnemonics for what makes a good play, and can play blitz reasonably well.
And - let's be real - a lot of human symbolic reasoning actually happens outside of the brain, on paper or computer screens. We painstakingly learn relatively simple transformations and feedback loops for manipulating this external memory, and then bootstrap it into short-term reaction via lots of practice.
I tend to think that the problems are:
a) Tightly defined / domain-specific loss functions. If all I ever do is ask you to identify pictures of bananas, you'll never get around to writing the great american novel. And we don't know how to train the kinds of adaptive or free form loss functions that would get us away from these domain-specific losses.
b) Similarly, I have a soft spot for the view that a mind is only as good as its set of inputs. We currently mostly build models that are only receptive (image, sound) or generative. Reinforcement learning is making progress on feedback loops, but I have the sense that there's still a long way to go.
c) I have the feeling that there's still a long way to go in understanding how to deal with time...
d) As great as LSTMs are, there still seems to be some shortcoming in how to incorporate memory into networks. LSTMs seem to give a decent approximation of short-term memory, but it still seems far from great. This might be the key to symbolic reasoning, though.
Writing all that down, I gotta say I agree fundamentally with the DeepMind research priorities on reinforcement learning and multi-modal models.
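To make point (d) concrete, here is one step of a standard LSTM cell in numpy (a minimal sketch; real implementations differ in layout and details). The cell state `c` is the "short-term memory" being discussed: gates decide what to erase, what to write, and what to expose.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step of a standard LSTM cell."""
    z = np.concatenate([x, h]) @ W + b            # all four gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input / forget / output gates
    g = np.tanh(g)                                # candidate memory content
    c_new = f * c + i * g                         # forget some old, write some new
    h_new = o * np.tanh(c_new)                    # expose a gated view of memory
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(0, 0.1, (n_in + n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid); c = np.zeros(n_hid)
for t in range(5):                                # run a short input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.shape, c.shape)  # (8,) (8,)
```

The multiplicative forget gate is the whole trick: memory decays only when the network decides it should, which is why this reads as "a decent approximation of short-term memory" and not much more.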
I don't understand this fixation on symbolic reasoning. Do any other animals practice this? If the answer is no, then it is probably not the most important milestone on the way to AGI, or at least not the one we should currently be aiming for. Right now we cannot replicate the cognition of a mouse. Feels like we want to go to Mars before figuring out how to build a rocket.
"Hence, if it requires, say, a thousand years to fit for easy flight a bird which started with rudimentary wings, or ten thousand for one which started with no wings at all and had to sprout them ab initio, it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years--provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials. [Emphasis added.]
The New York Times, Oct 9, 1903, p. 6."
-----
A couple of the leading minds in AGI say it's a long way away... so, just because the universe likes to give us the finger, maybe AGI is actually on the horizon. Maybe we'll look back at this in 10 years and laugh (if we're here).
Arguments like the above are platitude-level arguments.
We really don't learn anything about the problem at hand by talking in generic terms. We use these arguments when we want to justify our hopes and feelings, but there is really nothing to learn from them.
Hinton, Hassabis, Bengio and others point out that we can't 'brute force' AI development. There needs to be actual breakthroughs in the field and there may be several decades between them.
AI, brain science and cognitive science are extremely difficult fields with small advances, yet people assume that it's possible to 'brute force' AGI by just adding more computing power and doing more of the same.
Macroeconomics is probably a less complex research subject than AI or brain science, but nobody assumes that you can brute force a truly great macroeconomic model in a few years just by spending a little more resources.
For flight, the components necessary were obvious very early on: you need some kind of structure to hold you aloft and some kind of powered apparatus to propel you forwards. Once those were found, mechanical flight was achieved (and unpowered flight was already possible long before that).
What are the components of intelligence? For example, AlphaZero can solve problems that are hard for humans to solve in the domain of chess, shogi and go- is it intelligent? Is its problem-solving ability, limited as it is to the domain of three board games, a necessary component of general intelligence? Have we even made any tiny baby steps on the road to AGI, with the advances of the last few years, or are we merely chasing our tails in a dead end of statistical approximation that will never sufficiently, well, approximate, true intelligence?
These are very hard questions to answer and the most conservative answers suggest that AGI will not happen in a short time, as a sudden growth spurt that takes us from no-AGI to AGI. With flight, it sufficed to blow up a big balloon with hot air and- tadaaaa! Flight. There really seems to be no such one neat trick for AGI. It will most likely be tiny baby steps all the way up.
Interestingly, Hinton is on record as essentially saying that there's a good possibility that what's currently being done is wrong - and that we need to rethink our approach.
Mainly in the idea/concept of back-propagation. It's something I've thought about myself. For the longest time, I could never understand how it worked; then I went through Ng's "ML Class" (in 2011, which was based around Octave), and one part was developing a neural network with backprop - with the calcs done using linear algebra. It suddenly "clicked" for me; I finally understood (maybe not to the detailed level I'd like, but at the level of the general idea) how it all worked.
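That linear-algebra view can even be verified numerically: backprop is just the chain rule in matrix form, and a finite-difference check confirms it agrees (a toy single-layer example of my own, not from the course).

```python
import numpy as np

# Tiny layer y = W x with loss L = 0.5 * ||y - t||^2.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
t = rng.normal(size=3)

# Analytic gradient from backprop (chain rule): dL/dW = (y - t) x^T.
y = W @ x
grad = np.outer(y - t, x)

# Numerical gradient by central finite differences, entry by entry.
num = np.zeros_like(W)
eps = 1e-6
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        Lp = 0.5 * np.sum((Wp @ x - t) ** 2)
        Lm = 0.5 * np.sum((Wm @ x - t) ** 2)
        num[i, j] = (Lp - Lm) / (2 * eps)

diff = np.max(np.abs(grad - num))
print(diff)  # tiny: backprop matches the chain rule
```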
And while I was excited (and still am) by that revelation, at the same time I thought "this seems really overly complex" and "there's no way this kind of thing is happening in a real brain".
Indeed, as far as we've been able to find (although research continues, and there have been hints and models that may challenge this), brains (well, neurons) don't do backprop; as far as we know, there's no biological mechanism that would allow backprop to occur.
So how do biological brains learn? Furthermore, how are they able to learn from only a very few examples in most cases (vs the thousands to millions of examples needed by deep learning neural networks)?
We've come up with a very well-engineered solution to the problem, and it works - but it seems overly complex. We've essentially made an airplane that is part ornithopter, part fixed-wing, part balloon, and part helicopter. Sure it flies - but it's rather overly complex, right?
Humanity cracked the nut of heavier-than-air flight when it finally shed the idea that the wings had to flap. While it was known this was the way forward long before the Wrights or even Langley (and likely even before Lilienthal), a lot of wasted time and effort went into flying machines with flapping wings, because it was thought "that's the way birds do it, right?"
So - in addition to the idea that backprop may not be all it's cracked up to be - what if we also need to figure out the "fixed wing" solution to artificial intelligence? Instead of trying to emulate and imitate nature so closely, perhaps there's a shortcut that currently we're missing?
I do recall a recent paper that was mentioned here on HN that I don't completely understand - that may be a way forward (the paper was called "Neural Ordinary Differential Equations"). Even so, it too seems way too complex to be a biologically plausible model of what a brain does...
Behind every successful neural network is a human brain. Neural networks are a tool, an advanced tool for sure, but still just a tool. If we are looking for AGI, and assuming the brain is an AGI, then there are still many differences to resolve. For example, back propagation has not been observed in nature. Nor has gradient descent. So the core mechanisms for learning in nature have still to reveal their secrets.
> Behind every successful neural network is a human brain.
I've spent a lot of time trying to explain this to people: that there is a confluence between the human brain and the machine. People tend to look at the machine separately, which is a mistake. When I say unequivocally, 'there is no such thing as machine intelligence', I just get blank stares.
I mean, it's difficult to 'observe' gradient descent; there are no characteristic properties you can identify without specifying the relevant objective function. But most of the process theories from computational neuroscience are based on some form of gradient descent. Even if it's only implicit, you'll be able to describe the variables of the system as moving against the gradient of some function.
But yes, it's extremely unlikely that nature implements backpropagation directly, as it relies on non-local gradients.
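As a minimal illustration of that "implicit" gradient descent (a hypothetical example of my own, not any specific process theory): a leaky-integrator dynamic, written with no mention of gradients, is nonetheless exact descent on a quadratic energy.

```python
# Dynamics as usually written: dx/dt = -(x - u), a leaky integrator
# relaxing toward an input u. No objective function in sight.
# But it is exactly gradient descent on E(x) = 0.5 * (x - u)^2.
u = 2.0      # input / target value
x = -1.0     # initial state
dt = 0.1     # Euler step size

def E(x):
    return 0.5 * (x - u) ** 2

energies = []
for _ in range(100):
    x += dt * -(x - u)       # simulate the dynamics
    energies.append(E(x))    # E never increases along the trajectory

print(energies[0], energies[-1])
```

So "gradient descent was never observed" is a weaker objection than it sounds: any stable relaxation dynamic can be read this way after the fact.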
Your reasoning does not follow. To see why, take something humans already clearly created: Flight. Kerosene-type jet fuel propulsion has not been observed in nature. It is flight nonetheless.
Human flight is not as agile or energy-efficient as a dragonfly, but it is faster and stronger. Just as artificial learning may not be as sample-efficient as the human brain. It is a learning intelligence nonetheless, and we are already working with the core mechanisms of reasoning and deduction.
If you want AGI you need to give it a world to live in. The ecological component of perception is missing. Without full senses, a machine doesn't have a world to think generally about. It just has the narrow subdomain of inputs that it is able to process.
You could bet that AGI won't manifest until AI and robotics are properly fused. Cognition does not happen in a void. This image of a purely rational mind floating in an abyss is an outdated paradigm to which many in the AI community still cling. Instead, the body and environment become incorporated into the computation.
Tangential: this title is weird. As if no one but the top minds in AI knew this? This isn't big news to anyone who has done even just a modicum of AI research.
My impression is this is common among DeepMind folks and not an aberration. (See also dwiel's comment elsewhere.) It is super weird for me that Demis Hassabis says AGI is nowhere close. Is he lying? Or does he mean 10 years is not close?
The problem is when non-technical people write articles or respond to posts about Deepmind. They think all AIs are the same and that one specific AI achievement means the Matrix is coming.
When I was at ICLR a couple of years ago, a group of 10 or so researchers from DeepMind took a poll of themselves at breakfast and found the general consensus was that AGI was between 5 and 10 years away.
Research can tell you current efforts fall far short. Research can even tell you current efforts are moving incrementally towards the goal. But research won't tell you when something we don't understand will happen. "A long time" may be right, but overall it seems like the kind of situation where probability as such isn't particularly applicable.
I watched the talk linked where that quote apparently comes from, and it was really good. Thanks for sharing that. Ilya specifically says in the talk that it is unlikely but that there is sufficient lack of understanding that we can't rule it out, and that thus the questions around it are worth thinking about.
It bothers me that the quotes in this article are all cut up, in some cases ending when a sentence clearly wasn't finished. It makes it hard to judge what they are really saying here, and I wish the full interview were published.
I wonder to what extent the data being fed to these models are the issue. Or rather the problem is the systems that generate these data-sets and how representative of reality they are. If we make an app that involves humans and that data is used in a model - to what extent does user experience and other factors warp reality?
Maybe our existing methods are good enough given enough compute to reach AGI but our datasets are too low fidelity and non-representative of the problem space to reach desired results?
The problem is not the data. The problem is the need for high-quality data. Current ML is data-driven statistical learning: ML tries to learn a model that describes the distribution. It's impossible to get performance similar to the best reference implementation (the human brain) using this approach. https://i.redd.it/kvvgv6zzhtp11.png
Think of a 16-year-old human:
* it has received less than 400 million wakeful seconds of data, plus roughly 100 million seconds of sleep,
* it has made only a few million high-level cognitive decisions where feedback is important and the delay is tens of seconds or several minutes (say a few thousand per day). From just a few million samples it has learned to behave in society like a human and do human things.
* Assuming a 50 ms learning timescale on average, at the lowest level there are at most 10 billion iterations per neuron (short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes).
Humans generate a very detailed model of their environment with very little data and even less feedback. They can learn a complex concept from one example. For example, you need only one example of a pickpocket to understand the whole concept.
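The back-of-the-envelope numbers above are easy to check (assuming 16 years and roughly two thirds of the time awake; on those assumptions the sleep figure actually comes out closer to 170 million seconds than 100 million):

```python
# Rough sanity check of the sample-budget numbers for a 16-year-old.
SECONDS_PER_YEAR = 365 * 24 * 3600     # 31,536,000
total = 16 * SECONDS_PER_YEAR          # ~504 million seconds lived
awake = total * 2 // 3                 # ~336 million wakeful seconds
asleep = total - awake                 # ~168 million seconds of sleep
decisions = 16 * 365 * 3000            # "a few thousand per day" -> ~17.5M
print(awake, asleep, decisions)
```

Either way, the conclusion stands: the budget is hundreds of millions of raw seconds and only millions of feedback-bearing decisions, orders of magnitude less than what current deep learning consumes.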
Not sure how I feel about this; for one, the Kurzweilian singularity, which could largely be fueled by the advent of AGI, is both exciting and scary. The upside could forever change humanity as we know it: far greater longevity, the potential to create anything via a universal assembler[0], bringing everything feasible within the laws of physics to reality. Knowledge is the only limiting factor stopping us from doing anything which is physically possible in this universe, and in that light AGI could be an enlightenment.
On the other hand, the ubiquity of knowledge once it's available could lead any maniac to use it for the wrong purpose and wipe out humanity from their basement.
My feelings on the potential of AGI are therefore mixed. I for one have just found my particular niche in the workforce and am finally reaping the dividends of decades of hard work. Having AGI displace me and millions (or billions) of other individuals is frightening, and it definitely keeps me on my toes.
Technology changes the world; my parents both worked for newspapers and talk endlessly about how the demise of their industry after the advent of the internet is so unfortunate. Luckily for them they are both at retirement age so their livelihood was not upset by displacement.
If AGI does become a thing it will be interesting to see how millennials and gen Z react to becoming irrelevant in what would have been the peak of their careers.
Not to mention that we don't even know if general intelligence exists. All we know is that mental abilities tend to correlate, but not why they tend to correlate. And if you think about designing machines, in general, the idea of general intelligence is utterly ridiculous. Does a fast car have general speediness? Of course not, it has dozens or hundreds of discrete optimizations that all contribute in some degree to the car being faster.
I'm not sure you and the OP mean the same thing by "General Intelligence".
It seems clear that autonomous systems which can apply their computational machinery to a diverse range of problems, and can, in a diverse range of settings, formulate instrumental goals as part of a plan to attain a final goal, do exist.
Because that's what humans are, at least some of the time.
Well, we have general purpose processors. You can prove they can run any algorithm you want (i.e. are Turing complete), but also, for practical problems (i.e. the ones encountered in engineering solutions in our planet and in our universe), they give reasonable max-min performance. Analogously I don't think 'AGI' is entirely useless -- you'd expect an AGI to have some properties like being able to solve reasonably well problems found in nature and society, maybe have a motivational framework distinguishing it as a separate entity, some knowledge about the world, etc.
edit: In terms of Turing-completeness analogues, the best candidate for AGI I think would be simply brute force capability: can this agent try all possible solutions until it solves this problem? (obviously using a heuristic to prioritize) -- that is, it'd employ a form of Universal Search[1] (aka Levin Search). Humans don't necessarily pass this test rigorously because we'd always get bored with a problem and because we have finite memory. But then CPUs are not truly Turing complete either (it's "just" a good model).
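A toy version of that brute-force test (a sketch only; real Levin Search dovetails program runtimes against description lengths, which this skips): enumerate candidates in order of complexity until the problem's checker accepts one. The polynomial being solved is just a made-up example.

```python
from itertools import count

def candidates():
    # Candidate "programs": here, simply the integers 0, 1, -1, 2, -2, ...
    # ordered by magnitude as a stand-in for ordering by complexity.
    yield 0
    for n in count(1):
        yield n
        yield -n

def universal_search(check):
    """Try candidates in order until `check` accepts one.

    Loops forever if no candidate passes - which is the point: brute-force
    search is complete but gives no termination guarantee.
    """
    for tried, c in enumerate(candidates()):
        if check(c):
            return c, tried + 1  # solution, and how many candidates were tried

# Example problem: find an integer root of x^2 - 12x + 35 = 0.
sol, tried = universal_search(lambda x: x * x - 12 * x + 35 == 0)
print(sol, tried)  # finds the root 5 on the 10th candidate
```

Humans don't pass this test rigorously, as noted, but as a Turing-completeness analogue it captures the "can in principle solve anything checkable" property.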
Great interview with Hassabis from the BBC. It's meanderingly biographical, with insights about his path through internships, curiosity, startups, commitment, burnout, trusted team mates and eventual successes ...
Demis Hassabis's (true) statements here would be much more credible if DeepMind weren't currently making a mint by promoting AlphaZero to the masses as a "general purpose artificial intelligence system".
Don't believe me? Check out this series of marketing videos on YouTube by GM Matthew Sadler.
1. “Hi, I’m GM Matthew Sadler, and in this series of videos we’re taking a look at new games between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (1)
2. “Hi, I’m GM Matthew Sadler, and welcome to this review of the World Championship match between Magnus Carlsen and Fabiano Caruana. And it’s a review with a difference, because we are taking a look at the games together with AlphaZero, DeepMind’s general purpose artificial intelligence system...” (2)
3. “Hi, I’m GM Matthew Sadler, and in this video we’ll be taking a look at a game between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (3)
I could go on, but you get my point. Search YouTube for "Sadler DeepMind" and you'll see all the rest. This is a script.
But wait, you say, that's just some random unaffiliated independent grandmaster who just happens to be using an inaccurate script on his own, no DeepMind connection at all! And to that I would say, check out this same random GM being quoted directly on DeepMind's blog waxing eloquently and rapturously about AlphaZero's incredible qualities. (4)
Let's be clear. I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi. Nor do I have a problem with Demis Hassabis making headlines for stating the obvious about deep learning (that it's good at solving certain limited types of puzzles, but we are a long way from AGI; why is this controversial?).
My problem is that Hassabis is speaking out of both sides of his mouth. Increasing DeepMind/Google's value by many millions with his marketing message, while acting like he's not doing that. It feels intellectually dishonest.
To solve this, all DeepMind needs to do is stop instructing its Grandmaster mouthpieces to refer to AlphaZero as a "general artificial intelligence system". Let's see how long that takes.
"General" as in what? As opposed to reinforcement learning, in er, general? As opposed to other ANN architectures?
>> I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi.
More to the point- it's only chess, go and shogi; not games "like" those.
The AlphaZero architecture has the structure of a chessboard and the range of moves of pieces in chess, go and shogi hard-coded and you can't just take a trained AlphaZero model and apply it to a game that doesn't have either the board or the moves of those three games.
To be blunt, AlphaZero has mastered chess, go and shogi, but it can't play noughts-and-crosses.
I don't think they mean the two in the same way. AlphaZero is "general purpose artificial intelligence" because if you formulate a problem in the right way and then throw a server cluster at it for a few weeks, it often comes back with pretty good performance at solving that problem. It's probably our current best crack at creating AGI, but it's a long way from a machine that can take a very high level goal and figure out the rest for itself, which is what we usually mean by "AGI" - not just a machine that answers multiple questions, but a thing analogous to a human mind which can analyse new things, infer properties and mechanics, generalise those to new contexts, and apply that knowledge to achieve new outcomes.
If AGI (an artificial human mind with direct access to computational power of classic computers and whole Internet of information) was possible then we would probably already be living in the Travelers TV show.
As I always ask regarding this sort of story, why do we believe human intelligence is computable? The only answer I've heard is the materialist presupposition and sneers at any other metaphysic as "magic," which is not exactly a valid form of argument.
As an alternative, the human mind could be some sort of halting oracle. That's a well defined entity in computer science which cannot be reduced to Turing computation, thus cannot be any sort of AI, since we cannot create any form of computation more powerful than a Turing machine. How have we ruled out that possibility? As far as I can tell, we have not ruled it out, nor even tried.
This line of reasoning can be applied to almost any fundamental scientific discovery before it was made.
Why do we believe man can make fire? Well, dammit, we WANT to make fire. Let's figure out how to do it!
Finally, if we were able to explain the brain well with "metaphysics" it would then be just "physics". It seems that all you are saying here is that there is a mechanism that is not yet understood and it may be fundamentally different than other things we have studied so far (which seems unlikely, I might add).
Part of me almost hopes it is a halting oracle of some sort because then we could start looking into either hooking up multiple brains to a single oracle or a single brain to many oracles.
I'm not even convinced that a real AI is possible with conventional computer hardware or anything remotely similar to it. Not even considering software I get the impression there is a fundamental limitation of hardware.
Anecdotal, but nearly all of my programmer friends believe that full-blown AGI is less than a decade away.
Also, do you believe AGI is currently more a compute/hardware problem, or an algorithmic problem?
[+] [-] ionforce|7 years ago|reply
People lack nuance and critical thinking.
[+] [-] dwiel|7 years ago|reply
[+] [-] joe_the_user|7 years ago|reply
[+] [-] lainga|7 years ago|reply
[+] [-] rozim|7 years ago|reply
https://medium.com/intuitionmachine/near-term-agi-should-be-...
[+] [-] why_only_15|7 years ago|reply
[+] [-] unknown|7 years ago|reply
[deleted]
[+] [-] goolulusaurs|7 years ago|reply
[+] [-] mrdoops|7 years ago|reply
Maybe our existing methods, given enough compute, are good enough to reach AGI, but our datasets are too low-fidelity and too unrepresentative of the problem space to get us there?
[+] [-] MAXPOOL|7 years ago|reply
Think of a 16-year-old human:
* they have received fewer than 400 million wakeful seconds of data, plus roughly 100 million seconds of sleep;
* they have made only a few million high-level cognitive decisions where feedback matters and the delay is tens of seconds or several minutes (say a few thousand per day). From just a few million samples they have learned to behave in society like a human and do human things;
* assuming a 50 ms learning step on average, there are at most about 10 billion iterations per neuron at the lowest level (short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes).
Humans build a very detailed model of their environment from very little data and even less feedback. They can learn a complex concept from a single example: one encounter with a pickpocket is enough to grasp the whole concept.
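The arithmetic above checks out with a quick back-of-envelope script (the sleep-per-day and decisions-per-day figures are the rough assumptions stated in the parent comment):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

years = 16
sleep_hours_per_day = 8                         # rough assumption
total_s = years * SECONDS_PER_YEAR              # ~505 million seconds alive
sleep_s = total_s * sleep_hours_per_day / 24    # ~168 million seconds asleep
wakeful_s = total_s - sleep_s                   # ~336 million wakeful seconds

decisions_per_day = 1000                        # "a few thousand per day"
decisions = decisions_per_day * years * 365     # ~5.8 million decisions

learning_step_s = 0.05                          # 50 ms synaptic timescale
iters_per_neuron = wakeful_s / learning_step_s  # ~6.7 billion iterations

print(f"wakeful: {wakeful_s/1e6:.0f}M s, decisions: {decisions/1e6:.1f}M, "
      f"iterations per neuron: {iters_per_neuron/1e9:.1f}B")
```

All three numbers land under the parent's stated bounds (400 million wakeful seconds, a few million decisions, 10 billion iterations per neuron).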
[+] [-] nikkwong|7 years ago|reply
On the other hand, the ubiquity of knowledge once it's available could lead any maniac to use it for the wrong purpose and wipe out humanity from their basement.
My feelings on the potential of AGI are therefore mixed. I for one have just found my particular niche in the workforce and am finally reaping the dividends from decades of hard work. Having AGI displace me and millions (or billions) of individuals is frightening and definitely keeps me on my toes.
Technology changes the world; my parents both worked for newspapers and talk endlessly about how the demise of their industry after the advent of the internet is so unfortunate. Luckily for them they are both at retirement age so their livelihood was not upset by displacement.
If AGI does become a thing, it will be interesting to see how millennials and gen Z react to becoming irrelevant at what would have been the peak of their careers.
[0] https://en.wikipedia.org/wiki/Molecular_assembler
[+] [-] diminish|7 years ago|reply
https://news.ycombinator.com/item?id=18720482
[+] [-] toasterlovin|7 years ago|reply
[+] [-] mac01021|7 years ago|reply
It seems clear that autonomous systems which can apply their computational machinery to a diverse range of problems, and can, in a diverse range of settings, formulate instrumental goals as part of a plan to attain a final goal, do exist.
Because that's what humans are, at least some of the time.
[+] [-] darkmighty|7 years ago|reply
edit: In terms of Turing-completeness analogues, I think the best candidate criterion for AGI would simply be brute-force capability: can this agent try all possible solutions (obviously prioritized by a heuristic) until it solves the problem? That is, it would employ a form of Universal Search[1] (aka Levin Search). Humans don't rigorously pass this test, because we eventually get bored with a problem and because we have finite memory. But then CPUs are not truly Turing-complete either (Turing completeness is "just" a good model).
[1] http://www.scholarpedia.org/article/Universal_search
[+] [-] mikhailfranco|7 years ago|reply
https://www.bbc.co.uk/sounds/play/p06qvj98
[+] [-] mindgam3|7 years ago|reply
Don't believe me? Check out this series of marketing videos on YouTube by GM Matthew Sadler.
1. “Hi, I’m GM Matthew Sadler, and in this series of videos we’re taking a look at new games between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (1)
2. “Hi, I’m GM Matthew Sadler, and welcome to this review of the World Championship match between Magnus Carlsen and Fabiano Caruana. And it’s a review with a difference, because we are taking a look at the games together with AlphaZero, DeepMind’s general purpose artificial intelligence system...” (2)
3. “Hi, I’m GM Matthew Sadler, and in this video we’ll be taking a look at a game between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (3)
I could go on, but you get my point. Search YouTube for "Sadler DeepMind" and you'll see all the rest. This is a script.
But wait, you say, that's just some random unaffiliated independent grandmaster who just happens to be using an inaccurate script on his own, no DeepMind connection at all! And to that I would say, check out this same random GM being quoted directly on DeepMind's blog waxing eloquently and rapturously about AlphaZero's incredible qualities. (4)
Let's be clear. I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi. Nor do I have a problem with Demis Hassabis making headlines for stating the obvious about deep learning (that it's good at solving certain limited types of puzzles but that we are a long way from AGI; why is this controversial?).
My problem is that Hassabis is speaking out of both sides of his mouth. Increasing DeepMind/Google's value by many millions with his marketing message, while acting like he's not doing that. It feels intellectually dishonest.
To solve this, all DeepMind needs to do is stop instructing its Grandmaster mouthpieces to refer to AlphaZero as a "general artificial intelligence system". Let's see how long that takes.
(1) https://www.youtube.com/watch?v=2-wFUdvKTVQ&t=0m10s (2) https://www.youtube.com/watch?v=X4T0_IoGQCE&t=0m05s (3) https://www.youtube.com/watch?v=jS26Ct34YrQ&t=0m05s (4) https://deepmind.com/blog/alphazero-shedding-new-light-grand...
[+] [-] YeGoblynQueenne|7 years ago|reply
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
http://science.sciencemag.org/content/362/6419/1140
"General" as in what? As opposed to reinforcement learning, in er, general? As opposed to other ANN architectures?
>> I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi.
More to the point- it's only chess, go and shogi; not games "like" those.
The AlphaZero architecture hard-codes the structure of the board and the range of piece moves for chess, go and shogi; you can't take a trained AlphaZero model and apply it to a game that doesn't share the board or the moves of those three games.
To be blunt, AlphaZero has mastered chess, go and shogi, but it can't play noughts-and-crosses.
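To make the hard-coding concrete: the published move encodings give chess a policy head of 8x8x73 = 4672 logits, while 19x19 Go uses 361 board points plus a pass move, 362. A head trained for one has no meaningful mapping onto the other. A rough numpy sketch (the hidden width is made up; only the action counts come from the papers):

```python
import numpy as np

CHESS_ACTIONS = 8 * 8 * 73   # 4672 move logits (AlphaZero chess encoding)
GO_ACTIONS = 19 * 19 + 1     # 362: board points plus pass

hidden = 256                 # hypothetical feature width
w_chess = np.zeros((hidden, CHESS_ACTIONS))  # a "trained" chess policy head

features = np.zeros(hidden)
chess_logits = features @ w_chess            # fine: one logit per chess move

# Reusing this head for Go fails structurally: there is no way to read
# 4672 chess-move logits as 362 Go moves, let alone transfer the weights
# to a board with different geometry.
assert chess_logits.shape[0] == CHESS_ACTIONS
assert chess_logits.shape[0] != GO_ACTIONS
```

"General" in the paper's title refers to the training algorithm being reused across the three games, not to any single trained network being general.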
[+] [-] rozim|7 years ago|reply
[+] [-] taneq|7 years ago|reply
[+] [-] qwerty456127|7 years ago|reply
[+] [-] mindcrime|7 years ago|reply
How do you know we aren't?
BTW, if you hadn't noticed, Season Three just came out on Netflix. I'm champing at the bit to binge watch that... :-)
[+] [-] yters|7 years ago|reply
As an alternative, the human mind could be some sort of halting oracle. That's a well-defined entity in computer science that cannot be reduced to Turing computation, and thus cannot be any form of AI, since we cannot create any form of computation more powerful than a Turing machine. How have we ruled out that possibility? As far as I can tell, we have not ruled it out, nor even tried.
[+] [-] hackernudes|7 years ago|reply
Why do we believe man can make fire? Well, dammit, we WANT to make fire. Let's figure out how to do it!
Finally, if we were able to explain the brain well with "metaphysics" it would then be just "physics". It seems that all you are saying here is that there is a mechanism that is not yet understood and it may be fundamentally different than other things we have studied so far (which seems unlikely, I might add).
[+] [-] siekmanj|7 years ago|reply
[+] [-] mortivore|7 years ago|reply
[+] [-] magwa101|7 years ago|reply
[+] [-] hyperpallium|7 years ago|reply
[+] [-] izzydata|7 years ago|reply
[+] [-] unknown|7 years ago|reply
[deleted]
[+] [-] ludicast|7 years ago|reply
[+] [-] MR4D|7 years ago|reply
Seriously - that's a wicked funny post you had there!