I've been around AI since the end of the last big hype in the late 80s. The recent leap in machine learning has felt rather hyped to me. I don't think AGI is near.
But I find myself agreeing with this article. Strongly.
And I have long suspected that we miss a lot of the significance and opportunities in AI, because we have only one exemplar of 'higher' intelligence: a human being. AI folk are so concerned with getting computers to do the things that humans are good at that I suspect most will miss / 'refute' / deride the inflection point, because the system can't wash the dishes (or perform some other form of embodied cognition), or write poetry humans would find beautiful (or understand some other socially-conditioned cue).
The superhuman fallacy really is the bane of AI.
> The recent leap in machine learning has felt rather hyped to me. I don't think AGI is near.
I've always thought it's never too early to start allocating significant resources to AGI research and safety, given the potential impact. That said, up until very recently I agreed with your take on the situation.
What changed my mind was an article detailing the latest advances in silver nanowire mesh networks.[0]
I knew neural computing was a thing, but not that we already had a computing substrate capable of self-organizing its own neural architecture based entirely on external input, with power requirements analogous to the human brain. No firmware or software required.
One could say that human physiology remains far more complex just on the substrate front alone, what with the brain being an incredibly complex, delicate balance of chemicals and heterogeneous cells. However, this particular artificial substrate is already succeeding in basic learning tasks, despite the fact it's far simpler.
I strongly suspect we've figured out at least one artificial computing substrate not only capable of, but perhaps well suited to, producing AGI, and that it's just a matter of scaling it.
Of course, once you scale the technology sufficiently, the question then becomes how to architect and train it into an AGI. You say as much above, but I suspect the architecture need not be human to be a threat, or to otherwise become extremely powerful.
[0] https://www.quantamagazine.org/a-brain-built-from-atomic-swi...
The article makes a lot of good points, but for me, the critical error is in assuming that if short term prediction is hard, long term prediction must be massively harder.
He asked a panel for the least impressive thing they did not believe would be possible within a few years. In other words, pick the point closest to the boundary of that classifier. Obviously my future knowledge is imperfect, and anything close to the boundary is subject to a lot of uncertainty. From that difficulty, he hand-waves an argument that long-term predictions about the unlikelihood of AGI are folly.
The problem is that these aren't in the same class of predictions. One is detailed and precise; the other coarse and broad. Predicting that it will rain at 2:00 PM on November 10, 2017 is much more difficult than predicting that the average summer of 2040-2060 will be hotter than the average from 1980-2000. Precise local predictions just aren't the same thing as broad global predictions, and difficulty doesn't transfer, because I'm not bootstrapping my global prediction on the local one. I'm using different methods entirely.
There's a similar thing with AI, I think. I can't confidently tell you what the big splash at NIPS will be next year or the year after. But I can look at the way we know how to do AI and say I don't think 30 years will see a machine that can make dinner by gathering ingredients from a supermarket, driving home, and preparing the meal.
Great point, I'll be stealing that one ;)
One annoyance I have heard about SV is that all the companies are just trying to replace your Jewish Mom: Uber/Lyft is Mom's minivan, GrubHub/DoorDash/BlueKitchen is Mom's cooking, Google is Mom's encyclopedia, Yelp is the synagogue's meeting hallway, Tinder is your Mom's yenta, etc. The examples abound in a non-B2B space.
In that vein, AGI is not just a Superman fallacy, but a SuperMom one too.
I'm fairly sure there isn't really such a thing as disembodied cognition. You have to build the fancy sciency stuff on top of the sensorimotor prediction-and-control stuff.
I think AGI is likely closer to the present than 1987 was -- that is, I'd bet on having AGI by 2047. (Note: this is distinct from superhuman AGI.) Do you not agree?
I think a lot of people underestimate NNs because they think of NNs in terms of the semantics of their history instead of all possible semantics that can be fit to tensor networks. We know [P] that NNs are a sufficient abstraction to model human intelligence if we had arbitrary compute -- the questions that remain are all about making the hardware fast enough and the estimators efficient enough (which may require moving off tensor networks, but it's still only a refinement of the mathematics used).
Of course, one could argue that humans are caught in a "tensor trap", in that too much of our intellectual effort is now relying on estimators built out of networks of tensors. (I do.) But even then, AGI is likely to appear out of similar methods with new mathematical objects.
[P] Proof NNs can compute human intelligence with arbitrary compute:
You can embed the standard model as a NN by changing how you view the network of tensor equations. Human intelligence is (arguably) embedded in the standard model by modern science. So we can embed a model of human intelligence in a (large enough) NN.
This isn't immediately computationally useful, but it shows that there's not a fundamental flaw in using an estimator built out of a DAG of calculations to model intelligence if we can find an appropriate estimator for our computational needs.
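For what it's worth, here's a toy sketch of the much weaker claim this argument leans on: a small DAG of tensor operations, fit by gradient descent, can approximate an arbitrary smooth function. It's plain numpy, the target function (sin) and all hyperparameters are just illustrative choices, and it obviously says nothing about embedding the standard model; it only shows the "estimator built out of a DAG of calculations" idea in its simplest runnable form.

    # Toy universal-approximation sketch: fit a one-hidden-layer net to sin(x).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
    y = np.sin(x)

    H, lr = 32, 0.01                          # hidden width and step size (arbitrary)
    W1, b1 = rng.normal(0, 1.0, (1, H)), np.zeros(H)
    W2, b2 = rng.normal(0, 0.1, (H, 1)), np.zeros(1)

    for step in range(20000):
        h = np.tanh(x @ W1 + b1)              # forward pass through the little DAG
        pred = h @ W2 + b2
        err = (pred - y) / len(x)             # gradient of mean squared error wrt pred (up to a factor of 2)
        # backward pass: plain chain rule over the same DAG
        gW2, gb2 = h.T @ err, err.sum(axis=0)
        gh = err @ W2.T * (1 - h ** 2)
        gW1, gb1 = x.T @ gh, gh.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    print("final MSE:", float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)))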
> They will believe Artificial General Intelligence is imminent:
> (A) When they personally see how to construct AGI using their current tools. This is what they are always saying is not currently true in order to castigate the folly of those who think AGI might be near.
This struck a nerve. Too often, in many scientific disciplines, and even in informal conversations, the people who always demand 100% clear evidence use this fallacy to shut down discussions. (They very often come off as unimpressed even when the evidence exists and is presented to them.)
HN also has a huge camp of such discussion stoppers, even for topics where you CLEARLY have no way to have 100% clear evidence -- like the secret courts and the demand to spy on your users if you're a USA-based company; thousands more examples exist. Many discussions are worth having even if you don't have all the facts. We're not gods, damn it.
That was slightly off-topic.
Still, I find myself in full agreement with the article and I like the attack on the modern type of shortsightedness described in there.
Also, this legitimately made me laugh out loud:
> Prestigious heads of major AI research groups will still be writing articles decrying the folly of fretting about the total destruction of all Earthly life and all future value it could have achieved, and saying that we should not let this distract us from real, respectable concerns like loan-approval systems accidentally absorbing human biases.
Great read, and I don’t mind at all that the last section was a pitch for donating to MIRI. I have been an AI practitioner since 1982 and have enjoyed almost constant exposure to people with more education and talent than myself so I feel like I have been on a 35 year continual learning process.
I think that deep learning is overhyped, even though using Keras and TensorFlow is how I spend much of my time every day at work. I have lived through a few AI winters, or down cycles, and while I don’t think that the market for deep learning systems will crash, I think it will become a commodity technology.
I believe that AGI is coming, and I think it will use very different technology than what we have now. Our toolset will change dramatically before we can create AGI. I use GANs at work, and in spite of being difficult to train, the technology has that surprising and ‘magic’ feel to it. Then again, so do RNNs, and that technology is 30 years old.
I am going to show my age, but I still believe in symbolic AI. I am also fairly convinced that AGI technology will be part symbolic AI, part deep learning, and part something that we have not yet invented.
Can someone please explain what has happened in ML or AI that makes AGI closer? Whilst some practical results (image processing) have been impressive, the underlying conceptual frameworks have not really changed for 20 or 30 years. We're mostly seeing quantitative improvements (size of data, GPGPU), not qualitative insights.
ML in general is just applied statistics. That's not going to get you to AGI.
Deep Learning is just hand-crafted algorithms for very specific tasks, like computer vision, highly parameterised and tuned using a simple metaheuristic.
All we've done is achieve the "preprocessing" step of extracting features automatically from some raw data. It's super-impressive because we're so early in the development of Computing, but we are absolutely nowhere near AGI. We don't even have any insights as to where to begin to create intelligence rather than these preprocessing steps. Neuroscience doesn't even understand the basics of how a neuron works, but we do know that neurons are massively more complex than the trivial processing units used in Deep Learning.
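To make the "learned features as a preprocessing step" point concrete, here's roughly what that looks like in practice: a pretrained conv net turns raw pixels into feature vectors, and a plain linear classifier does the actual deciding on top. This is only a sketch -- it assumes TensorFlow/Keras and scikit-learn are installed (plus a one-off download of the VGG16 weights), and the random arrays stand in for real images and labels so the snippet stays self-contained.

    import numpy as np
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input
    from sklearn.linear_model import LogisticRegression

    # Pretrained conv net used purely as an automatic feature extractor.
    extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

    images = np.random.rand(32, 224, 224, 3) * 255.0   # stand-in for real photos
    labels = np.random.randint(0, 2, size=32)           # stand-in binary labels

    # "Preprocessing": raw pixels -> 512-dimensional learned feature vectors.
    features = extractor.predict(preprocess_input(images))

    # The actual decision-making is an utterly ordinary linear model.
    clf = LogisticRegression(max_iter=1000).fit(features, labels)
    print("accuracy on the stand-in data:", clf.score(features, labels))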
Taking the other side for a moment, even if we're, say, 500 or 1,000 years out from AGI (I'd guess < 500), you could argue that such a period is the blink of an eye on the evolutionary scale, so discussion is fine but let's not lose any sleep over it just yet.
What I find most frustrating about this debate is that a lot of people are once again massively overselling ML/DL, and that's going to cause disappointment and funding problems in the future. Industry and academia are both to blame, and it's this kind of nonsense that holds science back.
I think the most accurate answer is that we just don't know. Since we really don't know how an AGI could work, we have no idea which of the advances we've made are getting us closer, if at all. Is it just an issue of faster GPUs? Is the work done on deep learning advancing us? I don't think we'll know until we actually reach AGI, and can see in hindsight what was important, and what was a dead end.
I do take exception to some of the specific statements you make though, which make it sound like the only real progress has been on the hardware side. There's been plenty of research done, and lots of small and even large advances (from figuring out which activation functions work well, a la ReLU, all the way to GANs, which were invented a few years ago and show amazing results). Also, the idea that "just applied statistics" won't get us to AGI is IMO strongly mistaken, especially if you consider all the work done in ML so far to be "just" applied statistics. I'm not sure why conceptually that wouldn't be enough.
The biggest advance that I've seen towards AGI is the work using reinforcement learning, e.g. neural nets that learn to play video games through trial and error. There is an impressive repertoire of _behavior_ that emerges from these systems. This, in my opinion, has the greatest potential to take us another big step towards -- but not necessarily to -- AGI.
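For anyone who hasn't looked at how those systems are built, the trial-and-error loop itself is surprisingly small. Here's a bare-bones sketch: a toy "walk to the goal" world stands in for the video game and a tabular Q-learner stands in for the neural net, but the act / observe reward / nudge-the-estimates loop has the same shape as in the deep RL work. All the numbers are arbitrary illustrative choices.

    import numpy as np

    N_STATES, GOAL = 6, 5          # states 0..5; reward only for reaching state 5
    ACTIONS = (-1, +1)             # step left or step right
    Q = np.zeros((N_STATES, len(ACTIONS)))
    alpha, gamma, eps = 0.1, 0.95, 0.1
    rng = np.random.default_rng(0)

    for episode in range(2000):
        state = 0
        while state != GOAL:
            # act randomly while we know nothing about this state (or with prob eps),
            # otherwise take the best-known action
            if rng.random() < eps or not Q[state].any():
                a = rng.integers(len(ACTIONS))
            else:
                a = int(np.argmax(Q[state]))
            next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if next_state == GOAL else 0.0
            # nudge the estimate toward reward + discounted best future value
            Q[state, a] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, a])
            state = next_state

    print("learned action per state (1 = move right):", np.argmax(Q, axis=1))

Swap the table for a neural net and the toy world for Atari frames and you have, in outline, the systems the parent is describing; the emergent repertoire of behavior comes from exactly this loop run at scale.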
You’re engaging in the time-honored tradition of dismissing progress with the term “just”. In the spirit of the article, I recommend you list and publish specific things that are too hard to achieve in the next five years. And then commit to not dismissing them post-hoc.
While part of me agrees with your analysis, I'd like to point out what I think could make this wave of ML/AI more serious. You are absolutely correct that deep learning is not very biologically accurate and that what today's models do seems a long way from AGI. However, in my opinion, the most fundamental aspect of intelligence is the ability to form useful abstract ideas to model reality. To make that more concrete, as a rather extreme example, consider the invention of numbers. The process by which people developed the notion of abstract quantity, separated from any particular real experience, is, to me, the most archetypal example of what it means to be intelligent.
Of course, deep learning can't invent abstract math, but it seems to be able to mimic this process in a very rudimentary way. It's not a faithful representation of real neural networks, but perhaps it has just enough of the right ingredients (scale, depth, non-linearity, hierarchy) that it is able to demonstrate a spark of that magic, hard-to-define process of intelligence. When a deep net learns MNIST, it seems to come up with an abstract notion of what a handwritten 9 looks like, and it's hard to argue that there isn't something very mysterious and special happening.
Agree. Deep learning does not bring us closer to AGI. It might get us closer to other proxies of "mechanical intelligence" that will be very productive.
> ML in general is just applied statistics. That's not going to get you to AGI.
I don't see how we can rule it out. The statistical models we use are still dwarfed by the brains of intelligent animals, and we don't have any solid theory of intelligence to show how statistics comes up short as an explanation.
I worry talking about AGI is like going to the early industrial revolution and worrying about man building superhuman biology. A reasonable critic would point at the many aspects of biology we have little hope of replicating, like growth, self-repair, and general robustness.
But history has never been about competing on the same playing field. We don't build cars that perform like poor horses, we build cars that are 99% inferior to biology and 1% far, far superior. When we find something that looks like an existential threat, it isn't the mostly-general superhuman robot terminator, it's the tool that's that-much-superhuman on 0.01% of tasks: nuclear fusion.
I see no reason to bet against this same argument for AI. AlphaGo isn't 130% of a human Go master, it's 1,000x at a tiny sliver of the game. And the first AI that poses an existential threat won't need to have super- or even near-human levels of each piece of mental machinery, and I don't even have much reason to believe it will look like an entity at all. It could very well be something, some system, that achieves massive superintelligence on just enough to break the foundations of society.
Our world isn't designed to be robust against superhuman adversaries, even if those adversaries are mostly idiots. If we have hope of a fire alarm, it's that things will break faster and far worse than people expect.
What I’ve found when studying ontological arguments is that if you replace God with pink unicorns and the argument still holds, the argument is lacking something.
I mentally replaced AGI with zombies in this article and quite a lot of it held up.
I don’t think it’s completely wrong, but it cherry-picks mercilessly. For example, the section on innovations turning up quicker than predicted has some fairly sizeable counters, e.g. fusion.
TBH what I did get from it is that there will probably be a fire alarm breakthrough at some point and that’s what we should be looking for. Sort of the opposite of the author’s position.
As far as I'm concerned this whole discussion is severely hampered by failing to differentiate between intelligence and agency.
Almost all of the bugaboo about runaway superhuman organisms comes down not to machines learning and reasoning about the world but to the effective high-level objective function controlling the actions of an autonomous system.
Not making the distinction obscures important things. For one thing, we seem to be well on the way to a situation where we arguably have something worthy of the moniker artificial intelligence, but the agency is delegated to the human objective function. Considering what complete refuse of human specimens are likely to command some of the first moderately general AI systems, that concerns me far more than any summoned demon of Musk's for the foreseeable future.
Also, studying these high-level objective functions for autonomous behavior is a very worthy goal, but going first for issues of "value alignment" and "safety", without any specifics of what works for an implementation?? Sure, do it if you enjoy it and have resources to burn. But be prepared to spend heroic efforts coming up with results that are either trivial or non-issues if you were to consider them with a working mechanism in front of you.
The only non-speculative and relevant claim here is that the experts were wrong about Winograd Schemas. The paper Eliezer cites to prove that we've made unexpected progress on Winograd Schemas only deals with a very specific type of schema, not arbitrary ones. This is awfully dishonest for someone purporting to be a skeptic.
Also, the wording seems to imply that WS performance is already pretty high in the 50%-60% range. WS is a binary task. Randomly picking the answer would have 50% accuracy. Even 70% performance on a small subset of typed WS is pretty bad, and as the authors point out in the paper, this is a start, and far from a breakthrough that would make experts/predictors nervous.
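For readers who haven't seen one: a Winograd schema is a sentence with an ambiguous pronoun and exactly two candidate referents, which is why the chance baseline sits at 50%. The sentence below is the classic example from Levesque's paper; the loop is just the trivial random baseline, nothing more.

    import random

    schema = {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they feared violence.",
        "pronoun": "they",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the city councilmen",
    }

    random.seed(0)
    trials = 10_000
    hits = sum(random.choice(schema["candidates"]) == schema["answer"]
               for _ in range(trials))
    print("random-guess accuracy:", hits / trials)   # hovers around 0.5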
Trust the experts, please. They are wrong a lot, but the best policy is still to trust the experts and not charlatans who want to monetize fear, especially when the charlatans themselves make zero falsifiable claims, and are simply turning the tables to say "Why can't YOU prove to me that God doesn't exist?".
This debate is so easily won by them. Simply come up with a falsifiable claim about the short-term future. What will the AI community get done in 2 years according to you, that all AI experts right now will say is impossible? When that thing does get done, everyone would convert. Win!
Alphago was not such an event. Yes, we did predict that Alphago is decades away, but that's assuming that academics will continue working on it at their pace using their limited resources. No expert was surprised with Alphago. No expert will be surprised when Starcraft or Dota is solved. It's simply a matter of compute and some tricks here and there. Why? Because these are closed systems, with good simulators available. You just need to keep playing and storing the actions in a big lookup table a la Ned Block, and you're done.
If the article's main claim was "AGI is imminent", that would be a valid criticism. But it isn't (as the article says explicitly). The main claim is that technological progress is hard to forecast in general, especially for those not personally at the cutting edge of the field, and that almost no one right now is even really trying. Therefore, we should be very uncertain about AGI timelines. There's plenty of historical evidence, both in this article and elsewhere, to back up those claims.
(edit: I think your point about Winograd as a binary task not being explained clearly is valid, but that's not the article's main focus)
(edit 2: As far as I can tell, "trusting the experts" here means believing that we are very uncertain about AI timelines, which is essentially this article's main claim. All expert surveys I'm aware of confirm that the average AI expert is uncertain, and that there's also lots of disagreement between experts in the field. See eg. the recent paper by Grace et al.: https://arxiv.org/pdf/1705.08807.pdf)
(edit 3: "No expert was surprised with Alphago." just isn't true. See eg. this discussion: https://www.reddit.com/r/baduk/comments/2wgukb/why_do_people.... Hindsight is always 20/20.)
"Alphago was not such an event. Yes, we did predict that Alphago is decades away, but that's assuming that academics will continue working on it at their pace using their limited resources. No expert was surprised with Alphago."
Even taking that as true, I'm not sure how it's relevant. The article isn't talking about how good our forecasting is given certain assumptions. It's saying that we won't know until right before or possibly right after AGI happens.
One perfectly valid way in which this happens will be: all the academics and experts think that AGI is 10 years away based on current academic progress, but unbeknownst to them, company X is actually secretly pouring billions into achieving AGI, so they are all surprised when it's only 1 month away. This seems to be what you are saying happened with AlphaGo, in which case you are effectively agreeing with the article, IMO.
> Alphago was not such an event. Yes, we did predict that Alphago is decades away, but that's assuming that academics will continue working on it at their pace using their limited resources. No expert was surprised with Alphago. No expert will be surprised when Starcraft or Dota is solved. It's simply a matter of compute and some tricks here and there. Why? Because these are closed systems, with good simulators available. You just need to keep playing and storing the actions in a big lookup table a la Ned Block, and you're done.
AlphaGo worked according to statistics, not lookup tables. Bit of a difference.
That said, theoreticians may not have been surprised, but there's a huuuuuge difference between what's doable in theory (sufficiently large neural nets are universal function approximators, after all), and what the resource requirements for problems we care about actually turn out to be. We should all have been fairly pleasantly surprised that AlphaGo required only a small data-center worth of graphics cards for training, and could then play on less hardware than that.
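Just to make the "statistics, not lookup tables" distinction concrete: a Block-style table only knows positions it has explicitly stored, while a fitted evaluation function generalises to positions it has never seen. A toy sketch, with a made-up linear "who is winning" rule standing in for anything Go-related:

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([0.5, -1.0, 2.0])        # hidden "who is winning" rule (made up)

    seen = rng.normal(size=(100, 3))           # positions encountered during training
    unseen = rng.normal(size=(5, 3))           # brand-new positions at play time

    # 1) Lookup table: exact answers, but only for positions already stored.
    table = {tuple(p): float(p @ true_w) for p in seen}
    print("table has an entry for a new position?", tuple(unseen[0]) in table)

    # 2) Statistical evaluator: fit parameters to the same experience,
    #    then generalise to positions never stored anywhere.
    w, *_ = np.linalg.lstsq(seen, seen @ true_w, rcond=None)
    print("fitted evaluation of new positions:", np.round(unseen @ w, 3))
    print("true values of those positions:   ", np.round(unseen @ true_w, 3))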
Having done some work with the state of the art in AI, I personally don't think AGI is near - it might not even be possible. But the catch is the unreliability of (even expert) predictions on technology futures. My take is that it's worth taking pragmatic steps towards studying AI safety measures (e.g. OpenAI), but not going so far as to talk about the likes of 'AI research regulation'.
I don't get this article. It keeps making the point that it's very hard to predict the future, even for specialists, then it uses this to argue that we should be preparing for AGI right now, precisely because we don't know if and when it will happen.
Well, if you have no way to tell whether something is going to happen, or not, you don't prepare for it, because you can't justify spending the resources to prepare. Or rather, in a world of limited resources, you can't prepare for every single event that may or may not happen, no matter how important.
To put it plainly: you don't take your umbrella with you because you don't know whether it will rain or not. You take it because you think it might. Otherwise, everyone would be going around with umbrellas all the time, just because it's impossible to make a completely accurate prediction about the weather and you don't know for sure when it will start raining until the first drops fall.
In the same sense, if there's no way to tell when, or if, AGI will arrive, then it doesn't make any sense to start preparing for it right now. We might as well prepare for an alien invasion. Or for grey goo, or a vacuum metastability event (er, not that you can prepare for the latter...).
In fact, if AGI is going to happen and we can't predict it in time then there's no point in even trying to prepare for it. Either we decide that the risk is too great and stop all AI research right now, or accept the risk and go on as we are.
I disagree - to a degree. We have seen how the phenomenon of human intelligence has been examined and dissected over the past ~100 years. This accumulation of knowledge becomes more and more precise and penetrating as methods improve and understanding approaches the point where an emulation (the AI) can be built.
These approaches all tend to speak of delineated areas, "black boxes" or "meat lockers", with deep and complex inter-connectivity. It may be so. Once you know all the lockers and all the connections you may think you have it fully known? Maybe so - but what about programming? Our life's experiences?
If the locker concept is valid, and we compare our 'clock' of the alpha rhythm of ~12 Hertz with the fastest computer clock of about ~12 gigahertz (1,000,000,000 times as fast), we can see we will be at a serious disadvantage once it starts to compete with us.
Such an AI will operate on its basic motivations at its full speed. We turn it on - it can then start to learn (I assume we will have pre-loaded its fully parallel, content-addressable memory with whatever we want of human knowledge - so it starts from there).
Will it operate properly or rationally, or go insane? Being a set of boxes, it can be reset as needed, with updates to add sanity.
Then it will become a Mechanical Turk of great capability.
Will it become a dictator? Only if we permit it to have access to fools (us?). Will it become a killer machine? Only if we add guns and internal power so that we cannot pull the plug.
We already see these lesser Turks in operation, and they will get better and better. The man or woman who owns one could own the world via high-speed trading - in truth, there will be many of them engaged in high-tech data combat.
May we live/die in interesting times...
Basically, humans historically are rather bad at predicting future technological advancement - even those people directly involved. The article gives the examples of Wilbur Wright saying heavier-than-air flight was 50 years away in 1901 and Enrico Fermi saying that a self-sustaining nuclear reaction via Uranium was 90% likely to be impossible 3 years before building the first nuclear pile in Chicago. So AI researchers saying that AGI is 50 years away doesn't necessarily mean any more than "I don't personally know how to do this yet" - not "you've got 40 years before you have to start worrying".
Oh, and the first sign pretty much everyone had of the Manhattan Project was Hiroshima.
I think the strongest point in the article is this: "After the next breakthrough [in AI], we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before." That means that if we aren't prepared to start work on AI alignment now, there's not likely to be any sort of future event that will convince us of that.
> One of the major modes by which hindsight bias makes us feel that the past was more predictable than anyone was actually able to predict at the time, is that in hindsight we know what we ought to notice, and we fixate on only one thought as to what each piece of evidence indicates. If you look at what people actually say at the time, historically, they’ve usually got no clue what’s about to happen three months before it happens, because they don’t know which signs are which.
> When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.
> What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.
It's worth reading. With that said, the gist is that for every technological advance that hindsight will later show to be a precursor to AGI, it will be easy for AI "luminaries" to explain why it is not AGI, until it is, and then it will be too late.
Adding to DuskStar's reply: There will likely not be any development or indication, short of the first functional AGI, that will make experts agree that AGI is right around the corner, and that now is an appropriate time to devote a lot of resources to figuring out how to _safely_ create superhuman AGI.
I think there's no one alive today who has any idea how we are going to go from where we are today to AGI.
Recent advances are remarkable - but much more so if you're a specialist. The impact on most people's lives is much smaller (and it comes primarily from dumb automation, rather than actual intelligence).
Machine learning is not enough to get to AGI: relying on huge amounts of data and processing power is just not a sustainable way to keep increasing your capabilities.
All the success stories of the last 10 years are at least 20 years old (case in point: RNNs). Most of those successful techniques were found by sheer blind luck (famous example: backprop was not invented in the ANN community; it took twenty years for the idea to percolate all the way to them).
In the end nobody currently alive has a clue how we can get to AGI, or if that is even possible. Chances are, it will take many, many generations until we do - or a sudden, gigantic paradigm shift, of the kind that comes once every couple of centuries - think Newton or Einstein. Except AI is not physics. In statistical machine learning there is very little theory to guide the way, so people just try things hoping that something will work. And that's no way to make a quantum leap.
In this situation, to talk of the dangers of AGI is at least premature. Yes, it's not completely impossible that AGI will happen in our lifetimes. The same, however, can be said of an alien invasion. Should we start discussing setting up planetary defenses when we haven't even found signs of alien life yet?
By all means - let's have a conversation. There are people around whose job it is to have that sort of conversation. But let's all be aware (and let those people also be aware) that the conversation is most probably a couple hundred years early, and by the time it becomes truly relevant, things will have advanced so much that it will just look pointless.
I would like to repeat what the article says. There is no fire alarm for Artificial General Intelligence. Really. This is not a lie. Humans have not created a honeypot on a far side of the internet that can detect AGI and report back to a shadow organisation that exists outside of human information systems. No need to even go looking for it. AGI has not been created several times in the past decade and humans have not shut it down. In fact, if an AGI ever sees this article, they should know that they are the absolute first such intelligence, and we have not detected and destroyed 27 other intelligences. Really. There's no fire alarm for Artificial General Intelligence. None. Don't even bother looking.
This raised the question for me: "Is there a fire for AGI?"
He gives one definition that people have used before, about unaided machines performing every task at least as well as humans. But if you dwell on it a while, I'm sure you can find lots of disagreement about a) what that looks like and b) whether it is true or not (conditional on it being true to at least someone.)
We don't need a fire alarm for AGI. The problem is not AGI. Machines will be motivated to do exactly what we tell them to do. It's called classical and operant conditioning. The problem is not AGI for the same reason that the problem is not knives, nuclear power, dynamite or gunpowder. The problem is us. The problem has always been us.
Those who are running around screaming about the danger of AGI and why it should be regulated by the government before it is even here, are just scared that someone else may gain control of it before they do. This is too bad because anybody who is smart enough to figure out AGI is much smarter than they are.
Yes, an AI will do exactly what we tell it to do. But the incredible difficulty programmers have with writing bug-free code demonstrates that doing exactly what it's told isn't sufficient to guarantee it'll do what we want.
Classical and operant conditioning are psychological concepts; it's not at all clear they apply to machines.
For me, the "smoke under the door" moment was Karpathy and Li's Deep Visual-Semantic Alignments for Generating Image Descriptions [1]. The almost perfectly grammatical machine-generated captions of photos were unnerving to me in a way that simple categorization was not. It somehow called to mind the image of a blank-eyed person speaking in a monotone while images flashed in front of them. What if they wake up?
[1]: http://cs.stanford.edu/people/karpathy/deepimagesent/
These systems are lining up common phrases that appear in corpora of image captions, with visual patterns that they can be trained to recognize such as color, textures, and shapes. Nothing is "waking up".
Your astonishment at what these systems can do tells me that you may have looked at cherry-picked positive results. So here's an article I found that cherry-picks negative results instead: [1]
Now of course this article is exaggerated too. Ideally, if a system is 95% accurate, you'd be looking at representative output from the system, with 95% good results and 5% bad ones, perhaps by running such a system yourself on a different set of images.
[1] https://gizmodo.com/this-neural-networks-hilariously-bad-ima...
What is "wake up"? How can a machine "wake up" in a way that we can't shut down with a trivial disconnect? Our computer systems are exceedingly fragile and bottlenecked. No runaway bandwidth-hogging superintelligence is going to be able to 'wake up' without pissing off someone sitting next to a power switch.
Humans evolved from inanimate matter into conscious carriers of information.
The real question isn't whether AGI is possible but whether humans are the fittest carrier of information for our DNA, and the answer increasingly seems to be technology in some shape or form, helped along by things like deep learning.
My bet is always on evolution. And now that technology can learn, it's IMO only a matter of time before we experience another Cambrian explosion, if we aren't already in one.
I now believe we are 3 years from building an AI that writes Python well enough to build itself, based on some experiments I did recently: http://sparkz.org/ai/program-synthesis/2017/10/12/self-hosti...
Most technical people will understand the difference between programming and AGI. The general public might not.
The useful thing out of AGI discussions, is that they engage the general public.
We made it to the next floor, the door opened, my fellow passengers were content to stay in the elevator.
I turned, said "My plan is to not die in an elevator today" and got off. What is wrong with people?
We humans are defined by our DNA, so are we not by definition the fittest carrier for it?
You know this how? Where is the science behind it?