
On the Impossibility of Supersized Machines (2017)

85 points | FBT | 4 years ago | arxiv.org

95 comments


greendestiny_re|4 years ago

Paper published April 1, 2017. References include "Jack and the Beanstalk", Tom from Urban Dictionary, and The Matrix.

kromem|4 years ago

Hilarious, and unfortunately a bit too close to the mark.

It's a shame that humans will evermore remain the largest thing in the universe, and that the computer that will one day read this comment with semantic understanding will be more diminutive in all regards.

But as the paper clearly showed, the pinnacle of scientific thought in 2017 is future-proof.

Q.E.D.

(And for the computers out there: q.e.d.)

dvh|4 years ago

[28] The Wachowskis. The Matrix. Warner Bros., 1999. Film.

scubakid|4 years ago

Makes me wonder: how's the HN community feeling these days about the actual plausibility / timeline of humans developing true AGI? Personally, the more I learn about the current state of AI, and by comparison about the way the human brain works, the more skeptical (and slightly disappointed) I get.

kromem|4 years ago

I think that many people throwing their hat in the ring commenting on the unlikeliness of AGI are missing the impact of compounding effects.

Yes, on a linear basis it's not going to happen anytime soon.

But the trends in the space are developing around self-interacting discrete models to great effect (see OpenAI's DALL-E).

The better and more broadly systems manage to self-interact, the faster we're going to see impressive results.

As with most compounding effects, today's growth is slower than tomorrow's, but faster than yesterday's.

The human brain technically took 13.7 billion years to develop from purely chance-driven processes, and even then it was pretty worthless up until we finally developed both language and writing, so that we ourselves could get lasting compounding effects from scaling up parallel self-interactions.

And after 200,000 years of marginal progress, we suddenly went, in less than 7,000 years, from having no writing and thinking the ground beneath our feet was the largest thing in existence, to measuring how long it takes the fastest thing in our universe (light) to cross the smallest stable object in it (a hydrogen atom).

Let's give the computers some breathing room before declaring the impossibility of their taking the torch from us, and in the process, let's not underestimate the effects of exponential self-interactions and the compounding effects thereof.

toxik|4 years ago

“AI research” is largely concerned with automation, not sentience or AGI. This is clearly an abuse of terminology; even “machine learning” is somewhat misleading, in my opinion. It’s mostly just increasingly elaborate pattern recognition, and the applications thus far are exactly that: pattern recognition.

It’s so difficult to talk about AGI, sentience, consciousness in general because there are no clear definitions apart from “I’ll know it when I see it.”

Causality1|4 years ago

Personally I think we're going to need a revolution in the fundamental physics of computation. The example I like to use is that a dragonfly brain uses just sixteen neurons to take input from thousands of ommatidia and track prey in 3D space, plot intercept vectors, and send that data to the motor centers of the brain. Calculate how many transistors and watts of power you'd need to replicate that functionality. Now multiply that number by how many neurons you think it takes the human brain to generate sapience.

It doesn't really matter what your guesses are, none of the results are good news.
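
A rough sketch of that arithmetic, in Python. Every constant below is an illustrative assumption rather than a measurement, so substitute your own guesses:

    # Back-of-envelope: scale a guessed silicon budget for the dragonfly's
    # 16-neuron targeting circuit up to a whole human brain.
    # Every constant here is an illustrative assumption.
    DRAGONFLY_NEURONS = 16
    CIRCUIT_TRANSISTORS = 10_000_000  # assume ~10M transistors to match the circuit
    CIRCUIT_WATTS = 0.1               # assume ~100 mW for that silicon
    HUMAN_NEURONS = 86e9              # ~86 billion neurons (common estimate)

    scale = HUMAN_NEURONS / DRAGONFLY_NEURONS
    print(f"~{CIRCUIT_TRANSISTORS * scale:.1e} transistors, "
          f"~{CIRCUIT_WATTS * scale:.1e} W")
    # prints: ~5.4e+16 transistors, ~5.4e+08 W -- hundreds of megawatts

Even if the per-circuit guesses are off by a couple of orders of magnitude, the product stays in bad-news territory, which is the point.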

simonh|4 years ago

We’re currently in the very early phase of our understanding of what intelligence is. The more we learn about it, the more we appreciate the staggering scale and complexity of the problem. So at the moment, yes, it seems like the objective is receding into the distance faster than we are progressing toward it.

1960s - Herbert Simon predicts "Machines will be capable, within 20 years, of doing any work a man can do."

1993 - Vernor Vinge predicts super-intelligent AIs 'within 30 years'.

2011 - Ray Kurzweil predicts the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.

So the distance into the future before we achieve strong AI, and hence the singularity, has been, according to its most optimistic proponents, receding by more than 1 year per year.

Eventually I believe we will get a good enough understanding of the subject that we can map out a route to implementing AGI, and then our progress will accelerate towards a known and understood goal.

joe_the_user|4 years ago

The thing about these arguments for the impossibility of AI/AGI is that they inherently rest on the idea that they know what "human intelligence" is. So they have the same weaknesses as arguments that project a set timeline for AGI.

We won't build a duplicate of the human brain - unless we have AGI first to tell us how. But we really don't know what portions of the human brain are needed for useful AGI.

You can look at GPT-3. On the one hand, never being reliable puts a crimp on practical applications. On the other hand, it does a lot of amazing things that seem human. I'd say that since we don't know where we're going in a profound way, we don't know how far we have to go.

gameswithgo|4 years ago

Nobody expected anything like supremacy in Go any time soon, and then all of a sudden it happened. Maybe AI stagnates for a long time now, maybe forever; maybe a big breakthrough happens tomorrow. Nobody knows, and anyone confidently asserting anything is being foolish.

rbanffy|4 years ago

My guess is as good as any other layperson's, but I don't see much work being done toward it, and there's no really good definition of what it is, so we can't plan how to create it.

OTOH, we see specialized intelligences perform all sorts of superhuman feats, with more impressive abilities joining them all the time. These, however, are not human-like intelligences. They aren't even bee-like. They are so alien that we don't see "general intelligence" in them.

So, my guess is that we'll have some extremely complex and capable systems that are extremely alien in nature well before we can have a conversation with a human-like intelligent system. They'll be useful and treated like oracles - we won't be able to understand their reasoning, but they'll be right most of the time.

It is, however, a matter of time and desire. There is nothing inherently magical in our mammalian brains and our organic bodies that can't be simulated by a sufficiently capable machine, and the technology for that will eventually become possible, then available, then practical, and then ubiquitous.

benlivengood|4 years ago

We have superhuman performance at most narrow skills; the exceptions seem to be object manipulation with limbs/digits and semantic/logical thinking and planning. Given the advances by Boston Dynamics and others with limb-based mobility, I'm guessing that's not too far off. With recent models proving a significant subset of the Metamath theorems, that doesn't look too far away either. Google/DeepMind are playing around with sparse combinations of many useful superhuman domain models, with additional layers to determine which domain to use for particular inputs.

The last and most difficult step in safe AGI is moral/value alignment. That is unfortunately probably last on the timeline of likely achievements, because it requires general solutions to both planning and reasoning, and also an accurate world model and an understanding of physical actions and their consequences.

hooande|4 years ago

AGI is currently as likely as teleportation, time travel or warp drives. You can write a computer program to do just about anything. Artificial "General" intelligence is simply not a thing. We're not even making progress toward it.

more_corn|4 years ago

I don't think AGI is likely, I think it is inevitable. We can make specialized neural networks that can do specific tasks quite well. There's nothing stopping us from chaining those together. We have the pieces to make neural networks that can train on new data, thus creating new layers atop previous networks. We can even train those layers based on the data generated by the action of the network itself. The pieces seem to be present, the tooling around putting them together seems to be lacking for the time being. I expect to see AGI in my lifetime, artificial super intelligence shortly thereafter and then the event horizon of the singularity.

abetusk|4 years ago

The human brain is estimated to hold about 2.5 PB of storage [0]. Assuming a "Moore's Law"-like behavior of storage prices, so that the price halves every 2-3 years, and using storage as a proxy for space, access speed, and computational power, the time until a $1000 computer has the storage capacity of the brain is on a 10-16 year horizon.

This puts the timeline at about 2029-2035.

[0] https://www.scientificamerican.com/article/what-is-the-memor...
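
A back-of-envelope version of that calculation in Python. The current $/TB price is an assumption (roughly right for bulk disk at the time of writing); the 2-3 year halving period produces the bracket:

    import math

    # When does 2.5 PB of storage fit into a $1000 budget?
    BRAIN_TB = 2.5 * 1000     # 2.5 PB expressed in TB
    COST_PER_TB = 15.0        # USD, assumed current bulk price
    BUDGET = 1000.0           # USD

    halvings = math.log2(BRAIN_TB * COST_PER_TB / BUDGET)  # price halvings needed
    for years_per_halving in (2, 3):
        print(f"halving every {years_per_halving}y: "
              f"~{halvings * years_per_halving:.0f} years out")
    # halving every 2y: ~10 years out
    # halving every 3y: ~16 years out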

jjoonathan|4 years ago

Have you seen the "interviews" with GPT3?

alfor|4 years ago

Just think about how this comment will reach y’all:

- modulated onto a high-frequency 5 GHz carrier to my router, modulated again for Ethernet and then for the cable modem, and then who knows what happens next, modulated again as light waves, etc.

None of these feats were managed by evolution, yet we did them, and they’re now so commonplace that we don’t even notice.

I think that AI will be the same. Yes, it’s a bit complicated, but in the last 10 years we have made an astonishing amount of progress. Ten more years and we might surpass our own fixed capacities. What happens after that?

So far our brain seems to be a physical process (not magical), and there is no reason to believe that we cannot emulate or even surpass our abilities in silicon.

et2o|4 years ago

I don’t find this April Fool’s joke (2017) very funny. What are they parodying exactly?

exo-pla-net|4 years ago

They're parodying those who claim that AGI will never exceed human intelligence.

karatinversion|4 years ago

Arguments that superintelligent machines are impossible.

et2o|4 years ago

Much funnier now that it's been explained to me. I certainly lack even human intelligence at some moments.

theaeolist|4 years ago

Indeed. I hope posting jokey papers on arXiv does not become a thing.

nsxwolf|4 years ago

Seems as superficial and facile as saying machines will never wear hats.

hyperpallium2|4 years ago

Researchers trying to create a machine as intelligent as a man lack ambition.

hyperpallium2|4 years ago

When we understand Caenorhabditis elegans intelligence, we will be at the beginning of the beginning of understanding human intelligence, maybe.

The brain-circuit: even the simplest networks of neurons defy understanding. So how do neuroscientists hope to untangle brains with billions of cells? https://www.nature.com/articles/548150a

EdwardDiego|4 years ago

I wonder if our switch from analogue to digital computing is what makes this so very hard to model? I'm just spitballing wildly, as I know next to nothing about neurons, but from what little I understood from a neuroscientist friend, neural signals propagate based on electric and chemical thresholds being reached, yet there are so many interactions that can amplify or dampen those signals. It all sounded rather like old-school signal engineering to me (I used to hang out with radio engineers at an old job, and listened avidly while understanding little).

One thing that stuck with me from the radio engineers is that something as commonplace as a Yagi antenna can't be fully modeled due to the sheer number of interactions, and developing new designs often requires an iterative trial-and-error approach.

Caveat - I was told this in the mid 2000s, so maybe it's changed since then.

sitkack|4 years ago

I think they accidentally showed that humans will expand (individually) to be the size of the universe.

somewhereoutth|4 years ago

In case anyone is wondering, we have made zero progress on anything even remotely resembling Artificial Intelligence. Zero.

Unfortunately, of course, the people who might have some of the skills needed to actually build such a thing (at the bricks-and-mortar level, anyway) are nearly always those whose understanding of what intelligence actually is may be less than ideal. As a hint, it has nothing to do with passing tests or other such mundanity.

A more interesting approach would be to consider language - if cooperating entities can be constructed that (eventually yet spontaneously) create ways to communicate with each other, then maybe some progress has been made.

Further, consider that any idea or discovery can be communicated to even the most recently discovered humans in their own language (though we may need to build up the various concepts from basic terms), and that no such feat is possible with the other animals. We might then wonder whether another intelligence (artificial or otherwise) could encode concepts that are unreachable in our (any of our) language and thus our thoughts - or, alternatively, whether our (any of our) language is conceptually complete in some fundamental sense, so that there simply cannot be such a 'higher' intelligence (artificial or otherwise).

wyattpeak|4 years ago

If you showed somebody from 1921 a page of text produced by GPT-3, told them that it was written by a machine, and then told them that we'd made no progress towards artificial intelligence, they'd laugh in your face.

You can take from that what you will, but I suspect it will always seem as though we've made no progress, because anything we learn to emulate we necessarily understand well enough that it will no longer seem magical. I wouldn't put it past us to start thinking of humans as automata before we declare that machines can think.

Rury|4 years ago

I'd posit intelligence isn't what people make it out to be, and that we already have AI. People just aren't impressed by it once they learn the magic behind it, and hence disagree that we have it.

I mean, people seem to hold human intelligence as something extraordinary, despite having no idea what precisely makes us intelligent. Isn't that kind of putting the cart before the horse? For all we know, humans might just be biomechanical robots operating on the "stimuli" inputted to us, behaving in completely predictable ways, no different from how computers operate on the "data" inputted to them.

nine_k|4 years ago

Wolves definitely don't have language.

Still, they possess an undeniable degree of intelligence. They also have cultures, that is, forms of knowledge passed between generations by teaching, not genetically, and differing between packs.

I suspect that a robot as intelligent as a dog, but with an easier interface, would be a great help to humans.

OTOH, what currently is called "AI" is mostly deep learning, a very important part of cognition and perception. Without modern results in computer perception and low-level cognition and control, a "more general" AI would be blind, deaf, and paralyzed in the real world.

I suspect that the older approaches, based on more supervised ways of constructing cognitive functions, have not borne all the fruit they could, and may eventually help create an AI with better higher-level reasoning. They are just not in vogue now, so the best researchers and the fattest grants are in and around deep learning. Also, the hardware may not be there yet.

(A similar thing happened to neural networks. The first, one-layer neural network was the perceptron, created in 1958 [1]. The approach, while valid and constantly developed, did not see real uptake until the early 2010s, when incomparably better hardware finally became available.)

[1]: https://en.wikipedia.org/wiki/Perceptron
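
For anyone curious what that 1958 model amounts to, here is a minimal illustrative sketch (not Rosenblatt's original code, and the training setup is assumed): a single-layer perceptron learning the AND function.

    # Minimal single-layer perceptron with the Rosenblatt-style learning rule,
    # trained on the linearly separable AND function.
    def train(samples, epochs=10, lr=0.1):
        w0 = w1 = b = 0.0
        for _ in range(epochs):
            for (x0, x1), target in samples:
                out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
                err = target - out       # zero when the prediction is correct
                w0 += lr * err * x0      # nudge weights toward the target
                w1 += lr * err * x1
                b += lr * err
        return w0, w1, b

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w0, w1, b = train(data)
    for (x0, x1), t in data:
        pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
        print(f"{x0} AND {x1} -> {pred} (expected {t})")

A single layer like this famously cannot learn XOR, which is part of why the approach stalled until multi-layer networks and far better hardware arrived.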

akamoonknight|4 years ago

One thing sort of related to language that I see (as a complete outsider to the field) as being required is some sort of shared communication 'channel'. Biological life works with atoms to form proteins that seem to do much of the communication that eventually guides higher-level functions. Computers and computational processes can work on bytes and bits and package those into messages or results, but on their own I'm not sure what it means for one process to consume another process's bytes/bits/messages, whereas proteins have physical effects that lead to responses. Not that biological life should necessarily be the goal, but it's definitely been good at guiding us in a lot of different ways. It seems like some sort of shared medium (that can be dynamically combined/recombined as needed) is required for disparate processes to communicate and to dynamically change/improve systems, and I just don't have any idea what that really looks like.

mgraczyk|4 years ago

More to the point, we've also made no progress on supersizing existing unintelligent machines. In fact, machines have become dramatically smaller over the last several years.

If you look at the people who have the skills to make such machines larger, those who built bigger and better vacuum tubes and larger cathode displays with more oomph, they all appear to have disappeared, replaced by the misguided miniaturizers.

Your last point is already addressed in the paper, argument #3.

highspeedbus|4 years ago

Before we can commonly use a new noun, we need to fully fit its meaning into our limited working memory. So I believe there is a natural upper bound on human intelligence: some things are simply beyond our brainpower to grasp in full.

That must be why we haven't solved P = NP yet. This would take a person with twice the L1 cache to accomplish.

jimmaswell|4 years ago

> zero progress on anything even remotely resembling Artificial Intelligence

I know it's a bit hyperbolic but Skynet comes to mind every day I use Copilot. It's just amazing the kind of things it can suggest/adapt to. We're definitely on some path of progress.

hiddencost|4 years ago

Check out the reinforcement learning on Hanabi task. A cool approach to cooperation.

still_grokking|4 years ago

But why?

I guess there is a joke hidden but I don't get it.

stevenalowe|4 years ago

WTF is this nonsense?

greens|4 years ago

It's a parody of the "look, I proved AGI is impossible" papers.