At the Biennale in Venice (one of the most important art exhibitions in the world) I saw a work that looked like this:
There was a metal frame holding two glass plates with Venetian sediment (sand, soil, mud) in between. In the center another metal frame formed a hole. There were also two PCBs with ATmega microcontrollers.
In the accompanying text the artist claimed she controlled the biome of the soil with an AI, using various sensors and pumps.
This was clearly fake, as you could see nothing of the sort on the PCBs.
Accidentally (?) she managed to create the best representation of AI I have seen in art: all that counts is that you call it AI, even if it is a simple algorithm. AI is the phrase behind which magic hides, and people love magic. Everything that has the aura of “humans don’t fully understand how it works in detail” will be used by charlatans, snake-oil salesmen and conmen.
If even artists slap “AI” onto their works to sell them, you know we are past the peak now.
This happened right before the AI winter of the late 80s: AI (in the form of expert systems) solved a number of hard problems and was hyped as being able to solve every problem. Reality set in when we figured out:
1. It didn't scale, and
2. Getting 80% of the problem solved was easy, but getting that last 20% was very, very hard. Maybe several orders of magnitude harder than the first 80%.
Nowadays we don't seem to have problem 1 quite so much, but problem 2 is still there in a big way. Witness self-driving cars, where driving on an interstate highway in broad daylight is easy, but driving through a snow-covered construction zone at night is impossible. Or just dealing with a bicyclist on the road without killing them.
In 1949, some years after the invention of neural networks, Norbert Wiener, one of the leading minds of the time, was convinced that AI (AGI, as you may call it) or a full understanding of the brain was no more than five years away. Alan Turing thought Wiener was delusional, and that it might take as much as fifty years. Seventy years later, we are nowhere near insect-level intelligence.
I don't see any fundamental barrier preventing us from achieving AI, but if someone from the future came to me and said that AI will be achieved in 2130, I would find that quite reasonable. If they said it will be achieved in 2030 or 2230, I would find those equally reasonable. Our current scientific understanding is that we have no idea how far we are from AI, we don't know what the challenges are, and we don't even know what intelligence is. We certainly have no idea whether the approach we are now taking (statistical clustering, AKA deep learning) is a path that leads to AI or not.
In the sixties, the leading minds of that time were also working hard on the problem, and they did not think it any further away than we do today. That some people are optimistic is irrelevant. The fact is that we just have no idea.
> Seventy years later, we are nowhere near insect-level intelligence.
That's arguable: for instance, we have the entire connectome of C. elegans mapped out; we can easily simulate it, and it seems to act the same as the actual nematode. So, in one sense, we are at that level.
However, we still have no clue how such a simple system actually works to produce the level of "intelligence" it has. So in that sense, we're not at that level at all.
> We certainly have no idea whether the approach we are now taking (statistical clustering, AKA deep learning) is a path that leads to AI or not.
One clue we do have:
We may not be on the right path with that method; it's something the "grandfather" (or whatever) of AI, Hinton, has mentioned, and which I have brought up before:
Namely, we currently have no understanding of any mechanism by which biological neural networks could implement anything like "backpropagation". As far as I understand the current state of research, no such mechanism has been found.
It's also one of the leading reasons why our current artificial neural networks consume so much power, as compared to biological systems...
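To make concrete what that missing mechanism computes: backpropagation is just the chain rule applied layer by layer, pushing an error signal backwards through the network. A minimal sketch, assuming nothing beyond numpy (the tiny XOR network and all parameters here are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn XOR with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(3000):
    # Forward pass.
    h = sigmoid(X @ W1)          # hidden activations
    p = sigmoid(h @ W2)          # predictions
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: chain rule, propagating the error backwards.
    dp = 2 * (p - y) / len(X)    # dLoss/dp
    dz2 = dp * p * (1 - p)       # through the output sigmoid
    dW2 = h.T @ dz2
    dh = dz2 @ W2.T              # error sent back to the hidden layer
    dz1 = dh * h * (1 - h)       # through the hidden sigmoid
    dW1 = X.T @ dz1

    W1 -= 1.0 * dW1              # gradient descent step
    W2 -= 1.0 * dW2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

It is the backward pass (reusing the exact forward weights, transposed, to route error signals) that has no known biological counterpart.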
But Wiener's Cybernetics withered on the vine, or floated off into fluffy "Second Order" cybernetics.
There was an experiment (I don't have the details to hand at the moment, I'm sorry), but Gordon Pask and someone else made a cybernetic "machine" out of a dish of chemicals and got it to grow an "ear": filaments that were sensitive to certain sound vibrations, much like the hair cells in your inner ear!
If you really think about what they did (and you have to know some Cybernetics to understand it) then it's actually pretty scary. Like more-scary-than-atom-bomb scary.
I'm only mentioning it here because we're about to need to grapple with this sort of thing in a minute or two...
> Deep learning algorithms have proven to be better than humans at spotting lung cancer, a development that if applied at scale could save more than 30,000 patients per year.
It's not easy to scale deep learning, because deep neural nets have a very strong tendency to overfit to their training dataset and are very bad at generalising outside it.
In a medical context this means that, while a particular deep learning image classifier might be very good at recognising cancer in images of patients' scans collected from a specific hospital, the same classifier will be much worse at the same task on images from a different hospital (or even from a different department in the same hospital).
To overcome this limitation, the only thing anyone knows that works to some extent is to train deep neural nets with a lot of data. If you can't avoid overfitting, at least you can try to overfit to a big enough sample that most common kinds of instances in your domain of interest will be included in it.
So basically, to scale a diagnostic system based on deep-neural-net image classification to the national level, one would have to train a deep learning image classifier with the data from all hospitals in that nation.
This is not an easy task, to say the least. It's not undoable, but it's not as simple as having someone at Hospital X download a pretrained model in TensorFlow and train its last few layers on some CT scans.
In my opinion the term intelligence itself is misplaced for machine learning tasks. Every problem that is solved with ML and "big data" appears to me to be a perception problem (which isn't surprising, because the mechanism is inspired by human vision rather than cognition, so perception is what it lends itself to naturally).
As a specific example, a few months ago OpenAI released their text generation tool and branded it as "too dangerous to release", claiming it could generate believable texts.
But what it generated was simply natural-sounding gibberish. There were plenty of sentences in the text along the lines of "before the first human walked the earth, humans did...".
What lies at the core of intelligence, for me at least, is understanding semantics. An intelligent system can recognise the sentence above as flawed because it can extract meaning.
Everything coming out of the field of ML seems to me just like sophisticated statistics. In many ways symbolic AI to me still seems more valuable, profit aside.
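The "sophisticated statistics" point can be made with the crudest possible text generator. A bigram Markov chain (the corpus below is made up, and this is obviously not OpenAI's model, which was a large neural network) produces exactly this kind of locally fluent, globally meaningless text:

```python
import random

# A tiny made-up corpus; the generator only knows which word
# followed which, with no model of meaning whatsoever.
corpus = (
    "the first human walked the earth and the first machine "
    "walked the earth before the first human built the machine"
).split()

# Map each word to the list of words that followed it.
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Emit n words, each chosen only from words that followed
    the previous word somewhere in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxt = followers.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 12))
```

Every adjacent word pair is locally plausible (it occurred in the corpus), yet nothing checks whether the whole sentence means anything: natural-sounding gibberish by construction.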
I agree. Extracting meaning is NOT a math problem; meaning comes from the human context, and context is infinite. Hence different humans extract different meanings.
I think AI and ML are great for processing large amounts of data and looking for patterns. Patterns on their own don't mean anything though, it's always up to us to interpret them.
Right, that tool made gibberish and didn't understand much of anything.
AI research actually started by focusing on symbolic AI but eventually it was found to be too difficult to define all of the symbols. See the Cyc project.
AGI as a field, as distinct from narrow AI/narrow ML, has been making useful but not mind-blowing progress for decades. The sidebar and recent posts on Reddit's r/agi have useful links for learning about the field. More and more posts on r/machinelearning also cover general-purpose tools that address some of these problems, like better semantic understanding.
There is a promising strain of research that is focusing on core AGI requirements. One of the big challenges is bridging the gap between low level sensory information and high level concepts. This is known as the symbol grounding problem. In my mind the approaches tackling that type of challenge have a lot of promise. And the amount of research in that area is growing.
In the text generation tool outlined above (and indeed many of the convnet-based visual networks), the hidden layers are there precisely to extract 'meaning'. The lower layers (closer to the source input) deal with syntax and feed upwards to hidden layers that extract semantic features, which in turn feed upwards to more layers, each with a bigger overview of the semantic features and thus ultimately the context. That's the idea anyway.
I can attest to this. While doing research at a tier-1 university, I saw that all the professors were mildly disgusted by the hype pushed out by startups, and even by Google's own internal marketing department.
Nonetheless, they too are minting the same nonsense in the "introduction" sections of academic papers. It's a clear case of everyone playing the game: "I have to play or be left behind."
I used to work on fundamental molecular microbiology. We looked at what happened when DNA replication went wrong in E. coli.
What I used to do when writing or speaking about it was to start with cancer or antibiotic resistance as if anyone in my field gave a crap about either of those topics. Sure, we do care about those things in the broad sense, but we didn't consider ourselves to be on the front line of solving either of those problems.
The author seems confused about what artificial general intelligence is. People have not meaningfully moved towards AGI - it's still a distant pipe dream.
The closest we've gotten is probably a Dota bot that's pretty good as long as you give the bot a huge advantage. Which is an incredible piece of technology, but about as close to AGI as an ant is to a human.
The ant-to-human analogy is surprisingly apt in a way you might not consider, though: evolutionarily speaking, ants and humans diverged relatively recently, if you count from the beginning of life. In that way, we might also be closer to AGI than some might think.
Agreed. This paragraph (in an otherwise insightful essay) was particularly jarring:
"Remarkable things are happening in the field of artificial (general) intelligence. Deep learning algorithms have proven to be better than humans at spotting lung cancer."
But that's precisely the point. A microtargeting solution for McDonald's isn't A[G]I, and as the article states, everyone knows it. But the brand power of calling something AI is too strong to resist.
The hype is BS but narrow AI in the context of automation is here.
Jobs are so specialised nowadays (driving, cashiers, fulfilment, paralegal work, diagnostics...) that a narrow AI (i.e. a glorified automation algorithm) that does just 10% better at a lower cost will take over the job.
The confusion is real (AI, AGI, Terminator...), but pattern-recognition software powered by big data has already proven its business value and is here to stay.
> The technologists know it’s bullshit. Fed up with the fog that marketers have created, they’ve simply ditched A.I. and moved on to a new term called “artificial general intelligence.”
Not to detract from an otherwise excellent BS takedown, but unfortunately the author fails to mention that there’s a non-zero possibility that AGI itself is merely taking the bullshit to the next level.
It continues to astound me how some technologists actually believe AGI is not just inevitable but around the corner. From my naive perspective (as a machine learning rank amateur but with several decades of experience as a professional human being), all I see is machines that can do some form of pattern recognition, but nothing resembling the common sense that the words “general intelligence” seemed to indicate at one point.
Minor quibbles about truth and meaning of words aside, I have to support any article that skewers the soft underbelly of the phony AI ecosystem as effectively as this one does.
The real issue we are facing is that everything we thought was not going to be pattern matching and tree search has turned out to be pattern matching and tree search. I remember my father telling me computers were never going to be able to play chess, for example, because it required creativity. Nowadays a neural network with tree search plays chess that looks remarkably human. A lot of problem domains have fallen to what is basically pattern matching and tree search.
Extrapolating the trend of the last 30 years, there is evidence that computers will be able to solve every task a human can using pattern matching. If that isn't AGI, it might turn out to be better than intelligence.
The technological future is unknowable, so believing AGI is certain is too much. But believing it certainly isn't around the corner is also too little. If computers can do anything a human can intellectually, they have reached AGI. The list of discrete tasks (games, decision making once the parameters are defined) a computer can't do is a very short list.
If someone finds an objective function for deciding which decision parameters are important, AGI could be upon us very quickly. As a postscript, I think people radically overestimate human intelligence.
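For a sense of what "tree search" means at miniature scale, here is a minimal minimax-style search for the game of Nim (take 1 to 3 objects per turn; whoever takes the last one wins). It is a toy, but it is the same exhaustive game-tree search that, combined with learned pattern matching, plays chess:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(heap):
    """True if the player to move can force a win from `heap` objects."""
    if heap == 0:
        return False  # the previous player took the last object and won
    # Search the game tree: a position is winning if any legal move
    # leaves the opponent in a losing position.
    return any(not wins(heap - take) for take in (1, 2, 3) if take <= heap)

def best_move(heap):
    """Pick a move that leaves the opponent in a losing position."""
    for take in (1, 2, 3):
        if take <= heap and not wins(heap - take):
            return take
    return 1  # losing position anyway: take the minimum

for h in range(1, 9):
    print(h, "win" if wins(h) else "loss", best_move(h))
```

The search rediscovers the classical result that multiples of 4 are lost for the player to move; everywhere else, the winning move leaves a multiple of 4 behind.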
While it's true the AI hype is going a bit overboard, couldn't one make the argument that humans, and to a lesser extent other animals, are just a collection of responses to pattern recognition? When I see a thing that looks like this, I should eat it; when I see a thing that sounds like that, I should run away. After all, what is common sense but "believe it when I see it" thinking, which is most certainly pattern intuition? The overhyped part is that there are armies of lowly paid Mechanical Turk-tier workers tagging all these datasets so that these pattern recognition algorithms have something to train on.
But also the whole concept of AI is so deeply fascinating and even frightening to people that maybe there is no other way than to buy into this. Once it happens, everybody wants to be a part of it ;)
@mindgam3: “Minor quibbles about truth and meaning of words aside, I have to support any article that skewers the soft underbelly of the phony AI ecosystem as effectively as this one does.”
I've been collecting examples of where the ads that I see are based on extremely simple algorithms of the type that could have easily been supported 30 years ago, and yet I keep reading articles that suggest that the advertising industry is deploying sophisticated tools to target ads to me. I wrote about this recently:
-------------------------
Despite much talk about Machine Learning and AI improving advertising results, what I’m seeing is getting worse and worse. Despite billions invested, the ads shown to me are much less relevant than the ads I saw on the Web 10 years ago.
I hired 3 developers from Fullstack Academy. They were all great, so I went and checked out the website, curious about the curriculum. And now, every website I go to, I see an advertisement for Fullstack Academy. (See screenshot.)
I’ve been writing software for 20 years. I’ve written semi-famous essays about software development. I am not going back to school. I do not need to go to a dev bootcamp. So why show me ads, as if I’m thinking of going to school?
For the last several years I’ve been seeing articles about the surveillance economy. In theory, advertisers know more about me than ever before. In theory, they know about my entire life. And yet, the ads I see are less targeted than what I used to see online 10 years ago.
I'm reminded of 80s and 90s sales lead phone lists - those used to be marketed as precision means to reach your choice of age, job-level, city, income etc. I once worked for a company that tried a few of these, from allegedly fresh, first generation data. They were universally crap, with the same errors and copies of everyone else's wildly wrong and obsolete garbage. Lists priced per record. Aha!
Adtech is burning down the web and everyone's CPU with all that precise tracking and ML targeting that tells them nothing. Priced per click. How surprising. When Google and Facebook opened some of their profiles to be looked at, maybe 5 years or so back, Google got every major thought about me wrong - my gender, my age, my interests. As it had with most in the office. The whole myth around precision seems no more than a marketing fairy tale to sell ads and justify tracking, very badly.
Peak for advertising being useful was very early web with static page ads, simple keyword ads on search, and the odd site sponsorship. Oh, and "customers also bought" on Amazon, that worked well for books and CDs, but doesn't work at all for the 499 other categories they now sell.
These days I block everything - JS, uBlock, PiHole. I think there's 10 or 20 sites allowed a little JS, and the odd reluctant exception for bloody hateful reCaptcha. The web's speed is lovely again. Haven't seen a web ad for years - except the odd one or two at work.
I’m willing to take a punt and guess that the models they’re using rely on short-term/isolated data, not 10 years’ worth of your entire browsing history.
Apparently you didn't get the targeted ad telling you that advertisers' dark patterns have become so sophisticated that free will is literally fiction now.
If everyone sophisticated enough to be on this site would just use the term “applied computational statistics” (even just in their own thoughts) instead of “deep learning” or AI, the world would be a better place. Gradient descent finds some fun minima (my current venture is heavily based on that idea), but to assign more agency to Adam or RMSProp than they merit is just an exercise in feeding the trolls.
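The "fun minima" remark is easy to demonstrate in one dimension: f(x) = (x^2 - 1)^2 has two minima, at +1 and -1, and plain gradient descent simply rolls into whichever basin the starting point happens to be in (the function and step size are chosen purely for illustration):

```python
def grad_descent(x, lr=0.05, steps=500):
    """Minimise f(x) = (x^2 - 1)^2 by repeatedly stepping downhill."""
    for _ in range(steps):
        grad = 4 * x * (x * x - 1)   # f'(x) by the chain rule
        x -= lr * grad
    return x

# Same algorithm, different starting point, different minimum.
print(grad_descent(0.5))    # rolls into the basin at x = +1
print(grad_descent(-0.5))   # rolls into the basin at x = -1
```

No agency, no intent: just following the slope until it flattens out.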
Couldn’t agree more. All these delusional discussions: Is it intelligence? Is it true intelligence? How far away is true intelligence? Skynet rising?
To be fair: the last question is certainly legitimate when it comes to the deployment of unverified (and unverifiable) algorithms as virtually unsupervised decision makers with life-changing consequences. That is horrible. But it is a different question (and better answered by skipping the pseudo-philosophical part).
Could you please explain in what sense deep learning is "applied computational statistics"?
What about classical planning, SAT solvers, automated theorem proving, game-playing agents and classical search? Could you please explain how one or more of those are "applied computational statistics"?
Further- I don't understand the comment about "agency". Could you clarify? Why is "agency" required for a technique or an algorithm to be considered an AI technique?
There are non-statistical methods for training neural nets (no backprop), so 'applied computational statistics' really wouldn't capture it. Beyond that, what is wrong with the term deep learning? I can at least understand objections to the use of the term AI (even though it was originally used to refer to narrow AI and was appropriated by Hollywood), but deep learning seems like a fine term to me.
In 1996 I made this AIML (Artificial Intelligence Marketing Language) parody by taking an actual VRML article from some shameless trade rag, and globally replacing "Virtual Reality" with "Artificial Intelligence".
(from "ArtificialPostModernIntelligenceInterActivity", V2 #4 April 1996, p. 20)
Another closely related technology is BSML: Bull Shit Markup Language. (Note: most of the features described in the BLINK tag extension were eventually implemented by FLASH!)
At one point years later, somebody actually emailed me, asking me to take it down, because they were developing a "real AIML [TM]" product, and found my parody of their unique original idea to be beneath their dignity, distracting, and confusing to their potential customers using google to search for their prestigious "AIML" product.
> In this way, Dynamic Yield is part of a generation of companies whose core technology, while extremely useful, is powered by artificial intelligence that is roughly as good as a 24-year-old analyst at Goldman Sachs with a big dataset and a few lines of Adderall. For the last few years, startups have shamelessly re-branded rudimentary machine-learning algorithms as the dawn of the singularity, aided by investors and analysts who have a vested interest in building up the hype. Welcome to the artificial intelligence bullshit-industrial complex.
As an AI researcher, I think a lot of people are a little too sensitive to the term "AI" and make a lot of big assumptions upon hearing it. It's a very general term that doesn't really imply any particular degree of complexity or sophistication. Labeling simple machine learning algorithms and heuristics as "AI" isn't at all unique to this era of hype that began in the last ~5 years -- rather that's how the term has been used in academia for many decades. If you took a college class called "AI" or looked up some of the most popular textbooks on AI [1], you'd find that a lot of it is dedicated to search algorithms (breadth-first, depth-first, A*), linear classifiers, and feature engineering. If you think "artificial intelligence" is a bad name for these things, fine -- but don't blame the recent wave of hype, this is what the term AI means and has pretty much always meant. So go ahead and call your startup's linear regression "AI", and if the VCs leap to fund you under the impression that it means you'll be behind the singularity, that's on them. AI != deep learning. AI != AGI.
[1] e.g., "Artificial Intelligence: A Modern Approach" by Russell and Norvig
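For a sense of what those textbook chapters boil down to, here is A* on a toy grid (the grid and coordinates are made up; Manhattan distance serves as the admissible heuristic):

```python
import heapq

# Toy map: S = start, G = goal, # = wall, . = open.
GRID = [
    "S...",
    "##.#",
    "....",
    ".##G",
]

def astar(start, goal):
    def h(p):  # Manhattan-distance heuristic (never overestimates)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (priority, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, cost, pos = heapq.heappop(frontier)
        if pos == goal:
            return cost
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < 4 and 0 <= nc < 4 and GRID[nr][nc] != "#":
                new = cost + 1
                if new < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new
                    heapq.heappush(frontier, (new + h((nr, nc)), new, (nr, nc)))
    return None

print(astar((0, 0), (3, 3)))  # optimal: 6 moves
```

No learning anywhere: just systematic search guided by a heuristic, and it has been called AI since long before the current hype.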
Of all the hypes going around (blockchain mostly, lol), I think AI is going to have the most substance to it. The breadth of problems being solved is much wider, and there is still a lot of research that hasn't really found its way into actual implementations yet.
I honestly think Westworld (yes, the TV series) has the best explanation of why general intelligence is a hard problem to solve.
They talk about consciousness, but I think the same applies to intelligence in general. Humans, in my mind, aren't different from, say, a program you write, except that we have many more inputs and possible outputs, depending on a much larger variety of external variables.
If we could build machines that have eyesight just as we do, muscles just as we do etc I'm sure we could reverse-engineer the human being.
Everything is bullshit until it's not. Humans talked about transportation without animal power for decades before it became a reality, and a lot of people were highly skeptical of such a thing even being possible until it actually happened in 1804 (the first steam train). The same thing is happening with artificial intelligence: we are in such uncharted territory that someone could say AGI is just 10 years away and someone else 100 years away, and both would get the same amount of credibility, meaning nearly none, because we don't even know what it is that we don't know in order to achieve AGI.
Your example isn't quite as it seems: "trains" (cars running on rails) were used in mining for hundreds of years prior, and the steam engine was first documented in 1698. What happened in 1804 was that someone figured out the manufacturing processes to make a steam engine light enough and powerful enough to usefully pull a train of cars over some reasonable distance.
Unless you believe the Kurzweil argument that we will figure out what is needed by reverse-engineering the human brain, in which case you can guesstimate a timeline.
We're not going to have AGI any time soon.
"Introduction to Cybernetics" W. Ross Ashby (1956) http://pespmc1.vub.ac.be/ASHBBOOK.html (PDF kindly made available from that page.)
While the insects have been getting smarter!
https://www.tedmed.com/talks/show?id=7286
https://www.dailystar.co.uk/news/latest-news/403924/spiders-...
[+] [-] YeGoblynQueenne|6 years ago|reply
It's not easy to scale deep learning because deep neural nets have a very strong tendency to overfit to their training dataset and are very bad at generalising outside their training dataset.
In a medical context this means that, while a particular deep learning image classifier might be very good at recognising cancer in images of patients' scans collected from a specific hospital, the same classifier will be much worse in the same task on images from a different hospital (or even from a different department in the same hospital).
To overcome this limitation, the only thing anyone knows that works to some extent is to train deep neural nets with a lot of data. If you can't avoid overfitting, at least you can try to overfit to a big enough sample that most common kinds of instances in your domain of interest will be included in it.
So basically to scale a diagnostic system based on deep neural net image classification to the nation level one would have to train a deep learning image classifier with the data from all hospitals in that nation.
This is not an easy task, to say the least. It's not undoable, but it's not as simple as having someone at Hospital X download a pretrained model in Tensorflow and train its last few layers on some CT scans.
[+] [-] Barrin92|6 years ago|reply
As a specific example, a few months ago or so openai released their text generation tool and branded it as "too dangerous too release", claiming it could , with the help of AI, generate believable texts.
But what it generated was simply natural sounding gibberish. There were plenty of sentences in the text along the lines of "before the first human walked the earth, humans did..""
What, for me at least, lies at the core of intelligence is understanding semantics. An intelligent system can recognise the sentence above as flawed because it could extract meaning.
Everything coming out of the field of ML seems to me just like sophisticated statistics. In many ways symbolic AI to me still seems more valuable, profit aside.
[+] [-] kaolti|6 years ago|reply
I think AI and ML are great for processing large amounts of data and looking for patterns. Patterns on their own don't mean anything though, it's always up to us to interpret them.
[+] [-] ilaksh|6 years ago|reply
AI research actually started by focusing on symbolic AI but eventually it was found to be too difficult to define all of the symbols. See the Cyc project.
AGI as a field aside from narrow AI/narrow ML has been making useful but not mind-blowing progress for decades. The sidebar and recent posts/post history on Reddit r/agi has useful links for learning about the field. Also more and more posts on r/machinelearning are providing more general purpose tools that address some problems like better semantic understanding.
There is a promising strain of research that is focusing on core AGI requirements. One of the big challenges is bridging the gap between low level sensory information and high level concepts. This is known as the symbol grounding problem. In my mind the approaches tackling that type of challenge have a lot of promise. And the amount of research in that area is growing.
[+] [-] Wiretrip|6 years ago|reply
[+] [-] xiaolingxiao|6 years ago|reply
Nonetheless they too are minting the same nonsense in the "introduction" part of academic research, it's a clear case of everyone is playing the game, so "I have to play or be left behind."
[+] [-] ImaCake|6 years ago|reply
What I used to do when writing or speaking about it was to start with cancer or antibiotic resistance as if anyone in my field gave a crap about either of those topics. Sure, we do care about those things in the broad sense, but we didn't consider ourselves to be on the front line of solving either of those problems.
[+] [-] solidasparagus|6 years ago|reply
The closest we've gotten is probably a Dota bot that's pretty good as long as you give the bot a huge advantage. Which is an incredible piece of technology, but about as close to AGI as an ant is to a human.
[+] [-] isolli|6 years ago|reply
"Remarkable things are happening in the field of artificial (general) intelligence. Deep learning algorithms have proven to be better than humans at spotting lung cancer."
This is very emphatically narrow AI.
[+] [-] mindgam3|6 years ago|reply
Not to detract from an otherwise excellent BS takedown, but unfortunately the author fails to mention that there’s a non-zero possibility that AGI itself is merely taking the bullshit to the next level.
It continues to astound me how some technologists actually believe AGI is not just inevitable but around the corner, when from my naive perspective (as a machine-learning rank amateur, but with several decades of experience as a professional human being) all I see is machines that can do some form of pattern recognition, but nothing resembling the common sense that the words "general intelligence" seemed to indicate at one point.
Minor quibbles about truth and meaning of words aside, I have to support any article that skewers the soft underbelly of the phony AI ecosystem as effectively as this one does.
[+] [-] roenxi|6 years ago|reply
Extrapolating the trend of the last 30 years, there is evidence that computers will be able to solve every task a human can using pattern matching. If that isn't AGI, it might turn out to be better than intelligence.
The technological future is unknowable, so believing AGI is certain is too much. But believing it certainly isn't around the corner is also too little. If computers can do anything a human can intellectually, they have reached AGI. The list of discrete tasks (games, decision making once the parameters are defined) a computer can't do is a very short list.
If someone finds an objective function for deciding which decision parameters are important, AGI could be upon us very quickly. As a postscript, I think people radically overestimate human intelligence.
[+] [-] jcelerier|6 years ago|reply
what proof do we have that the brain isn't just doing that ?
[+] [-] runciblespoon|6 years ago|reply
I fully concur ..
[+] [-] lkrubner|6 years ago|reply
-------------------------
Despite much talk about Machine Learning and AI improving advertising results, what I’m seeing is getting worse and worse. Despite billions invested, the ads shown to me are much less relevant than the ads I saw on the Web 10 years ago.
I hired 3 developers from Fullstack Academy. They were all great, so I went and checked out the website, curious about the curriculum. And now, every website I go to, I see an advertisement for Fullstack Academy. (See screenshot.)
I’ve been writing software for 20 years. I’ve written semi-famous essays about software development. I am not going back to school. I do not need to go to a dev bootcamp. So why show me ads, as if I’m thinking of going to school?
For the last several years I’ve been seeing articles about the surveillance economy. In theory, advertisers know more about me than ever before. In theory, they know about my entire life. And yet, the ads I see are less targeted than what I used to see online 10 years ago.
http://www.smashcompany.com/business/when-will-machine-learn...
[+] [-] onion2k|6 years ago|reply
To keep the brand in your head so you post about it on Hackernews.
[+] [-] NeedMoreTea|6 years ago|reply
I'm reminded of 80s and 90s sales lead phone lists - those used to be marketed as precision means to reach your choice of age, job-level, city, income etc. I once worked for a company that tried a few of these, from allegedly fresh, first generation data. They were universally crap, with the same errors and copies of everyone else's wildly wrong and obsolete garbage. Lists priced per record. Aha!
Adtech is burning down the web and everyone's CPU with all that precise tracking and ML targeting that tells them nothing. Priced per click. How surprising. When Google and Facebook opened some of their profiles to be looked at, maybe 5 years or so back, Google got every major thought about me wrong - my gender, my age, my interests. As it had with most in the office. The whole myth around precision seems no more than a marketing fairy tale to sell ads and justify tracking, very badly.
Peak for advertising being useful was very early web with static page ads, simple keyword ads on search, and the odd site sponsorship. Oh, and "customers also bought" on Amazon, that worked well for books and CDs, but doesn't work at all for the 499 other categories they now sell.
These days I block everything - JS, uBlock, PiHole. I think there's 10 or 20 sites allowed a little JS, and the odd reluctant exception for bloody hateful reCaptcha. The web's speed is lovely again. Haven't seen a web ad for years - except the odd one or two at work.
[+] [-] foldingmoney|6 years ago|reply
/s, obviously
[+] [-] mbeex|6 years ago|reply
To be fair: the last question is certainly legitimate with regard to applying unverified (or unverifiable) algorithms to life-changing decisions with virtually no human supervision. That is horrible. But it is a separate question (and better answered by skipping the pseudo-philosophical part).
[+] [-] YeGoblynQueenne|6 years ago|reply
What about classical planning, SAT solvers, automated theorem proving, game-playing agents and classical search? Could you please explain how one or more of those are "applied computational statistics"?
Further: I don't understand the comment about "agency". Could you clarify? Why is "agency" required for a technique or an algorithm to be considered an AI technique?
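To make the contrast concrete: a SAT solver is exhaustive backtracking search over truth assignments, with no statistics anywhere. A toy DPLL-style sketch (illustrative only, vastly simpler than any real solver; the example formula is made up):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver sketch. Clauses are lists of ints in DIMACS
    style (negative = negated literal). Returns a satisfying assignment
    dict {var: bool} or None. Pure backtracking search, no statistics."""
    if assignment is None:
        assignment = {}
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied by an assigned literal
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified -> backtrack
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # branch on the first unassigned variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x3)
model = dpll([[1, 2], [-1, 3], [-3]])
```

Classical planning and game-tree search are the same story: systematic exploration of a discrete state space, which is why calling all of AI "applied computational statistics" doesn't hold up.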
[+] [-] DonHopkins|6 years ago|reply
(from "ArtificialPostModernIntelligenceInterActivity", V2 #4 April 1996, p. 20)
https://www.donhopkins.com/home/catalog/text/SupportForAIML....
Another closely related technology is BSML: Bull Shit Markup Language. (Note: most of the features described in the BLINK tag extension were eventually implemented by FLASH!)
https://www.donhopkins.com/home/catalog/text/bsml.html
At one point years later, somebody actually emailed me, asking me to take it down, because they were developing a "real AIML [TM]" product, and found my parody of their unique original idea to be beneath their dignity, distracting, and confusing to their potential customers using google to search for their prestigious "AIML" product.
[+] [-] throwaway287391|6 years ago|reply
As an AI researcher, I think a lot of people are a little too sensitive to the term "AI" and make a lot of big assumptions upon hearing it. It's a very general term that doesn't really imply any particular degree of complexity or sophistication. Labeling simple machine learning algorithms and heuristics as "AI" isn't at all unique to this era of hype that began in the last ~5 years -- rather that's how the term has been used in academia for many decades. If you took a college class called "AI" or looked up some of the most popular textbooks on AI [1], you'd find that a lot of it is dedicated to search algorithms (breadth-first, depth-first, A*), linear classifiers, and feature engineering. If you think "artificial intelligence" is a bad name for these things, fine -- but don't blame the recent wave of hype, this is what the term AI means and has pretty much always meant. So go ahead and call your startup's linear regression "AI", and if the VCs leap to fund you under the impression that it means you'll be behind the singularity, that's on them. AI != deep learning. AI != AGI.
[1] e.g., "Artificial Intelligence: A Modern Approach" by Russell and Norvig
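For a taste of what those textbook chapters actually contain, here is a minimal A* search over a toy grid (an illustrative sketch under assumed inputs, not taken from any particular textbook):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Textbook A* search. Returns a shortest path from start to goal as a
    list of nodes, or None if unreachable. `neighbors(n)` yields
    (next_node, step_cost) pairs; `h` is an admissible heuristic."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy 5x5 4-connected grid with a Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if 0 <= nxt[0] < 5 and 0 <= nxt[1] < 5:
            yield nxt, 1

path = a_star((0, 0), (4, 4), grid_neighbors,
              lambda p: abs(4 - p[0]) + abs(4 - p[1]))
```

Nothing about this is a neural network or "learning" of any kind, yet it sits in the core chapters of Russell and Norvig, which is exactly the point about what "AI" has always meant.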
[+] [-] ecmascript|6 years ago|reply
They mention consciousness, but I think the same applies to intelligence in general. Humans, in my mind, aren't different from, say, a program you write, except that we have many more inputs and possible outputs, depending on a much larger variety of external variables.
If we could build machines that have eyesight just as we do, muscles just as we do etc I'm sure we could reverse-engineer the human being.
https://www.youtube.com/watch?v=S94ETUiMZwQ