I think there's an interesting disconnect right now between research and practice. Cutting-edge research does feel like it's reaching a plateau - across most AI fields even "major" breakthroughs are only gaining a couple percentage points and we're probably starting to hit the limits of what current approaches can achieve. When the state-of-the-art is 97% on a task, there's only so much room for improvement. Yoav Goldberg posted a tweet about Facebook's RoBERTa model that summed it up pretty well: "oh wow seems like this boring public hyperparameter search is going to take a while" [1]. There's a vague feeling of "What's next?" now that all the benchmarks are fairly well-solved but AI in general clearly doesn't feel solved.
However, state-of-the-art models aren't really used in production yet. I think the trend of "use AI/ML to solve X" has only started to pick up in the past 2 years, and it'll continue well into the 2020s. The process of taking research models and putting them into production is not standardized yet, and many models don't even really work in production - if your model takes a second to do an inference step that's fine for research but maybe not for a real product.
I think in the next decade, on the research side, benchmarks will be beaten less often, and instead there will be more focus on trying out radically new things, understanding weaknesses in current techniques, and finding new measurements that assess those weaknesses. On the industry side, there will still be lots of cool and exciting new achievements as already-known techniques are applied to old problems that haven't been addressed by AI yet.
As an aside, this was the first time in my life that I read the phrase "10s" referring to 2010-2019. Kind of an odd-feeling moment!
[1] https://twitter.com/yoavgo/status/1151977499259219968
When models commonly achieve 97% on a task, it means it's time to define a harder task, as it's long stopped providing any useful signal.
Andrej Karpathy showed during Tesla's Autonomy Day how Tesla had to retrain their DNNs so that they wouldn't get confused by bicycles mounted on vehicles. If 97% means your models get confused by something you see on the road every day, I wouldn't be too pleased about the state of the art.
I'm always bemused by the idea that AI is nothing but machine learning, and ML is nothing but predictive analytics. Equating research in AI with a "boring hyperparameter search" shows how narrow it's become; saying you've "gotten 97% on a problem" refers to, obviously, classification accuracy of a model on a set of labeled instances. "Use AI/ML to solve X" means finding a way to translate X into a prediction task over feature vectors.
There's an old saying "If all you have is a hammer, all your problems start to look like nails." We may see an AI winter come about simply because we run out of things to pound with our hammer.
> Cutting-edge research does feel like it's reaching a plateau
It's really not. The second half of last year alone had MuZero and Megatron-LM, to name just a couple that most scream to me that we are actually progressing towards AGI.
You say ‘When the state-of-the-art is 97% on a task’, but solved tasks are the least interesting tasks.
The current SoTA achieves this 97% at a high cost in the number of samples. We are living proof that it can be done better. I believe there will be a push toward achieving the same generalization with less data.
> translating text into practically every language
Note: they said they have mastered these tasks.
Yeah... I'm not sure a lot of native speakers would agree. Here's a great example of using Google Translate to automatically translate a video game.
https://www.youtube.com/watch?v=_uNkubEHfQU
> Driving cars
I'm not so sure about that one either.
I think we're in the valley where AI can do a lot of things but is hitting limits in accuracy, and humans are still better at some of these things sometimes. That is, the AI isn't always better than humans, even at a specific, non-general task.
Now don't get me wrong, we've made a lot of progress, but I wonder if we can get these things to a place better than humans before the next economic recession. I think the biggest risk to AI is having the money dry up. Right now the hype is strong and the money is (nearly) free. If one of those changes, we could put this back on the shelf for another decade. If we go into a recession, labor will be cheap, so why bother automating with AI?
For example, we had self driving freeway cars back in the 90's.[0][1] Here's one of the lessons learned:
> In 1987, some UK Universities expressed concern that the industrial focus on the project neglected important traffic safety issues such as pedestrian protection.
And who doesn't remember the brilliant Dr. Sbaitso, my childhood therapist? [2]
[0] https://en.wikipedia.org/wiki/Eureka_Prometheus_Project [1] https://www.youtube.com/watch?v=I39sxwYKlEE [2] https://en.wikipedia.org/wiki/Dr._Sbaitso
> Now don't get me wrong, we've made a lot of progress, but I wonder if we can get these things to a place better than humans before the next economic recession.
I don't think it's necessary to reach superhuman performance to achieve automation of great economic value. Some of the most famous AI achievements leverage a fairly modest intelligence improvement with massive amounts of classic automation. E.g., the AlphaGo policy/value networks, combined with MCTS.
It may also be possible for automation to reliably determine when it's encountering a situation it's going to fare badly in, and then hand off to human telepresence control. It wouldn't surprise me if the first self-driving systems worked that way.
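For the curious, the AlphaGo-style pattern mentioned above can be sketched in miniature: a cheap stand-in "network" supplies a policy prior and a value estimate, and a dumb PUCT-style search loop does the heavy lifting. Everything here (legal_actions, apply_action, the priors, the constants) is invented for illustration, and it ignores two-player sign flips and all the other details a real implementation needs:

    import math
    import random

    def legal_actions(state):
        return ["a", "b", "c"]            # toy stand-in for a real game's move generator

    def apply_action(state, action):
        return state + (action,)          # toy stand-in for a real game's transition function

    def fake_network(state):
        # Stand-in for the policy/value networks: uniform prior, random value in [-1, 1].
        actions = legal_actions(state)
        prior = {a: 1.0 / len(actions) for a in actions}
        return prior, random.uniform(-1.0, 1.0)

    class Node:
        def __init__(self, prior):
            self.prior = prior            # P(s, a) from the policy head
            self.visits = 0
            self.value_sum = 0.0
            self.children = {}            # action -> Node

        def value(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.5):
        # PUCT rule: exploit the mean value, explore in proportion to prior / (1 + visits).
        total = sum(child.visits for child in node.children.values())
        def score(item):
            _, child = item
            u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
            return child.value() + u
        return max(node.children.items(), key=score)

    def search(root_state, num_simulations=200):
        prior, _ = fake_network(root_state)
        root = Node(0.0)
        root.children = {a: Node(p) for a, p in prior.items()}
        for _ in range(num_simulations):
            node, state, path = root, root_state, []
            while node.children:                       # selection
                action, node = select_child(node)
                state = apply_action(state, action)
                path.append(node)
            prior, value = fake_network(state)         # expansion + evaluation (no rollout)
            node.children = {a: Node(p) for a, p in prior.items()}
            for n in path:                             # backup (ignores two-player sign flips)
                n.visits += 1
                n.value_sum += value
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

    print(search(()))                                  # prints the most-visited root action

The interesting part is how little of the work the "intelligence" does - most of the lifting is the plain search loop.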
As amusing as the Final Fantasy translation project is, it simply is not the right comparison. It's a case of using the wrong tool for the job.
There are better systems for natural language translation that attempt to keep track of the subject, something that is very often implicit from context in Japanese. Additionally, that subject could be implicit based on the visuals of what is on screen. The enemy names are abbreviations due to artificial limits of the system...
All in all, it's like using a hammer to drive a screw. It might sort of work, but you shouldn't be surprised when it fails. It's just a case of using the wrong tool for the job. That is hardly proof that hammers are bad, or that screws can never be successfully inserted into a board.
As for the translations - even a seemingly innocent sentence like "the sand people ride single file to hide their numbers" is fraught with ambiguity. I wouldn't call "unable to tell file-the-tool from file-the-data-representation from file-the-line" mastery.
Useful for a rough meaning in a completely unfamiliar language, sure - but the hyperhype surrounding the current SoA is actually contributing to the descent into AI winter.
https://www.everything2.com/title/The+sand+people+ride+in+si...
I don't disagree that translation is far from mastered, but bear in mind that Google Translate isn't state of the art, mostly because of computational constraints, and 2016 GTranslate was even worse.
No.
Maybe an "AI Fall", but I doubt there will ever be another true "AI Winter". The AI we have today is too good, and creates too much value... at this point, there is no longer any question as to whether or not there is value in continuing to research and invest in AI.
What will happen, almost without doubt, is that particular niches within the overall rubric of "AI" will go in and out of vogue, and investment in particular segments will fluctuate. For example, the steam will run out of the "deep learning revolution" at some point, as people realize that DL alone is not enough to make the leap to systems that employ common sense reasoning, have a grasp of intuitive physics, have an intuitive metaphysics, and have other such attributes that will be needed to come close to approximating human intelligence.
Disclaimer: credit for the observation about "intuitive physics" and "intuitive metaphysics" goes to Melanie Mitchell, via her recent AI Podcast interview with Lex Fridman.
One other observation... while we still don't know how far away AGI is (much less ASI), or even if it's possible, the important thing is that we don't need AGI to do many amazing and valuable things. I also doubt many people are actually all that disillusioned that we aren't yet living in The Matrix (or are we???).
We could still have the bottom fall out of the term "AI", since there's a big gap between the present reality - no matter how useful - and the aspirational nature of the phrase "Artificial Intelligence". Take any business that brands itself as an "AI startup", any quote from Mark Zuckerberg about solving Facebook's content problem with AI, etc., and replace "AI" with "statistical algorithms" and it just doesn't have nearly the same ring to it. That alone means we're due for some kind of big correction.
> the important thing is that we don't need AGI to do many amazing and valuable things
That's absolutely true, but I think a lot of people still consider actual human-like intelligence, common sense and so on to be important features of what we call AI or AGI. I think it's very obvious when we look at the cultural impact of AI in fiction, or even in discussions around the dangers of AI, that this is what many people in and outside the field are thinking about.
I think it's true that the commercial success of current techniques will endure but then maybe we should start differentiating between a sort of 'automation science' and cognitive/intelligent systems. Because on the latter I really don't think we are seeing much or maybe even any progress.
Reminder: most of the seminal accomplishments of this era's AI wave were actually developed in the 70s-90s. Yes, even GANs and RL. This industry has been riding on the NVIDIA welfare program for the past 10 years. How long until the hardware gets maxed out?
http://people.idsia.ch/~juergen/deep-learning-miraculous-yea...
Maybe the correct way to measure advances in AI is Turing Awards.
Nvidia has been very profitable for the past 10 years: https://www.macrotrends.net/stocks/charts/NVDA/nvidia/net-in.... I would call it a synergy, but it does smell of intellectual welfare.
I've been waiting for the hype and marketing to collapse for a few years.
Probably the fastest way to tarnish public perception of AI would be to keep pushing "AI-enhanced" products in front of the consumer as has been done. These things tend to demo well and have a nice cool factor for the first fifteen minutes or so, but after any kind of prolonged usage the limitations and rough-edges come up quick.
This is brand new technology. It's going to take a few years to reliably productionize - and most of the applied solutions will look nothing like the research. Many real world problems are going to combine multiple neural nets into systems with specific applications and there's a lot of detail to work out.
The hype may collapse in the short term, but that's only because many of the first movers are stereotypical tech startups who overpromise without truly understanding the problem or solution spaces and therefore underdeliver.
But, speaking from personal experience, some of the tech has already been proven - one example is massively accelerated modeling as an alternative to slow finite difference/finite element simulation with 99% accuracy, which will in the next 6-12 months totally change the approach to a wide range of modeling problems, and enable a totally new form of work where instead of setting up a model and waiting days or weeks, one may iterate effectively in real time. There are emerging solutions to knowledge management and "intelligent" data harvesting, where ML outputs are being manipulated in a rudimentary form of reasoning. Think specialized industries like petroleum, mechanical engineering, EM engineering - plenty of "layman" related features like recommendation engines are going to flop, but the cat is out of the bag for heavy industrial knowledge work. Just give it some time - we are on the cusp of a monumental leap in R&D across the spectrum of human endeavor. Very exciting times.
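For what it's worth, the surrogate-modeling idea described above can be sketched in a few lines: sample the expensive solver offline, fit a regressor, then query the regressor interactively. The toy "solver" and the network size below are invented for illustration and have nothing to do with the parent's actual system:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def slow_simulation(params):
        # Hypothetical stand-in for an expensive FD/FE solver:
        # maps two design parameters to a scalar quantity of interest.
        x, y = params
        return np.sin(3 * x) * np.exp(-y) + 0.1 * x * y

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))             # sampled design points
    Y = np.array([slow_simulation(p) for p in X])      # "expensive" offline ground truth

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    surrogate.fit(X, Y)

    # Once trained, evaluating the surrogate is near-instant, which is what
    # enables iterating "effectively in real time" instead of waiting on the solver.
    query = np.array([[0.2, -0.5]])
    print(surrogate.predict(query), slow_simulation((0.2, -0.5)))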
In the sense of AGI, it's all been hype. We are in an ML summer and have been for the past few years.
But "deep learning" is nothing more than that, nothing to do with AGI, we're not approaching an AGI winter except for people who were daft enough to fall for the hype.
There have been no advances in AGI in decades, it's already winter, and we've long been in it.
In terms of research and innovation, yes, but in economic terms, it has not even begun. There is still huuuge VC and government money being pumped into anything with AI on it. The last AI winter started when the financiers discovered the disconnect between the money they put in and delivery on promises.
AI went from an obscure, hard CS field that only a few graybeards at MIT knew anything about to this worldwide meme. Before, the default thing your grandmother would tell you to study in college was business. Now your grandmother would tell you to study AI. I'm seeing a lot of people enter this space with the vague goal of getting rich quick. This is the same cohort that jumped into tech in the late 90s and the real estate market in the mid-2000s. It's not the AI of the Norvig and Marvin Minsky days.
I couldn't be more bearish about AI. I still love it, though. I won't stop studying it when it's no longer cool.
I think we're unfortunately fixated on a very literal reading of the famous Turing test (i.e. cleverly emulating humans = intelligence).
Consider language, for instance. Dolphin communication is intelligent, but does not emulate humans well; whereas the computer program ELIZA (1964) lacked intelligence, but was able to emulate humans well enough to entertain many people for quite some time.
Our current state-of-the-art NLP is - after copious research, talent, and computation - able to emulate human language somewhat better than ELIZA. But is it intelligence? There's certainly a lot of complexity involved, and neural networks show some interesting building-block patterns, but the lack of these algorithms' ability to generalize into new spaces, grow our fundamental understanding of the world around us, or really do anything besides pretend to be a human makes one wonder whether our current "AI" is just a (very good) party trick - a better version of ELIZA.
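For anyone who hasn't looked at how little was behind ELIZA: the whole trick is a handful of pattern/response rules. A minimal sketch (rules invented for illustration):

    import re

    # A few ELIZA-style rules: regex patterns plus canned templates - no model, no understanding.
    RULES = [
        (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    def eliza(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Please, go on."

    print(eliza("I am worried about my thesis"))
    # -> "How long have you been worried about my thesis?"

That is roughly the bar being compared against when people call modern NLP a better party trick.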
I have worked in the field since 1982, so I have experienced “the need to work on other things for a while” to earn a living.
My prediction is that we are going to see a small revolution in cost reduction: hardware for deep learning will get cheaper; great educational materials like fast.ai and Andrew Ng’s lessons will increase the hiring pool of people who know enough to be useful; the large AI companies will continue to share technology and trained models to help their hiring funnel and general PR; and programmer-less modeling will start to become a real thing.
A lot of the cost in AI projects now isn't in training or education, but instead in problem solving and plumbing. Even AI/ML-free projects using things like Kafka and Flink are not cheap.
Coding up a CNN or MLP is not a big deal (see the toy sketch after the list below), but it never really was - it was work to build a C backpropagation implementation, but if I did it in 1995 then anyone could. The question, and the real differentiator, is in answering three problems:
- what's the problem?
- how can we get the data to the system?
- how do we frame the data and output in terms of (any) AI technology?
All of these steps are closely coupled and require expertise.
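To illustrate the "not a big deal" part: a complete two-layer MLP with manual backprop fits in a couple dozen lines of numpy (toy data and arbitrary layer sizes below). None of it touches the three questions above, which is the point:

    import numpy as np

    # A complete two-layer MLP with manual backprop on toy data; the model code is the easy part.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                           # placeholder features
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # placeholder labels

    W1, b1 = rng.normal(scale=0.5, size=(3, 16)), np.zeros((1, 16))
    W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros((1, 1))
    lr = 0.1

    for _ in range(500):
        h = np.tanh(X @ W1 + b1)                            # forward pass
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))            # sigmoid output
        dlogit = (p - y) / len(X)                           # gradient of mean cross-entropy w.r.t. logits
        dW2, db2 = h.T @ dlogit, dlogit.sum(axis=0, keepdims=True)
        dh = (dlogit @ W2.T) * (1.0 - h ** 2)               # backprop through tanh
        dW1, db1 = X.T @ dh, dh.sum(axis=0, keepdims=True)
        W1, b1 = W1 - lr * dW1, b1 - lr * db1
        W2, b2 = W2 - lr * dW2, b2 - lr * db2

    print("train accuracy:", ((p > 0.5) == (y > 0.5)).mean())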
On programmer-less modelling: I still have not seen a tool that is better than code for expressing a model precisely and testably, and my experience is that until we have some running code we don't really know that we understand the system.
The AI summer/winter cycle feels to me like a search algorithm. We have a phase of exploration, which seems to have slower progress, if any, and no one is very sure what the next big thing is, so people start trying many things (the winter "skepticism"). Eventually someone finds a breakthrough and pulls everyone into an exploitation phase, in which everyone knows where to invest and comparatively little effort is required to create progress (the summer "hype"). And eventually all the low-hanging fruit is gone and the search seems to converge to a local maximum, so larger exploration is required again.
So maybe the winter is just as important as the summer. Each winter led to a summer with different focus points (specialist systems and logic, followed by neural networks, Bayesian models and SVMs, and finally deep learning). And after each cycle we have more and more tools, each more useful than the last. Maybe the key to avoiding this strict cycle would be to encourage more exploration during the exploitation phase, giving full support both to incremental ideas that improve on the state of the art and to (potentially) revolutionary ideas that give poor immediate results but create new avenues to investigate.
Of course that's a simplification and there are many aspects to it, including data availability, hardware and tooling that can easily prevent brilliant ideas that were had too soon.
Can't comment on every industry, but in medicine - especially the 'pattern-recognition' specialties such as, foremost, pathology and radiology - the actual implementation/usefulness/impact of "AI" (ML/DL) has not yet gained a foothold.
Yes, it's hyped, but the match between even the current state of DL and what is needed and possible in these specialties is so close to perfect, and the gain is so close at hand. What is holding us back is regulatory issues and technical implementation issues that have nothing to do with the state of DL - just basic IT problems and a lack of standards.
Investments may fall back and companies may stop advertising it as "AI", but the impact of ML/DL in medicine will not fall back.
The "AI" we see today is already effective, just not applied at scale.
Why would there be an AI winter? Was there a car winter after cars became a growing product? Was there a processor winter after microprocessors became a growing product? ERP software?
Didn’t the previous AI winter happen because the hardware wasn’t advanced enough to make the technology useful to most people? Since that is no longer the case, why this consistent belief that there will be another winter?
Research progress might slow down for a bit in some areas of machine learning but the commercialization of existing technology will keep us busy for the next 10 years.
Unlike the past two winters, deep learning is actually enabling a ton of applications that wouldn't have been possible otherwise and we now live in a world with a lot more data and computers to apply it to.
We don't want AI, we want systems that work better autonomously. We have lots of autonomous systems, mostly run by people (a shopkeeper for a shop owner, for instance). Now that we have reached certain limits of purely digital systems, more innovations (i.e., changes leading to better system outcomes) will happen due to human involvement in data understanding and automation. It's just going to look more like people going to work.
The idea that AI (ML models) would be designed once is silly. The tuning and application always involves human judgment over time. We just hide the human contributions to AI/ML systems because it gets too complicated. But really, all good/practicable/in-the-wild AI systems involve a lot of people-in-the-loop!
IMO, no. Unlike the last time, things actually work this time. Perceptual things especially. People in the thread seem to be dismissive of those "single digit percentage point" gains that are being made nearly every month on some important tasks, but those last few percentage points often decide whether the system is garbage or useful. Compare Siri and Google Assistant, for example. Likely a relatively small difference on metrics which results in a _huge_ difference in usability.
Another mistake people make is they look at model performance on academic datasets and draw unsubstantiated conclusions about the usefulness of models. Guess what, practical tasks _do not_ involve academic datasets. Some academic datasets are _stupid hard_ on purpose (e.g. ImageNet, which forces your net to recognize _dog breeds_ that few humans can recognize). If your problem is more constrained, and the dataset is large enough and clean enough, you can often get very good results on practical problems, even with models that do not do all that well in published research.
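One common version of the "constrained problem, clean data" case is reusing a pretrained backbone and training only a small head. A rough sketch, where the class count, learning rate, and dataset are all hypothetical:

    import torch
    import torchvision

    # Reuse an ImageNet-pretrained backbone for a constrained problem; the 5-class
    # head, the learning rate, and the dataset/loader below are all hypothetical.
    model = torchvision.models.resnet18(pretrained=True)
    for p in model.parameters():
        p.requires_grad = False                                  # freeze the backbone
    model.fc = torch.nn.Linear(model.fc.in_features, 5)          # new head for our 5 classes

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    def train_one_epoch(train_loader):
        # train_loader would come from our own clean, constrained dataset, e.g.
        # torch.utils.data.DataLoader(my_dataset, batch_size=32, shuffle=True)
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()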
I actually think that basic deep learning is well on its way into the plateau of productivity. It's not going to be used strictly as AI though, just a more robust type of model fitting than traditional ML which required cleaner data and better extracted features.
While the expectation vs. reality dichotomy is very real, the cost vs. return is just as vital and, ultimately, more easily solvable in the years to come. Curbing the expectations of the money-givers in regards to what they might get out of these ventures is always tough but using tech is going to be cheaper because, well, the price trends for tech have been downward for a while.
Personally, I'm hoping to see more of a shift toward trying new things rather than attempting to perfect the already existing models. This would, well, not solve but circumvent the need to try to improve something when the tools are not there yet. This way, a broader groundwork will be laid.
I think the total economic impact of AI will be greatest for tasks that output high-dimensional data, such as GANs. For the simple reason that it can replace a lot more human labor. A great many jobs could be augmented with such tech.
Furthermore, I think the results from GPT-2 and similar language models show that researchers have found a scalable technique for sequence understanding. They are likely to just work better and better as you throw more data and training time at them. Imagine what GPT-2 could do if trained on 1000x more data and had 1000x more parameters. It would probably show deep understanding in a great variety of ideas and if prompted properly would probably pass a lot of Turing tests. There is evidence that this type of model learns somewhat generally, that is, structures it learns in one domain do help it learn faster in other domains. I am not sure exactly what would be possible with such a model, but I suspect it would be extremely impressive and meaningful economically.
I think we are likely to see that type of progress in the next year or two, and for there to be no AI winter.
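For reference, sampling from the public GPT-2 weights takes only a few lines with the HuggingFace transformers library; the prompt and sampling settings below are arbitrary choices, not anything specific claimed above:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")      # the small public checkpoint
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Will there be an AI winter in the 2020s?"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(input_ids, max_length=80, do_sample=True, top_k=40)
    print(tokenizer.decode(output[0]))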
While I don't think there's going to be an AI winter either, I don't think GPT-2 will achieve sentience or anything close to it.
And that's for the same reason that no matter how much data they feed Tesla's self-driving AI, it will still try to kill you now and then. The problem space is just too big. All the people I know in this space don't think it will be solved for at least a decade and maybe not even then.
But I do suspect the 2020s will see the creation of agents combining classical algorithms with deep neural networks to do amazing things in domains that are closed and constant. But they're all going to be glorified (yet wonderful) unitaskers.
The only thing that worries me is that I don't trust FAANG to do the right thing ever anymore, and it's amazing to me that so many have opted into the panopticon of things in exchange for the ability to order stuff and turn their gadgets on and off.
Does GPT-2 really "understand" anything? I feel like this is pretty quickly going to devolve into a semantic argument, but having interacted with some trained GPT-2 models, it seems to produce only what Orwell would have called duckspeak[0]. There's very clearly no mind behind the words, so it's hard for me to credit it with understanding.
[0] http://www.orwelltoday.com/duckspeak.shtml
Maybe we should call the field something else:
Cybernetic Research (CR)
Computational Cognition (CC)
Statistical Reasoning (SR)
Computational Reasoning (CR)