This level of hype reminds me of the AI winter. I'm concerned that public interest will hit a peak and then, a few months later, disillusionment will set in and AI will become a discredited failure in the public's eye, since even rapid progress moves slower than an election or a typical news cycle.
AI Winter was not the result of changing public interest.
It was the result of lost interest from investors and government.
All that money poured into AI research produced little reward. There were expert systems that worked well and became profitable businesses, but otherwise there was little to show. In retrospect, I think it was a good idea to adjust the funding to match the results and wait until computer scientists came up with new ideas.
The current AI boom is the result of the 'Canadian mafia' diligently working and actually producing results, plus faster computing, especially GPGPU.
Unless we get a constant stream of new ideas that build on the current ones, interest and investment should decline once most of the benefits have materialized.
I am optimistic that an AI winter is not in the cards this time.
I won't hold my breath for them to produce anything of value here.
We have almost solved image and speech recognition in the past 5 years. Once that work moves out of academia into real applications, the amount of disruption to society is hard to imagine.
From the press release: "In education, AI has the potential to help teachers customize instruction for each student’s needs."
Does anybody actually do that? Most "online education" still seems to be canned lectures. There was work on this from the 1960s to the 1990s, but efforts seem to have stalled.[1] There are drill-and-practice systems, but they're really just workbooks with automatic scoring.
[1] https://en.wikipedia.org/wiki/Intelligent_tutoring_system
There's a hip startup downtown working on something just like that.[0]
Will these systems be any better than an automatic scoring system with spaced repetition? Well. More fundamentally, what are a student's needs? Sesame Street and other organizations[1] have known for a long time that the most "engaged" learners also happen to be having fun.
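For reference, the "automatic scoring system with spaced repetition" baseline is only a few lines. This is a Leitner-style sketch with made-up intervals, not any particular product's algorithm:

```python
# Leitner-style spaced repetition: a card moves up one box on a correct
# answer and drops back to box 0 on a miss; higher boxes are reviewed
# less often.
REVIEW_INTERVALS = [1, 2, 4, 8, 16]  # days between reviews, per box

def review(card_box, correct):
    """Return the card's new box after one graded review."""
    if correct:
        return min(card_box + 1, len(REVIEW_INTERVALS) - 1)
    return 0

box = 0
for correct in [True, True, False, True]:
    box = review(box, correct)
print(box, REVIEW_INTERVALS[box])  # miss on the third review reset the card
```

That really is the whole trick, which is why "better than this?" is a fair bar to set.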
The only particularly controversial thing in that quote is the word 'teachers'. Left relatively unsupervised, kids today will voraciously seek out YouTube videos to teach themselves how to make a Turing machine in Minecraft. What an intelligent tutor really needs to be able to do is pay attention to what a student is curious about. Ubiquitous sensors will probably play into some of the new efforts. But the biggest leaps will come from systems that help kids learn from each other, together.
[0] https://www.youtube.com/watch?v=1lG4xBoEgZo
[1] http://www.instituteofplay.org
Yes, it's happening, but inexpensive tools aren't widely available or integrated into much content, and content isn't yet being widely developed with dynamic delivery in mind. There needs to be work on how 'teachers customize instruction' that challenges widely used instructional design models.
There's a lot of work being done in places where the pockets are deeper (military, simulations, medicine). Recently, standards (xAPI, Caliper) have started to emerge that enable the decoupling of content, content delivery, and interaction (think MVC pattern), and that enable pervasive, multi-modal learning-activity tracking.
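For context, an xAPI "statement" is just a small JSON document recording who did what to which activity. Here is a minimal hand-built example; the learner details and activity ID are made up, though the verb IRI is one of the standard ADL verbs:

```python
import json

# A minimal xAPI statement: an actor/verb/object triple, optionally
# with a result. Real deployments POST these to a Learning Record Store.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Student",                      # hypothetical learner
        "mbox": "mailto:student@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/fractions-quiz-1",  # made up
        "definition": {"name": {"en-US": "Fractions Quiz 1"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

print(json.dumps(statement, indent=2))
```

Because the statement is decoupled from the content that produced it, any compliant tool can log to, and any dashboard can read from, the same record store.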
With Khan Academy, I recall they tried exposing a measure of competency on different aspects of a subject based on their testing, so that as the interactive human 'teacher' you could help the student more specifically and not waste time drilling what they already know, and especially not waste time on canned lectures done better elsewhere, which everyone can watch as homework.
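A competency measure like that is often estimated with Bayesian Knowledge Tracing: keep a per-skill probability that the student has mastered the skill and update it after each answer. A minimal sketch follows; the slip/guess/learn parameters are illustrative, not Khan Academy's:

```python
def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, transit=0.15):
    """One Bayesian Knowledge Tracing step.

    First compute the posterior probability of mastery given the answer
    (a correct answer may be a guess; a wrong one may be a slip), then
    allow for learning before the next opportunity.
    """
    if correct:
        evidence = p_mastery * (1 - slip)
        posterior = evidence / (evidence + (1 - p_mastery) * guess)
    else:
        evidence = p_mastery * slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - slip))
    return posterior + (1 - posterior) * transit

p = 0.3  # prior belief that the skill is mastered
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

The running estimate is exactly the kind of number a dashboard can surface so the teacher knows what not to drill.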
A fun if unrealistic alternative, have the professor move fast enough to give the illusion of individual clones for each student: https://youtu.be/ZJy8qH8Fw5s
Siyavula [0] is making big advances in this area for high school maths and science education in South Africa. Their intelligent practice platform uses machine learning to pick the 'best' exercise for each individual, so that everyone gets a practice question of optimal difficulty.
[0] http://www.siyavula.com/
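Siyavula hasn't published its exact model, so this is only a generic sketch of "optimal difficulty" selection: an Elo-style scheme that rates students and questions on one scale, predicts the chance of a correct answer, and serves the question closest to a target success rate. All numbers here are hypothetical:

```python
import math

def p_correct(student_rating, question_rating):
    # Logistic (Elo-style) prediction of a correct answer.
    return 1.0 / (1.0 + math.exp(question_rating - student_rating))

def pick_question(student_rating, question_ratings, target=0.7):
    # Serve the question whose predicted success probability is closest
    # to the target (e.g. ~70% chance of success: hard but doable).
    return min(question_ratings,
               key=lambda q: abs(p_correct(student_rating, q) - target))

def update(student_rating, question_rating, correct, k=0.4):
    # Move both ratings toward the observed outcome.
    surprise = (1.0 if correct else 0.0) - p_correct(student_rating, question_rating)
    return student_rating + k * surprise, question_rating - k * surprise

student = 0.0
questions = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(pick_question(student, questions))
```

A nice side effect of this design is that question difficulties calibrate themselves from answer data, with no manual tagging required.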
With all the people scared of AI, I can imagine that if politicians and government officials were replaced with AI in just a small part of the country, then after the initial surprise, satisfaction would be through the roof within weeks, and we would end up with AI running the country.
Even the best AI in the world can't overcome the fact that different interest groups in a country often have orthogonal demands. It might just become better at lying than current politicians.
For software based AI to be successful at what it does, it must compete. If it does not compete well, it will suck, and the government will suck just as bad. Government, at least the way we've been thinking about it thus far, will have to change for AI to be good at it.
AI is, at the end of the day, just software. We have the intellectual tools to make high-quality software and systems, stemming from industrial experience stretching back three quarters of a century. We need to free that knowledge, ossified in dozens of mil-std and similar institutional documents, by re-institutionalising it in the public domain and making free tools and systems available to support the (public) quality processes that a distributed, heterogeneous, partially open-source future AI requires.
The folks here commenting about entrenched power structures should remember that Ed Felten (who put his name on the release) is not a career bureaucrat at all.
True. As FTC CTO, his able past work in privacy and data security was certainly relevant to traditional FTC interests. As Deputy US CTO, I think his agenda has broadened.
There's already a ton of gov't interest/activity on surveillance and security issues, mostly via the military services and adjuncts. I assume this initiative ain't more of that.
This announcement seems to presage greater federal gov't interest and involvement in how computing might be used toward less defensive/clandestine ends, especially in governance (social good), control, and safety, as well as legal implications -- adding AI as the means to serving gov't ends, so to speak.
If so, great. I'd love to see greater OPEN use of computing in government, especially in gathering unbiased metrics and making better use of them to evaluate the outcome of changes in policy.
I suspect we've already replaced 100,000+ jobs in call centers with AI -- you know the menu system that you get before you talk to somebody. (You might not think of that as AI now, but that's "success" -- 20 years ago it undoubtedly was AI.)
I'm not all that excited about interacting with more AI.
I really hate politico speak. What does this really mean?
"to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology."
It's quite a clear statement (though in niche jargon): such language means that they intend to dedicate resources to organizing some events and discussion panels on the topic, and possibly even to some research grants.
The only unclear thing about politico speak generally is that it's not clear whether they will do X, or whether they just want to publicly claim they'll do X for PR or voter support, with no intention of actually doing it.
It means nothing. The government (the 21st-century US government) is so poorly structured to accomplish or influence anything regarding growth that this should be taken as an ROI/sales-pitch invitation. They are saying: in the future we will have a bucket of money to give to our friends... please be our friends so you can try to build something. The first implementation of Obamacare should be a clear indication of this. However, I'm sure there is real investment in interesting technology on the defense side of the fence. Probably more money, and far more interesting problems.
The idea of a computer deciding whether someone is guilty or not is a scary prospect. This is a bandwagon I'm not so sure the government should be so eager to jump on.
We have a law to make humans work exactly the same way. Law is a set of rules that say who is guilty and who's not. It's just that humans are bad at being objective: http://blogs.discovermagazine.com/notrocketscience/2011/04/1...
Given election cycles and how expensive it will be to deal with this problem, no one will address it until the effects are strongly felt by the average voter.