item 11624142

Preparing for the Future of Artificial Intelligence

174 points | apsec112 | 10 years ago | whitehouse.gov

90 comments

[+] paulsutter | 10 years ago
This level of hype reminds me of the AI winter. I'm concerned that public interest will hit a peak and then, a few months later, disillusionment will set in and AI will become a discredited failure in the public's eye, since even rapid progress moves slower than an election or a typical news cycle.
[+] nabla9 | 10 years ago
The AI Winter was not the result of changing public interest.

It was lost interest from investors and government.

All that money poured into AI research produced few rewards. There were expert systems that worked well and became profitable businesses, but otherwise there was little to show. In retrospect I think it was a good idea to adjust the money to match the results and wait until computer scientists came up with new ideas.

The current AI boom is the result of the 'Canadian mafia' diligently working and actually producing results, plus faster computing, especially GPGPU.

Unless we get a constant stream of new ideas that build on the current ones, there should be reduced interest and investment once most of the benefits have materialized.

[+] kordless | 10 years ago
At this point, I really don't think our government has any idea how to deal with what is coming.
[+] roel_v | 10 years ago
It's a self-fulfilling prophecy - we're now at the 6th season of hearing "winter is coming", it's bound to happen any time now.
[+] studentrob | 10 years ago
Yes. Where was the White House tech group (OSTP) during the Apple & FBI encryption debate? Silent!

I won't hold my breath for them to produce anything of value here.

[+] eva1984 | 10 years ago
I am optimistic; I don't think an AI winter will happen this time.

We have nearly solved image and speech recognition in the past 5 years. Once that work moves out of academia into real applications, the amount of disruption to society will be hard to imagine.

[+] Animats | 10 years ago
From the press release: "In education, AI has the potential to help teachers customize instruction for each student’s needs."

Does anybody actually do that? Most "online education" still seems to be canned lectures. There was work on this from the 1960s to the 1990s, but efforts seem to have stalled.[1] There are drill-and-practice systems, but they're really just workbooks with automatic scoring.

[1] https://en.wikipedia.org/wiki/Intelligent_tutoring_system

[+] dvanduzer | 10 years ago
There's a hip startup downtown working on something just like that.[0]

Will these systems be any better than an automatic scoring system with spaced repetition? Well. More fundamentally, what are a student's needs? Sesame Street and other organizations[1] have known for a long time that the most "engaged" learners also happen to be having fun.

The only particularly controversial thing in that quote is the word "teachers". Left relatively unsupervised, kids today will voraciously seek out YouTube videos to teach themselves how to make a Turing machine in Minecraft. What an intelligent tutor really needs to be able to do is pay attention to what a student is curious about. Ubiquitous sensors will probably play into some of the new efforts. But the biggest leaps will come from systems that help kids learn from each other, together.

[0] https://www.youtube.com/watch?v=1lG4xBoEgZo

[1] http://www.instituteofplay.org
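For comparison's sake, the "automatic scoring system with spaced repetition" baseline can be made concrete. Here is a minimal sketch of the classic SM-2 interval update, the algorithm behind many drill-and-practice systems; the function and variable names are illustrative, not from any specific product:

```python
# Minimal sketch of the classic SM-2 spaced-repetition update.
# All names here are illustrative.

def sm2_update(quality, repetitions, interval_days, easiness):
    """Return the next (repetitions, interval_days, easiness) after a
    review graded 0-5 (quality >= 3 counts as a successful recall)."""
    if quality < 3:
        # Failed recall: restart the repetition sequence.
        repetitions, interval_days = 0, 1
    else:
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * easiness)
        repetitions += 1
    # Adjust the easiness factor; SM-2 clamps it at 1.3.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    easiness = max(1.3, easiness)
    return repetitions, interval_days, easiness

# A card answered perfectly three times in a row:
state = (0, 0, 2.5)
for q in (5, 5, 5):
    state = sm2_update(q, *state)
print(state)  # intervals grow: 1 day, 6 days, then interval * easiness
```

The point of the comparison above is that this kind of scheduler only models recall of a fixed item bank; it has no notion of what the student is curious about.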

[+] ap22213 | 10 years ago
Yes, it's happening, but inexpensive tools aren't widely available or integrated into a lot of content, and content isn't yet being widely developed with dynamic content in mind. There needs to be work on how 'teachers customize instruction' that challenges widely used instructional design models.

There's a lot of work being done in places where the pockets are deeper (military, simulations, medicine). Recently, standards (xAPI, Caliper) have started to emerge that enable the decoupling of content, content delivery, and interaction (think MVC pattern) and enable pervasive, multi-modal learning-activity tracking.
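For context on the standards mentioned: xAPI records learning activity as "actor-verb-object" JSON statements, which is what lets activity tracking be decoupled from content delivery. A minimal sketch of the statement shape (the activity URL and names are made up for illustration; the verb URI is from the ADL vocabulary):

```python
import json

# A minimal xAPI "statement" in the actor-verb-object shape the spec
# defines; the student and activity identifiers are invented.
statement = {
    "actor": {
        "mbox": "mailto:student@example.com",
        "name": "Example Student",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/fractions-quiz-1",
        "definition": {"name": {"en-US": "Fractions quiz 1"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

# Any learning-record store that speaks xAPI can ingest statements in
# this JSON form, regardless of which system delivered the content.
print(json.dumps(statement, indent=2))
```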

[+] Jach | 10 years ago
With Khan Academy I recall they tried exposing some measure of competencies on different aspects of a subject based on their testing, so as the interactive human / 'teacher' you could help the student more specifically and not waste time drilling on what they know, and especially not waste time on canned lectures done better elsewhere which everyone can watch as homework.

A fun if unrealistic alternative, have the professor move fast enough to give the illusion of individual clones for each student: https://youtu.be/ZJy8qH8Fw5s

[+] sixhobbits | 10 years ago
Siyavula [0] is making big advances in this area for maths and science high school education in South Africa. Their intelligent practice platform uses machine learning to pick the 'best' exercise for individuals when they practice to ensure that everyone gets the optimal difficulty practice question.

[0] http://www.siyavula.com/
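The "pick the best exercise" idea can be sketched with a simple Elo-style model, one common approach to adaptive difficulty; this is an illustration under assumed parameters, not Siyavula's actual algorithm:

```python
import math

# Toy Elo-style adaptive exercise selection: estimate the student's
# ability, predict success per item, and pick the item closest to a
# target success rate. All numbers here are illustrative.

def p_correct(ability, difficulty):
    """Logistic probability that the student answers correctly."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def pick_exercise(ability, exercises, target=0.6):
    """Choose the exercise whose predicted success rate is closest to
    the target: hard enough to learn from, easy enough not to frustrate."""
    return min(exercises,
               key=lambda ex: abs(p_correct(ability, ex["difficulty"]) - target))

def update(ability, difficulty, correct, k=0.4):
    """After an attempt, nudge ability toward the observed outcome."""
    return ability + k * ((1.0 if correct else 0.0) - p_correct(ability, difficulty))

bank = [{"id": "easy", "difficulty": -2.0},
        {"id": "medium", "difficulty": 0.0},
        {"id": "hard", "difficulty": 2.0}]
ability = 0.0
ex = pick_exercise(ability, bank)
print(ex["id"])           # the medium item is closest to the 60% target
ability = update(ability, ex["difficulty"], correct=True)
print(round(ability, 3))  # ability rises after a correct answer
```

Production systems would fit item difficulties from attempt data rather than hand-assigning them, but the selection loop has this basic shape.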

[+] desireco42 | 10 years ago
With all the people scared of AI, I can imagine that if we replaced politicians and government officials with AI in just a small part of the country, then after the initial surprise, satisfaction would be through the roof within weeks and we would have AI running the country.
[+] jcfrei | 10 years ago
Even the best AI in the world can't overcome the fact that different interest groups in a country often have orthogonal demands. It might just become better at lying than current politicians.
[+] kordless | 10 years ago
For software based AI to be successful at what it does, it must compete. If it does not compete well, it will suck, and the government will suck just as bad. Government, at least the way we've been thinking about it thus far, will have to change for AI to be good at it.
[+] w_t_payne | 10 years ago
AI is, at the end of the day, just software. We have the intellectual tools to make high-quality software and systems, stemming from industrial experience stretching back three quarters of a century. We need to free that knowledge, now ossified in dozens of mil-std and similar institutional documents, by re-institutionalising it in the public domain: making free tools and systems available to support the (public) quality processes that a distributed, heterogeneous, partially open-source future AI requires.
[+] erubin | 10 years ago
The folks here commenting about entrenched power structures should remember that Ed Felten (who put his name on the release) is not a career bureaucrat at all.
[+] randcraw | 10 years ago
True. As FTC CTO, his able past work in privacy and data security was certainly relevant to traditional FTC interests. As Deputy US CTO, I think his agenda has broadened.

There's already a ton of gov't interest/activity on surveillance and security issues, mostly via the military services and adjuncts. I assume this initiative ain't more of that.

This announcement seems to presage greater federal gov't interest and involvement in how computing might be used toward less defensive/clandestine ends, especially in governance (social good), control, and safety, as well as legal implications -- adding AI as the means to serving gov't ends, so to speak.

If so, great. I'd love to see greater OPEN use of computing in government, especially in gathering unbiased metrics and making better use of them to evaluate the outcome of changes in policy.

[+] clarkmoody | 10 years ago
How wonderful would it be to replace many government jobs with AI ;-)
[+] chubot | 10 years ago
I suspect we've already replaced 100,000+ jobs in call centers with AI -- you know the menu system that you get before you talk to somebody. (You might not think of that as AI now, but that's "success" -- 20 years ago it undoubtedly was AI.)

I'm not all that excited about interacting with more AI.

[+] mtgx | 10 years ago
Don't worry, we'll still pay 50%+ of the budget to defense contractors somehow.
[+] bogomipz | 10 years ago
I really hate politico speak. What does this really mean?

"to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology."

[+] PeterisP | 10 years ago
It's quite a clear statement (though in niche jargon): such language means they intend to dedicate resources to organizing some events and discussion panels on the topic, and possibly even to some research grants.

The only unclear thing about politico speak generally is that it's not clear whether they will actually do X, or whether they just want to publicly claim they'll do X for PR or voter support, with no intention of following through.

[+] ProAm | 10 years ago
It means nothing. The government (the 21st-century US government) is so poorly structured to accomplish or influence anything regarding growth that this should be taken as an ROI/sales-pitch invitation. They are saying: in the future we will have a bucket of money to give to our friends... please be our friends so you can try to build something. The first implementation of Obamacare should be a clear indication of this. However, I'm sure there is true investment in interesting technology on the defense side of the fence. Probably more money, and way more interesting problems.
[+] bsbechtel | 10 years ago
The idea of a computer deciding whether someone is guilty or not is a scary prospect. This is a bandwagon I'm not so sure the government should be so eager to jump on.
[+] exit | 10 years ago
They should discuss a universal basic income to counter the pervasive unemployment which AI will bring with it.
[+] qaq | 10 years ago
Given election cycles and how expensive it will be to deal with this problem, no one will address this until the effects are strongly felt by the average voter.
[+] stephenhess | 10 years ago
Love this but they're really not holding one of these in the Bay Area? -_-