adityab | 8 years ago

This stuff sounds funny now, and some of us grad students had a good laugh.

But I am worried about the future of ML reporting. The "field" is growing fast, and I think we don't have nearly as many science communicators for AI/ML in particular (and CS in general) as other fields do.

I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god"... etc. Also scary stuff, like calls to take action against evil scientists before it's too late.

There are genuinely bad things that could come of such reporting. Like knee-jerk regulations being imposed on AI research due to irrational fears, or worse - scared and angry vigilantes going after researchers personally.

It's not practical to educate everyone in ML, so I wonder how we will solve this problem.

blowski|8 years ago

Seems like the same problem as nuclear power, GM foods, and just about every other new but complex technology. People don't understand it, and we always fear what we don't understand.

coldtea|8 years ago

>Seems like the same problem as nuclear power, GM foods, and just about every other new but complex technology. People don't understand it, and we always fear what we don't understand.

Well, it's also that people who do understand it can be severely worried about scientists not understanding it and playing fast and loose for profit.

Medicine/biology cannot even put out decent, non-conflicting dietary advice that holds its position for more than 10 years, but they are allowed to assemble genes they half-understand, put them out in an ecosystem whose interactions and complex interplay they understand maybe 10% of, and just see what happens...

adityab|8 years ago

True, but none of those fields has the same kind of "end of humanity" connotations attached in the general psyche.

Clarifications by well-known researchers don't travel as far and wide as urgency-signaling clickbait...

CamperBob2|8 years ago

True, and it doesn't exactly help that respected people like Elon Musk are cranking up a lot of irrational FUD for reasons known only to themselves.

schoen|8 years ago

> or worse - scared and angry vigilantes going after researchers personally.

The Unabomber did target people connected to computer science and IT, including trying to kill people based on his perception of their research vision and agenda. For example, in his letter to his victim David Gelernter, a prominent computer scientist, he complained about "the way techno-nerds like you are changing the world" and cited Gelernter's ideas from Mirror Worlds (a book about technologies that might currently be called VR, AR, and simulation).

http://www.punkcommunity.com/unapack/press/outside/gvmm68e/l...

bduerst|8 years ago

We saw the same layman rhetoric with GMO crops in the late 90's to early 00's. Slippery-slope nightmare scenarios, accusations of playing god, corporate greed run unchecked, etc. It seems to be a recurrent theme for new technology.

Typically, reasonable people don't buy into most of these scare tactics, even if the tactics are being used as clickbait.

j9461701|8 years ago

I think typical, reasonable people do fall for this stuff, though not because they're scared. They fall for it because it's all they ever hear about the issue.

Even the smartest of us can't know everything, and so if all you ever hear about say... IQ tests is "They're bunk, they don't test anything, they're gibberish, they're just an excuse for ivory tower academics to feel better than us" - it becomes a part of your natural understanding of the world, which you don't even think to question. The lies become part of the cultural fabric, and indistinguishable from truth without conducting your own research on what the scientists are actually saying. What you don't know you don't know is the most dangerous stuff of all.

pjc50|8 years ago

There was a real turning of the historical tide between about the 60s and the 80s with regard to this sort of thing; we went from "new technology must be a miracle" to "we've had so many revelations of hidden downsides - thalidomide, leaded petrol, CFCs, acid rain, nuclear fallout, Superfund sites - that anything new is suspect".

One tech business behaving in an untrustworthy manner poisons the pool for everyone else. Sometimes in a very literal way.

shadykiller|8 years ago

We've always played with nature without understanding the repercussions. Some turned out good and some bad. So the best strategy is to have a kill switch.

Unfortunately, GMO has turned out to be a bad experiment (widespread usage of glyphosate) which has badly affected our environment and health, and we are nowhere close to killing it.

https://gmo-awareness.com/resources/glyphosate/

Spooky23|8 years ago

Many of these things have played out. We have declining biodiversity of staple crops, dependence on a small number of herbicides, etc.

nxsynonym|8 years ago

Unfortunately non-sensationalist news doesn't sell.

The layperson is guided by fear and is quick to trust any headline that comes across their newsfeed.

AI (though not Machine Learning to the same extent) has a bad association with far-fetched sci-fi plots and worst-case scenarios. Maybe the best solution would be to re-brand AI/ML as something more abstract.

adityab|8 years ago

I loved how in Arrival, they build a statistical model to map between concepts in the two languages, ostensibly via a joint embedding space.

They don't elaborate on it in the movie, but I could totally see such ideas being explained with style (and tense background music) in future sci-fi films about AI. Make it as banal as possible.
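
For the curious, here is a toy sketch of what "mapping between concepts via a joint embedding space" could look like (assuming numpy; every word, vector, and seed pair below is made up, not anything from the film): embed both vocabularies, learn a linear map from a handful of pairs you think you already understand, then translate an unseen concept by nearest neighbour in the shared space.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4
    human = {w: rng.normal(size=dim) for w in ["water", "walk", "give", "time", "weapon"]}

    # Pretend the alien embedding space is just a rotated copy of ours, plus noise
    rotation = np.linalg.qr(rng.normal(size=(dim, dim)))[0]
    alien = {w + "'": v @ rotation + rng.normal(0, 0.01, dim) for w, v in human.items()}

    # Seed pairs we believe we already understand; "weapon" is deliberately held out
    seed = ["water", "walk", "give", "time"]
    X = np.stack([human[w] for w in seed])
    Y = np.stack([alien[w + "'"] for w in seed])

    # Orthogonal Procrustes: the rotation W that best maps the human space onto the alien one
    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt

    # Map the held-out concept and look up its nearest neighbour among alien words
    q = human["weapon"] @ W
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(max(alien, key=lambda w: cos(q, alien[w])))  # expected: weapon'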

ahartman00|8 years ago

I've been thinking Big Statistics would be a more accurate description, and doesn't sound scary.

arjo1|8 years ago

Perhaps a good layman-type explanation would be that neural networks are essentially curve fitting on steroids. (Hopefully at some point people have done curve fitting in school and remember drawing lines of best fit.) Therefore the term AI is essentially a misnomer. I would even go as far as to emphasize that neural networks are boring mathematical equations which do not actually mimic the inner workings of our brains.
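
To make the analogy concrete, here is a minimal sketch (assuming numpy, with made-up data): a "network" consisting of one weight and one bias, trained by gradient descent, is literally just drawing a line of best fit.

    import numpy as np

    # Toy data: points scattered around y = 3x + 1
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 100)
    y = 3 * x + 1 + rng.normal(0, 0.1, 100)

    # A one-neuron "network": predict y_hat = w*x + b, nudge w and b to reduce squared error
    w, b = 0.0, 0.0
    lr = 0.1
    for _ in range(1000):
        y_hat = w * x + b                      # forward pass: just a line
        grad_w = 2 * np.mean((y_hat - y) * x)  # d(MSE)/dw
        grad_b = 2 * np.mean(y_hat - y)        # d(MSE)/db
        w -= lr * grad_w
        b -= lr * grad_b

    print(w, b)  # ~3 and ~1: the same line a least-squares fit would give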

redcalx|8 years ago

> essentially curve fitting on steroids

Biology is essentially simple chemical reactions on steroids. I.e., you have assumed there is a qualitative distinction between biological brains and artificial neural nets that cannot be overcome by scaling up. However, (A) AI models are many and varied, and new variants are being explored all the time, and (B) there are systems where new dynamics appear at larger scales, thus producing a qualitatively different system based on the same underlying rules, e.g. physics -> chemistry -> biology -> human brains -> social networks.

shiftpgdn|8 years ago

I have a bachelor's degree and have no idea what curve fitting is.

Scea91|8 years ago

How do you know that our brains are not 'just curve fitting on steroids' too?

mcrad|8 years ago

Perhaps it's the ML students who are the laypeople. In my experience, it's not the scientists but the politicians, bankers, and project managers who ultimately will do the damage. AI in an industrial context with real market forces - that's normal. AI sponsored by a consumer products or media giant like Facebook - trying to play god, that part is true.

jo78p|8 years ago

We don't have to. The sewing machine was invented almost two hundred years ago, and its inventor's factory was burnt down and he was chased out of town. But everyone eventually came around. Fast. Same goes for anything else. Dumb people will do dumb things. Our professional reactor class will react for their upvotes/likes/retweets and view counts. But progress doesn't give a shit, as the sewing machine tells us.

mathperson|8 years ago

You laughed? I put my head through a wall... It seems like AI is the new stem cells in terms of public attitudes towards the research. It isn't just laypeople. Elon Musk seems to have no idea what a neural net actually is, and he is funding a private AI lab!

backpropaganda|8 years ago

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

In Elon's case, it's brand/profit/investments instead of salary.

aoeusnth1|8 years ago

Elon is certainly not worried about the insect-level neural networks we have now.

Have you read or engaged with the arguments in Superintelligence? Elon has. Your limited knowledge of the arguments behind AI-risk is more pathetic than the uninformed lay-person's enthusiasm or irrational fear, because you pretend to know what you're talking about.

eli_gottlieb|8 years ago

Just wait until some hypster finds out about the link between machine learning and data compression, then starts referring to zip files as "AI language" all over the place.
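
The link is real, though. A toy illustration of it (hypothetical mini-corpora, nothing like a production system): assign a text to whichever class's corpus helps zlib compress it best, i.e. whichever corpus's statistics best predict it.

    import zlib

    corpora = {
        "python": b"import sys\nfor x in range(10):\n    print(x)\nreturn None\n",
        "recipe": b"preheat the oven, whisk the butter and sugar, bake until golden\n",
    }

    def extra_bytes(corpus: bytes, text: bytes) -> int:
        # How many additional compressed bytes the text costs on top of the corpus alone
        return len(zlib.compress(corpus + b" " + text)) - len(zlib.compress(corpus))

    def classify(text: bytes) -> str:
        return min(corpora, key=lambda c: extra_bytes(corpora[c], text))

    print(classify(b"whisk the butter and sugar"))  # recipe
    print(classify(b"for x in range(10):"))         # python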

EGreg|8 years ago

Ummm you know something is serious when the actual scientists and researchers are clamoring for regulation out of fear.

http://io9.gizmodo.com/prominent-scientists-sign-letter-of-w...

setzer22|8 years ago

There are serious concerns any reasonable person should have about AI, like:

- Should we let a complex statistical test determine whether I am suited for a certain job?

And also unreasonable concerns like:

- Will machines rebel against their human overlords by abusing their power and end up enslaving the human race?

Regulations should be established so AI is used in an ethical way whenever its outcome will affect people's lives. We should stop assuming that all concerns about AI are in the latter group.

throw2016|8 years ago

These stories are usually pushed out to the media. Ideally, AI researchers and startups would communicate more accurately about their current capabilities and how far away they are from anything resembling intelligence or sentience, but it's easy to give in to excitement and speculate about possibilities and scenarios that are far away.

That could help, but the media is diverse, and you can stop media misreporting (and AI is not the only affected area) about as much as you can prevent hype.

The idea of AI has been the stuff of science fiction for decades, so there is always some latent interest. Add to that some heavily promoted film or TV show that touches on these topics, and the media frenzy and scaremongering hit a peak again.

coldtea|8 years ago

>I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god"... etc. Also scary stuff, like calls to take action against evil scientists before it's too late.

Laypeople? Maybe not on this incident, but on general AI progress, that's echoed by Musk, Kurzweil, Hawking, et al.

joemi|8 years ago

What are the counterarguments that should be put forward to those worried about AI? I don't know much/anything about AI/ML research at all, so I don't even know where to begin allaying fears.

ahartman00|8 years ago

Note, I am a novice, so please correct me, but...

TL;DR: it is very limited in what it can do, the accuracy is sometimes (often?) not near 100%, it is an old field, meaning the progress has not all been in the past decade [1], and there is a lot of hype.

I have noticed that various techniques are only good at very specific tasks. CNNs are good at image recognition, RNNs are good for language/grammar, etc. Of course, a model can only recognize images it has been trained on. There are some impressive applications of these specific tasks. For example, with image recognition that can recognize road signs, pedestrians, etc., you could build a rudimentary self-driving car. But it would be wrong to think that anything is possible. IIUC, we have been taking some basic building blocks and constructing systems from them. Cool, but it doesn't mean general AI is right around the corner.
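
As a toy illustration of that point (made-up numbers, not any particular framework): a classifier's softmax always spreads its probability over the labels it was trained with, so a picture of anything else still comes back as one of those labels.

    import numpy as np

    labels = ["stop sign", "pedestrian", "traffic light"]

    def classify(logits):
        # Softmax over a fixed label set: the answer is always one of `labels`
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return labels[int(np.argmax(p))], p

    # Pretend these are the network's outputs for a photo of a cat it never saw
    print(classify(np.array([0.3, 1.2, 0.5])))  # "pedestrian" -- there is no "cat" option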

Even then, good can mean 80% accuracy. I can't think of the paper right now, but I read one where they improved the handling of negation in different parts of the sentence for sentiment analysis. They improved the state of the art from ~80% to 86%, IIRC. They were excited, and I know that science/research is built on incremental progress. But that's going from 1 in 5 wrong to roughly 1 in 7 wrong. Take a look at the generated images from image generation papers. Impressive, but a skilled photoshopper can do much better, based on what I have seen. And some papers are overhyped [2]. I hope I haven't been too hard on anyone's hard work; I'm just trying to ease fears here.

Also, as mentioned in [1], it is a fairly old field, by computing standards of course. For example, backpropagation was a huge breakthrough, but that happened in the 80s. There have been recent breakthroughs, notably deep learning. But it would just be wrong to think that everything you are seeing is the result of the past 10 years. (Which is what I thought until a few months ago :S) Like other scientific research, it would also be wrong to assume it will continue linearly. In fact, there have been multiple AI winters [1].

1. https://en.wikipedia.org/wiki/History_of_artificial_intellig...
2. https://medium.com/@yoav.goldberg/an-adversarial-review-of-a...

EGreg|8 years ago

I wonder how long until AI chats up women online better than 99% of guys.

pvdebbe|8 years ago

Now you said it! How long until someone writes an AI to speed up Tinder discussions so that you can just wait for confirmed physical dates to go to? While Ubering to the cafe, you can read a summary the AI prepared for you, with some tips and tricks.

taneq|8 years ago

Now there's a research project for ya. Maybe someone at Tinder is working on it right now with their massive dataset of pickup lines.

jacquesm|8 years ago

> I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god"...

We have the same here on HN, see some of the comments in this thread:

https://news.ycombinator.com/item?id=14877920

The fear mongering is reaching pretty high levels.

truthexposer|8 years ago

I think this lack of understanding of computer technology by the media has been revealed time and time again, especially in cybersecurity: 10 years ago with Anonymous, and now perhaps with the Russians.