(no title)
adityab|8 years ago
But I am worried about the future of ML reporting. The "field" is growing fast, and we have far fewer science communicators for AI/ML in particular, and CS in general, than other fields do.
I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god"... etc. Also scarier stuff, like calls to take action against evil scientists before it's too late.
There are genuinely bad things that could come of such reporting. Like knee-jerk regulations being imposed on AI research due to irrational fears, or worse - scared and angry vigilantes going after researchers personally.
It's not practical to educate everyone in ML, so I wonder how we will solve this problem.
coldtea|8 years ago
Well, it's also that people who do understand it can be severely worried about scientists not understanding it and playing fast and loose for profit.
Medicine/biology cannot even put out decent, non-conflicting dietary advice that holds its position for more than 10 years, yet they are allowed to assemble genes they half-understand and release them into an ecosystem whose interactions and complex interplay they understand maybe 10% of, just to see what happens...
adityab|8 years ago
Clarifications by well-known researchers don't travel as far and wide as urgency-signaling clickbait...
schoen|8 years ago
The Unabomber did target people connected to computer science and IT, including trying to kill people based on his perception of their research vision and agenda. For example, in his letter to his victim David Gelernter, a prominent computer scientist, he complained about "the way techno-nerds like you are changing the world" and cited Gelernter's ideas from Mirror Worlds (a book about technologies that might currently be called VR, AR, and simulation).
http://www.punkcommunity.com/unapack/press/outside/gvmm68e/l...
bduerst|8 years ago
Typically, reasonable people don't buy into most of these scare tactics, even if the tactics are being used as clickbait.
j9461701|8 years ago
Even the smartest of us can't know everything, so if all you ever hear about, say, IQ tests is "They're bunk, they don't test anything, they're gibberish, they're just an excuse for ivory tower academics to feel better than us" - it becomes part of your natural understanding of the world, something you don't even think to question. The lies become part of the cultural fabric, indistinguishable from truth unless you do your own research on what the scientists are actually saying. What you don't know you don't know is the most dangerous stuff of all.
pjc50|8 years ago
One tech business behaving in an untrustworthy manner poisons the pool for everyone else. Sometimes in a very literal way.
shadykiller|8 years ago
Unfortunately, GMO has turned out to be a bad experiment (widespread usage of glyphosate) that has badly affected our environment and health, and we are nowhere close to ending it.
https://gmo-awareness.com/resources/glyphosate/
nxsynonym|8 years ago
The layperson is guided by fear and is quick to trust any headline that comes across their newsfeed.
AI (and, to a lesser extent, Machine Learning) has a bad association with far-fetched sci-fi plots and worst-case scenarios. Maybe the best solution would be to re-brand AI/ML as something more abstract.
adityab|8 years ago
They don't elaborate on it in the movie, but I could totally see such ideas being explained with style (and tense background music) in future sci-fi films about AI. Make it as banal as possible.
redcalx|8 years ago
Biology is essentially simple chemical reactions on steroids. I.e., you have assumed there is a qualitative distinction between biological brains and artificial neural nets that cannot be overcome by scaling up. However, (A) AI models are many and varied, and new variants are being explored all the time, and (B) there are systems where new dynamics appear at larger scales, thus producing a qualitatively different system from the same underlying rules, e.g. physics -> chemistry -> biology -> human brains -> social networks.
schoen|8 years ago
https://news.ycombinator.com/item?id=14790673
red75prime|8 years ago
Maybe they actually do. http://www.nature.com/articles/srep27755
backpropaganda|8 years ago
In Elon's case, it's brand/profit/investments instead of salary.
aoeusnth1|8 years ago
Have you read or engaged with the arguments in Superintelligence? Elon has. Your limited knowledge of the arguments behind AI-risk is more pathetic than the uninformed lay-person's enthusiasm or irrational fear, because you pretend to know what you're talking about.
EGreg|8 years ago
http://io9.gizmodo.com/prominent-scientists-sign-letter-of-w...
setzer22|8 years ago
There are reasonable concerns, like:
- Should we let a complex statistical test determine whether I am suited for a certain job?
And also unreasonable concerns like:
- Will machines rebel against their human overlords by abusing their power and end up enslaving the human race?
Regulations should be established so that AI is used in an ethical way whenever its outcome will affect people's lives. We should stop assuming that all concerns about AI are in the latter group.
throw2016|8 years ago
That could help, but the media is diverse and you can't stop media misreporting any more than you can prevent hype, and AI is not the only affected area.
The idea of AI has been the stuff of science fiction for decades, so there is always some latent interest. Add to that some heavily promoted film or TV show that touches on these topics, and the media frenzy and scaremongering hit a peak again.
coldtea|8 years ago
Laypeople? Maybe not on this incident, but on general AI progress, that's echoed by Musk, Kurzweil, Hawking, et al.
ahartman00|8 years ago
TL;DR: it is very limited in what it can do, the accuracy is sometimes (often?) not near 100%, it is an old field, meaning the progress has not all happened in the past decade[1], and there is a lot of hype.
I have noticed that various techniques are only good at very specific tasks. CNNs are good at image recognition, RNNs are good for language/grammar, etc. Of course, it can only recognize images it has been trained on. There are some impressive applications of these specific tasks. For example, with image recognition that can recognize road signs, pedestrians, etc., you could build a rudimentary self driving car. But it would be wrong to think that anything is possible. IIUC, we have been taking some basic building blocks and constructing systems from them. Cool, but it doesn't mean general AI is right around the corner.
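To make "very specific" concrete, here is a minimal sketch of a task-specific image classifier (hypothetical shapes and class count, written with PyTorch, not taken from any particular paper or system):

    # A tiny CNN that can only ever answer with one of the
    # fixed set of labels it was trained on.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 32x32 RGB input
                nn.ReLU(),
                nn.MaxPool2d(2),                             # -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # -> 8x8
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TinyCNN(num_classes=10)
    scores = model(torch.randn(1, 3, 32, 32))  # one fake image
    print(scores.shape)  # torch.Size([1, 10])

Feed it anything at all and it will still force the input into one of its ten labels. Stacking blocks like this gets you useful narrow systems, not a mind.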
Even then, "good" can mean 80% accuracy. I can't think of the paper right now, but I read one where they improved the handling of negation in different parts of the sentence for sentiment analysis. They improved the state of the art from ~80% to 86%, IIRC. They were excited, and I know that science/research is built on incremental progress. But that's going from 1 in 5 wrong to about 1 in 7 wrong. Take a look at the generated images from image-generation papers. Impressive, but a skilled photoshopper can do much better, based on what I have seen. And some papers are over-hyped[2]. I hope I haven't been too hard on anyone's hard work, I'm just trying to ease fears here.
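To spell out that arithmetic (my own toy calculation, assuming the ~80% -> 86% figures are right):

    # Accuracy gains look small until you flip them into error rates.
    before, after = 0.80, 0.86
    err_before = 1 - before   # 0.20, i.e. 1 in 5 wrong
    err_after = 1 - after     # 0.14, i.e. about 1 in 7 wrong
    print(round((err_before - err_after) / err_before, 2))  # 0.3

So a 6-point accuracy bump is a 30% cut in errors: real progress, but nowhere near "solved".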
Also, as mentioned in [1], it is a fairly old field, relative to computer standards of course. For example, backpropagation was a huge breakthrough, but that happened in the 80s. There have been recent breakthroughs, notably deep learning, but it would be wrong to think that everything you are seeing is the result of the past 10 years. (Which is what I thought until a few months ago :S) As with other science research, it would also be wrong to assume it will continue linearly. In fact, there have been multiple AI winters[1].
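(If backpropagation is unfamiliar, here is roughly what it boils down to for a single neuron; a toy sketch of my own, not the original 1980s formulation:)

    # Fit one linear neuron y = w*x to a single made-up example
    # by gradient descent, using the chain rule ("backprop").
    x, target = 2.0, 10.0
    w = 0.5            # initial weight
    lr = 0.05          # learning rate

    for _ in range(50):
        y = w * x                      # forward pass
        grad_w = 2 * (y - target) * x  # backward: d(loss)/dw for squared error
        w -= lr * grad_w               # update

    print(round(w, 3))  # ~5.0, since 5.0 * 2.0 == 10.0

The deep-learning breakthroughs layered a lot on top of that, but the core update rule really is that old.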
1. https://en.wikipedia.org/wiki/History_of_artificial_intellig...
2. https://medium.com/@yoav.goldberg/an-adversarial-review-of-a...
jacquesm|8 years ago
We have the same here on HN, see some of the comments in this thread:
https://news.ycombinator.com/item?id=14877920
The fear mongering is reaching pretty high levels.