bhntr3|4 years ago

> Are you at all close to this space?

I am.

> The example Cory puts on policing

My most upvoted comment on this website discussed this exact scenario: https://news.ycombinator.com/item?id=23655487

Could you perhaps clarify the generalization you're making about me and people like me so I can understand it?

3gg|4 years ago

Excellent. One problem that I don't see discussed enough -- and not in your other post either -- is that there is a large divide between those who use the technology (the cops in this case) and those who supply it, and there is no accountability in either group when something goes wrong. As you write in your other post, "the system works (according to an objective function which maximizes arrests)", and that is as far as the engineer goes. On the other hand, the cop picks up the technology and blindly applies it. Making any improvement to the system would require both groups to work together, but as far as I know, that is not happening. A recent example can be found in the adventures of Clearview AI. So from that perspective, I do think that the engineers (and the cops, and everybody else) are out to lunch, each doing their own work in a bubble and not paying enough attention to (or caring about) the side effects of this technology's applications.
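
To make the quoted objective concrete, here is a minimal toy sketch (purely illustrative, not any real system's code) of how an arrest-maximizing objective can feed back on itself: the model sends patrols where past arrests are highest, patrol presence generates more recorded arrests there, and the model reads that as confirmation.

    import random

    # Four districts with identical starting arrest counts.
    districts = ["A", "B", "C", "D"]
    arrests = {d: 10 for d in districts}

    def predict_hotspot(history):
        # "The system works": patrol whichever district has the most
        # recorded arrests so far.
        return max(history, key=history.get)

    random.seed(0)
    for week in range(20):
        hotspot = predict_hotspot(arrests)
        # Patrol presence produces more recorded arrests in the hotspot,
        # independent of the true underlying crime rate...
        arrests[hotspot] += random.randint(2, 5)
        # ...while the other districts barely accumulate any records.
        for d in districts:
            if d != hotspot:
                arrests[d] += random.randint(0, 1)

    print(arrests)  # one district runs away with the data

The objective is satisfied at every step, which is exactly why neither the engineer nor the cop ever sees a failure.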

Also, the lack of thought and accountability that I mention above is, in my experience, fairly general, even outside of policing. That is why I don't generally agree with the lunch statement. These guys are having a hell of a party as far as I can tell -- at the expense of the victims of these systems, who suffer the horror stories.

salawat|4 years ago

I second this. I spend a great deal of time digging through the places where we've positioned big-data models to steer population-scale behavior, and the implementers of these systems very rarely stop to analyze the changes they are seeding, or to think beyond the first- or second-degree consequences once things take off.

That is all part of engineering to me, so by definition I think many in the field are, in fact, out to lunch.

skmurphy|4 years ago

   "Don't say that he's hypocritical
   Say rather that he's apolitical
   'Once the rockets are up, who cares where they come down?
   That's not my department!' says Wernher von Braun

   Some have harsh words for this man of renown
   But some think our attitude
   Should be one of gratitude
   Like the widows and cripples in old London town
   Who owe their large pensions to Wernher von Braun"
Tom Lehrer, "Wernher von Braun"

bhntr3|4 years ago

I actually think you're being too generous. Most people who work in ML are not ignorant of its risks and flaws.

Many people are very resistant to the idea that their particular work can have a negative impact, or that they should take responsibility for it. See Yann LeCun quitting Twitter (https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...)

Other people are very aware of the dangers of their work. But when the money gets big enough, they take their concerns to the bank and their therapist. See Sam Altman's concerns about the dangers of machine intelligence before he invested in OpenAI (https://blog.samaltman.com/machine-intelligence-part-1). Contrast that with his decision to become the CEO, take the company private, and license GPT-3 exclusively to Microsoft (https://www.technologyreview.com/2020/02/17/844721/ai-openai...). He had reasons. He posts here. He might defend himself. But to me it seems like the kind of moral drift I've seen happen when people in Silicon Valley have to make hard choices about money and power.

There are also applications of ML that are generally safe and can benefit society. See the many medical uses, including cancer detection (https://www.nature.com/articles/d41586-020-00847-2). Most of the work being done to expose the risks and biases of ML is being done by researchers who are at least somewhat within the field. In my math and computer science program, two and a half of the 25 students are writing their theses on safe ML. (I'm giving myself a half because I'm working on logic-based ML.) I don't think it's fair to believe that every person working in ML is participating in something negative for society.

Ultimately, I think we need some reasonable regulation and a lot more funding for research into safe ML. Corporations and governments want ML for purposes that can be unethical. Unfortunately, they also control a lot of the research grants, so they have a disincentive to fund AI ethics or safe ML over pushing the boundaries of what ML can accomplish.

Finally, I think many engineers would like their work to be positive for society. Unfortunately, with what we know now, a lot of the edge cases we run into are unfixable. When Google Photos started classifying black people as gorillas, Google just removed primates from the search terms. Years later, they still hadn't fixed it (https://www.wired.com/story/when-it-comes-to-gorillas-google...). I'm sure most engineers on the project knew that was a hack. When faced with an unfixable issue like that, the engineer either tries to get the company to stop using ML for that problem, compartmentalizes and ignores the issue, or quits. Where do you draw the ethical line? It's good to hold people accountable, but it's unrealistic to expect that to solve the problem.
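
For a sense of what that hack looks like in practice, here is a hypothetical sketch of an output-side blocklist (the label set and function name are my assumptions for illustration, not Google's actual code):

    # Suppress problematic labels at the output instead of fixing the model.
    BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

    def visible_labels(predicted_labels):
        # The classifier still misfires internally; the offending labels
        # simply never reach the user, and the model goes unfixed.
        return [label for label in predicted_labels
                if label.lower() not in BLOCKED_LABELS]

It hides the symptom while leaving the underlying misclassification untouched, which is why it could sit "unfixed" for years.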

dundarious|4 years ago

3gg was replying to version_five. You're bhntr3. There is no generalization being made about you, or even about people like you, in a post that is a specific response to an account that is not yours.

bhntr3|4 years ago

I believe they are disagreeing about whether "engineers working in this space are out to lunch", and since I have been "an engineer working in this space", I was asking for clarification about what it means to be "out to lunch".