bhntr3|4 years ago
I am.
> The example Cory puts on policing
My most upvoted comment on this website was discussing this exact scenario. https://news.ycombinator.com/item?id=23655487
Could you perhaps clarify the generalization you're making about me and people like me so I can understand it?
3gg|4 years ago
Also, the lack of thought and accountability that I mention above is, I think, fairly general in my experience, even outside of policing. That is why I don't generally agree with the lunch statement. Guys are having a hell of a party as far as I can tell, at the expense of the horror stories suffered by the victims of these systems.
salawat|4 years ago
That is all part of engineering to me, so by definition, I think many in the field are, in fact, out to lunch.
bhntr3|4 years ago
Many people are very resistant to the idea that their particular work can have a negative impact or that they should take responsibility for that. See Yann LeCun quitting Twitter (https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...).
Other people are very aware of the dangers of their work. But when the money gets big enough, they take their concerns to the bank and their therapist. See Sam Altman's concerns about the dangers of machine intelligence before he invested in OpenAI (https://blog.samaltman.com/machine-intelligence-part-1). Contrast that with his decision to become the CEO, take the company private, and license GPT-3 exclusively to Microsoft (https://www.technologyreview.com/2020/02/17/844721/ai-openai...). He had reasons. He posts here. He might defend himself. But to me it looks like the kind of moral drift I've seen happen when people in Silicon Valley have to make hard choices about money and power.
There are also applications of ML that are generally safe and can benefit society. See the many medical uses, including cancer detection (https://www.nature.com/articles/d41586-020-00847-2). Most of the work being done to expose the risks and biases of ML is being done by researchers who are at least somewhat within the field. In my math and computer science program, two and a half of the 25 students are doing their thesis in safe ML. (I'm giving myself a half because I'm working on logic-based ML.) I don't think it's fair to believe that every person working in ML is participating in something negative for society.
Ultimately, I think we need some reasonable regulation and a lot more funding for research into safe ML. Corporations and governments want ML for purposes that can be unethical. Unfortunately, they also control a lot of the research grants, so they have a disincentive to fund AI ethics or safe ML rather than pushing the boundaries of what ML can accomplish.
Finally, I think many engineers would like their work to be positive for society. Unfortunately, with what we know now, a lot of the edge cases we run into are unfixable. When Google Photos started classifying Black people as gorillas, Google just removed primates from the search terms. Years later, they still hadn't fixed it (https://www.wired.com/story/when-it-comes-to-gorillas-google...). I'm sure most engineers on the project knew that was a hack. When faced with an unfixable issue like that, the engineer either tries to get the company to stop using ML for that problem, compartmentalizes and ignores the issue, or quits. Where do you draw the ethical line? It's good to hold people accountable, but it's unrealistic to expect that to solve the problem.