Teach AI using our behaviour, AI learns our behaviour. A bit like our children. I'm genuinely confused as to the alternative.
The objection seems to be based on the fallacy that technological progress equals social or political "progress". Why on earth would we expect AI decision making to display a lack of prejudice when human decision making is suffused with it?
The only people who expect technology to act like a benevolent god are the ones who have replaced their god with it. All technological progress does is to increase the power and influence of human beings. The progress the writer seems to want is socio-political, not technological.
> Why on earth would we expect AI decision making to display a lack of prejudice when human decision making is suffused with it?
Particularly when we consider what we mean by prejudice, which is presumably something like "making a decision on grounds which we deem it important to ignore". This is a very complex concept. It's a function of society, and changes with society. It's not something with a rigorous definition.
Obvious example: reasonable modern people know it's indefensible to make an engineering hiring decision on the grounds of ethnicity, regardless of whether there are any correlations associated with ethnicity. This is even enshrined in law in many countries.
To make a decision on the grounds of someone's qualifications and job experience, however, does not count as prejudice.
We should expect a machine learning system to act as a correlation-seeker (that is after all what it is designed to do), without a nuanced understanding of what prejudice means.
We've seen this issue crop up in the context of an AI having a say in parole decisions [0]; there's also relevant discussion at [1].

[0] https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-...

[1] https://news.ycombinator.com/item?id=13156153
We are humans, living in a human society, with human values and purposes, and the associated processes and conflicts, at every point of social scale. Furthermore, we are a process, with a past and an unguessable future, at every point of social scale.
It is patently an act of incredible destruction to increasingly, globally and inescapably “control” important domains of human lives, relationships and society with extremely superficial centralized mathematical models of what humanity is.
There will be unintended consequences. They will be horrifying, and we may not even grasp what we've lost.
>> Why on earth would we expect AI decision making to display a lack of prejudice when human decision making is suffused with it?
According to the article, that is how AI decision making is presented by "technocrats everywhere". For instance:
> Empiricism-washing is the top ideological dirty trick of technocrats everywhere: they assert that the data “doesn’t lie,” and thus all policy prescriptions based on data can be divorced from “politics” and relegated to the realm of “evidence.”
The article is presenting, and supporting, the opposite opinion to that of "technocrats everywhere".
Maybe the writer should be understood here as criticising the hype around “AI”. AI is hyped as somehow allowing us to transcend our current approaches and find new patterns in the data, but what it really does is make us dig our heels deeper into our existing methodologies.
> "Teach AI using our behaviour, AI learns our behaviour. A bit like our children. I'm genuinely confused as to the alternative."
The ultimate goal of teaching, however, is teaching how we arrived at a given behavior. And amongst the most prominent themes in this is dealing with new situations and adapting to change. Teaching is not about imitation, it's about transcending the example.
Ever hear the phrase "truth from the mouth of babes"? ML systems are a lot like children, and like children, frequently state things that are both true and impolite. Sometimes, though, these things need to be said.
Conservative in this sense means something like 'resistant to deviation from established norms'. I think a lot of the headline-only readers take conservative to mean 'of the character of a specific political movement' which ironically seems more activistic than change resistant.
Perhaps you could call it "integral controller" (like the I in "PID controller")? Because systems that have a memory behave like ones, and we are definitely in a feedback loop with those systems.
And, from what I remember from my control theory classes, the integral part of a controller introduces lag, inertia, generally making the output more resistant to input changes.
(Also note that the "non-conservative" Proportional and Derivative components in a PID by definition don't learn - they react to input and its change.)
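The memory effect described above can be sketched in a few lines of Python (a toy simulation; the gains and the first-order plant are invented for illustration, not taken from any real system):

```python
# Toy discrete-time PI controller driving a first-order plant.
# The integral term accumulates past error (the controller's "memory"),
# so after tracking one setpoint for a long time, the output resists
# moving away from it.

def simulate(kp, ki, setpoints, dt=0.1):
    """Run the loop over a sequence of setpoints; return the plant output."""
    y = 0.0          # plant output
    integral = 0.0   # accumulated error (only the I term uses history)
    trajectory = []
    for sp in setpoints:
        error = sp - y
        integral += error * dt
        u = kp * error + ki * integral  # PI control law
        y += (u - y) * dt               # simple first-order plant
        trajectory.append(y)
    return trajectory

# Track setpoint 1.0 for a while, then step down to 0.0.
steps = [1.0] * 300 + [0.0] * 50
p_only = simulate(kp=2.0, ki=0.0, setpoints=steps)
pi = simulate(kp=2.0, ki=1.0, setpoints=steps)
```

With the same proportional gain, the PI controller tracks the setpoint exactly while the P-only one settles with a steady-state offset; but once the setpoint steps down, the wound-up integral keeps the PI output elevated for longer, which is precisely the lag and inertia described above.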
Mostly only in America does it have this association. I read it as a bias towards not changing, unless it's something clearly related to the US political system.
The article calls machine learning "conservative" because it only tells us what is, and not what should be. I don't think that's a useful framing. It's more accurate to say that, like all statistical techniques, machine learning is descriptive, not prescriptive. Not everything in the world has to be prescriptive.
Examples are good and on point, the conclusion is not. When he tries to frame it all in some grand political/quasi-philosophical manner, it becomes outright wrong and stupid, but I won't argue with that part, because it won't be useful to anyone.
What I want to point out is that nothing he says should be attributed specifically to "machine learning". Machine learning is a set of techniques to make inferences from data automatically, but there is no implicit restriction on what the inferences should be. So machine learning is not "conservative" — almost all popular applications of it are. There is no inherent reason why an ML algorithm should suggest the most similar videos to the ones you watched recently. The same way you can use (say) association learning to find the most common item sets, you can use it to find the least common item sets with a given item, and recommend them instead. Or anything in between. But application designers usually choose the less creative option (understandably so) and recommend stuff similar to what you already got.
Sometimes it's ok: if the most popular thing to ask somebody to sit on nowadays is "my face", it's only logical to advise that; I see nothing wrong with this application. But many internet shops indeed could benefit from considering what a user has already bought (from this very shop), because it isn't likely he will want to buy a second similar, but different, TV anytime soon. Or, when recommending a movie, you could try to optimize for something different than "the most popular stuff watched by people that watched a lot of stuff you did watch" — which is a "safe" thing to recommend, but at the same time not really interesting. Of course, finding another sensible approach is a lot of work, but that doesn't mean there isn't one: maybe it could be "a movie with an unusually high score from somebody who also highly rated a number of movies you rated higher than average".
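That choice is easy to make concrete. In the sketch below (invented toy baskets, not a real recommender), the same co-occurrence counts drive either a "more of the same" recommendation or its deliberate opposite:

```python
from collections import Counter

# Toy purchase baskets (invented data).
baskets = [
    {"tv", "hdmi_cable"},
    {"tv", "hdmi_cable", "soundbar"},
    {"tv", "wall_mount"},
    {"novel", "bookmark"},
    {"novel", "reading_lamp"},
]

def cooccurrence(target):
    """Count how often each item was bought together with the target."""
    counts = Counter()
    for basket in baskets:
        if target in basket:
            counts.update(basket - {target})
    return counts

def recommend_similar(target):
    """The usual choice: the item most often bought with the target."""
    return cooccurrence(target).most_common(1)[0][0]

def recommend_novel(target):
    """The same statistics, inverted: catalogue items never bought with it."""
    seen = set(cooccurrence(target))
    catalogue = set().union(*baskets) - {target}
    return sorted(catalogue - seen)

print(recommend_similar("tv"))  # hdmi_cable
print(recommend_novel("tv"))    # the items from the "books" cluster
```

Both functions consume exactly the same statistics; "conservative" versus "novelty-seeking" is a design decision layered on top of them.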
While Cory is talking about a slightly different facet of conservatism, I found it quite ironic that he made a rather conservative statement himself:
>Nor is machine learning likely to produce a reliable method of inferring intention: it’s a bedrock of anthropology that intention is unknowable without dialogue. As Cliff Geertz points out in his seminal 1973 essay, “Thick Description,” you cannot distinguish a “wink” (which means something) from a “twitch” (a meaningless reflex) without asking the person you’re observing which one it was.
Was this mathematically proven? It's definitely an interesting statement, since a lot of "AI" systems try to predict intention and do a piss poor job of it, but citing anthropological forebears as having proclaimed for all eternity that a computer can never infer even the slightest fraction of intention from observation alone seems like overreach.
He specifically used the example of predictive policing. I have a question: is it racist to send police patrolling areas of high crime, if those areas have a majority ethnic demographic? Should that information be discarded, and all places patrolled equally? Should they be patrolled less? It seems like there is no winning. You're either wasting resources, ignoring a problem, or being perceived as racist.
The problems of appropriate policing in so-called "high crime" neighborhoods are well understood. Many academic studies, as well as popular non-fiction like Jill Leovy's Ghettoside and Chris Hayes' A Colony in a Nation, have discussed the issue. To sum up the literature in a few sentences, the problem is that minority neighborhoods are both overpoliced and underpoliced. There are a lot of useless arrests for minor crimes like jaywalking, which makes the residents of these neighborhoods hostile to the police. (These arrests are driven by the debunked theory of broken windows policing.) Simultaneously, there's not enough effort put into solving serious crimes like murder. In this context, the actual effect of predictive policing is that it ends up doing more of the useless over-policing, which unfortunately makes these neighborhoods even worse.
So, how does this relate to your question? The point is that predictive policing is solving the wrong problem. What's needed are not more accurate neural nets predicting crime, but techniques for addressing the underlying sociological factors that cause crime.
Taking a step back and speaking broadly, Cory's point is that the focus on data and quantitative analysis is causing problems in two ways: (i) people are using quantitative methods to solve the wrong problems, and (ii) they seem to be oblivious to (and in some cases actively hostile to acknowledging) the harms being perpetrated by their methods. Both of these problems seem to be driven by a lack of understanding of well understood (but non-quantitative) social science literature.
Here are three different examples that would meet your description, with very different answers:
1) The X community has exactly the same, or lower, crime rates than the entire nation. However, nationwide anti-X sentiment means that despite this, most convicted criminals are from the X community. This makes it look like the regions where the X community live are high-crime regions.
This is “accidentally racist”. Researchers know about this problem.
2) The X community is more prone to crime, and as this is purely an example, it just is and there’s no need to justify that.
Extra police in this scenario is not racist, though I suspect anyone who jumps right in and assumes it to be true about the world might well be racist themselves.
3) A confounding variable, such as income or education, means that members of the X community are more likely to commit crimes than the general population, albeit at the same rate as the equivalent income/education/confounding subset of the general population.
In this case, while it would not be racist to send in more police, it would totally be racist if politicians applied unequal effort to solve the confounding variable within the general population as compared specifically to group X.
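Scenario 3 can be made concrete with invented numbers: offending depends only on the income stratum, but group X is concentrated in the low-income stratum, so the marginal rates differ even though the within-stratum rates are identical:

```python
# Invented figures for scenario 3: the offence rate is a property of the
# income stratum alone, but group X is over-represented in the low-income
# stratum, so its overall rate comes out higher.
strata = {
    "low_income":  {"rate": 0.10, "X": 800, "general": 200},
    "high_income": {"rate": 0.02, "X": 200, "general": 800},
}

def overall_rate(group):
    offenders = sum(s["rate"] * s[group] for s in strata.values())
    population = sum(s[group] for s in strata.values())
    return offenders / population

print(round(overall_rate("X"), 3))        # 0.084
print(round(overall_rate("general"), 3))  # 0.036
```

A naive model trained on the marginal rates would treat group membership as predictive, while conditioning on the confounder makes the group effect vanish entirely; that is exactly why applying unequal effort to the confounder itself would be the discriminatory act.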
"Patrolling" is pretty vague, the concern is primarily about groundless stopping, searches, arrests and shooting of suspects. It is discrimination if e.g. groundless searches are done at higher rates on people with a certain ethnicity, since presumed innocent individuals of any ethnicity should be treated equally and with the same rights.
Stopping cars just because of the skin color of the driver is not "perceived as racist", it is racist, since you are treating people worse due to their race.
How do you objectively determine "areas of high crime"? That's a core problem. We don't have data on the actual incidence of crime, only data filtered through the justice system.
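Worse, the filter feeds back on itself when the filtered data decides where to look next. A deliberately crude sketch (all numbers invented): two areas with identical true crime rates, patrols sent wherever past arrests are highest, and crime recorded only where a patrol is present:

```python
import random

random.seed(0)

TRUE_RATE = {"A": 0.05, "B": 0.05}  # identical by construction
arrests = {"A": 1, "B": 0}          # a single early arrest in A, by chance

for day in range(1000):
    # Send the patrol wherever the data says crime is "highest".
    patrolled = max(arrests, key=arrests.get)
    # Crime enters the data only if a patrol is there to observe it.
    if random.random() < TRUE_RATE[patrolled]:
        arrests[patrolled] += 1

print(arrests)  # all recorded crime piles up in A; B records none
```

The recorded data now "confirms" that A is the high-crime area even though both areas were identical, and any model trained on these arrest records inherits the artefact.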
I suspect that there is indeed no winning, and that under such circumstances every possible course of action is likely to be perceived as racist by some number of people. This is because the population at large doesn't seem to be able to agree on what the word actually means (https://slatestarcodex.com/2017/06/21/against-murderism/).
Regarding ML based predictive policing specifically, it seems like a spectacularly bad (but efficient and statistically effective!) idea to naively apply a machine learning based classification approach to such a problem.
However, I would refine the claim "Machine learning is fundamentally conservative" to say "Reducing a distribution of predictions to only the most likely prediction is conservative."
It isn't /fundamentally/ conservative, it is just typically programmed to choose the most conservative (highest probability) predictions. You could integrate a liberal aspect by fuzzing the decision process to choose from lower probability predictions.
More creativity, and ability to escape local minima, but at some cost when dealing with 'typical' cases and when making particularly damaging mispredictions.
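That fuzzing is standard practice in, for example, text generation: replace the argmax with sampling at a temperature. A minimal sketch with invented scores:

```python
import math
import random

def predict(scores, temperature=0.0, rng=random):
    """Turn model scores into a single prediction.

    temperature == 0: the 'conservative' choice, always the argmax.
    temperature  > 0: sample from the softmax, so lower-probability
    options are sometimes chosen (more often as temperature rises).
    """
    labels = list(scores)
    if temperature == 0.0:
        return max(labels, key=lambda l: scores[l])
    weights = [math.exp(scores[l] / temperature) for l in labels]
    return rng.choices(labels, weights=weights, k=1)[0]

scores = {"safe_pick": 3.0, "long_shot": 1.0}

print(predict(scores))  # always "safe_pick"
picks = {predict(scores, temperature=1.0) for _ in range(200)}
# with overwhelming probability, both options appear in `picks`
```

Raising the temperature buys exploration at exactly the cost noted above: more misses on typical cases, and occasionally a badly damaging long shot.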
This is a problem in almost every academic field right now. People's lack of sophistication with understanding the mathematical/philosophical constraints of their tools is incredibly scary.
For example, people throw around the low labor productivity stat all the time to prove that no automation is happening, not realizing that GDP is the numerator of that stat. Well, GDP is a pretty terrible gauge of who is benefitting from technological innovation, as it is distributed incredibly unevenly, probably more so than income even. The problem with automation isn’t that it’s not happening, it’s that only a very small number of people are capturing the wealth that it is generating. Also, it is generating a small number of professional jobs, but mostly the jobs it is generating suck (Uber driver, AMZN warehouse worker, etc).
Likewise, looking only at nominal wage increases is a “pretty terrible” way of gauging who benefits from increased wealth. If improvement in productivity (automation) results in lower prices (or better or more goods for the same price), that increases the purchasing power of a worker’s earnings, even if their wage has not risen.
In other words, a worker is richer if they can buy more and better stuff even without any rise in their wage.
The fact that everyone can afford a smartphone, including those with jobs that “suck”, does not reconcile with “only a very small number of people are capturing the wealth that it is generating.”
In the 1990s and early 00s, automation happened mostly in manufacturing. The low productivity growth stats that stupid (mainstream) "economists" like Krugman cite to argue against people like Yang or Musk measure all labor. But manufacturing makes up less than 1/5 of the US economy. If you look at the labor productivity in the manufacturing sector alone, it's congruent with the automation argument [1].
Productivity numbers are lagging because investments in new tech need time to bear fruit. Automation of the service sector has already started and will show in the numbers soon enough. Given that the service sector is four times the size of manufacturing in the US, it will be even better for GDP and even worse for people if again nothing is done and people keep listening to those intellectual frauds who call themselves "economists."

[1] https://www.bls.gov/lpc/prodybar.htm
Reminds me of the thought experiment where I can duplicate myself with a robot. Who would benefit? Me, enjoying an eternal vacation while my robot does my job for me, or my employer, who lays me off and uses the robot instead?
Clearly, if the latter were to happen, and we are on that path right now, we are bound for a dystopian future.
Old economic models don't work anymore; we need radical changes. If we don't plan them, they will come in the form of either revolution or brutal repression (à la China).
I am open-minded to the idea that "Machine Learning" is conservative in the manner Doctorow describes.
However, I do wish we would not use the word "conservative" as an epithet. I think it's quite likely that "conservative" is exactly what we should be looking for in algorithms and prediction engines.
The fact that their properties are misused and infused with reactionary (not necessarily conservative) biases by humans should not make us attach morally negative properties to being conservative.
FWIW, my conservatism leads me to be suspicious of employing these tools in the first place ...
When you have a dumb machine that simply solves for maximum likelihood, using past data to predict the future is what you’re going to get. Why is this surprising?
I don’t understand what he’s arguing for, exactly. That we should suddenly have AGI, to understand intent and disentangle causation from correlation? I don’t think anyone in the machine learning community would argue against that, but the question is how.
I don't think Cory is writing for the machine learning community.
Cory is trying to inform people that AI and ML aren't a magical solution to all our wasteful systems because in some cases we need to throw away the system and start again, and AI can't tell us that. It can only tell us the best way to run the current system. That's what he means when he says AI is conservative.
It's not surprising at all if you understand AI but most non-tech people don't.
Did you think this or the other works mentioned are written for the machine learning community? They are written so the rest of society can understand what is happening despite mountains of hype to the contrary and the marketing of ML as magic.
AlphaZero is not conservative at chess or Go. It doesn’t have to have seen a position before to evaluate it.
You can always train a model to reject a class just as easily as you train it to accept a class. So train it to reject a common class and accept a mutant and it will function more like an evolutionary algorithm that protects against bad luck like bacteria with antibiotics.
...also, as soon as you bring a network element or social element into it, it may no longer be this conservative/self-reinforcing thing. For example, if the algorithm behind Spotify were to identify a "music taste clone" of yours somewhere in the world, it could present you with music you've never heard of that your clone has liked, and vice versa. So you actually start discovering new stuff that you end up liking.
Furthermore, there is a psychological element at play around mirroring back your own intelligence at you (see Eliza / Rogerian Psychotherapy) in a way that will lead you to new thought.
> Nor is machine learning likely to produce a reliable method of inferring intention: it’s a bedrock of anthropology that intention is unknowable without dialogue. As Cliff Geertz points out in his seminal 1973 essay, “Thick Description,” you cannot distinguish a “wink” (which means something) from a “twitch” (a meaningless reflex) without asking the person you’re observing which one it was.
I see this and other examples of "explainability" from time to time as proof why humans are not a "Black Box."
However it rests on two faulty assumptions.
1. The explanation will be truthful
2. The explainer can always reliably describe the actual cause of their own actions
For the purposes of theory, you could explain away #1. However, a minute of introspection will make you realize that you would fail at #2 the vast majority of the time, relying on storytelling and retroactive explanations to account for your own behavior.
> Search for a refrigerator or a pair of shoes and they will follow you around the web as machine learning systems “re-target” you while you move from place to place, even after you’ve bought the fridge or the shoes
> ...
> This is what makes machine learning so toxic.
I'm saddened to learn about our toxic contributions to society and can't wait to hear about alternative mind-reading approaches for fridge recommendation in the next article.
Words like 'conservative' and 'toxic' are misleading because they imply that there are better alternatives that are not being chosen. Far better are the terms by commenters here, 'descriptive' and so on. That the article is not written for machine learning practitioners makes it even more misleading.
I am not a fan of this article. I have seen many critiques in this vein, this is just another car in the train. None of them have quite reached the point of confronting what is bothering many: What will we do when machine learning (or science, or anything, really) comes to a conclusion we find unpalatable, for whatever reason?
It could be any conclusion, not just those conservatives dislike. Using myself as a target, what if we eventually have enough sampling data to show that people of Irish extraction are more prone to alcoholism, and people of Scottish extraction tend to be statistically more thrifty? (This suggests that I would be a cheap drunk). How will we cope with that?
It is true that ML is prone to some black-boxiness, but the same could be said of any statistical extraction. We might very well use other methods besides ML to confirm the correlates once they are suggested.
Will we simply put in a hard override to get the answers we want to get, the answers we find comfortable? History shows we have seen whole governments subscribe to this idea before. Ignore the results, publish what pleases.
I've no easy answers here, but my guess is that history will repeat itself.
> Will we simply put in a hard override to get the answers we want to get, the answers we find comfortable
Yep. That's already happening. See Google's papers on "debiasing AI" (if you look at their example goals, they mean biasing it away from what it learned and towards what Googlers wish was true).
> What will we do when machine learning (or science, or anything, really) comes to a conclusion we find unpalatable, for whatever reason?
What makes you think it hasn't already happened? Numerous times?
Biological differences between sexes? IQ differences between differentiable subsets of people? The gender equality paradox?
We already know what will happen when findings are unpalatable. We'll think of ways to explain why they don't matter and can't possibly be right, or that we weren't measuring the right thing in the first place. And these are just the more obvious cases!
>> Empiricism-washing is the top ideological dirty trick of technocrats everywhere: they assert that the data “doesn’t lie,” and thus all policy prescriptions based on data can be divorced from “politics” and relegated to the realm of “evidence.”
Well, data doesn't lie, because data doesn't _say_ anything. People interpret data, and they do so according to their own inherent biases. And if the data is already biased (i.e. gathered according to people's biases), its interpretations end up far, far away from any objective truth.
[+] [-] jonnypotty|6 years ago|reply
The objection seems to be based on the falasy that technological progress equals social or political "progress". Why on earth would we expect AI descision making to display a lack of prejudice when human decision making is suffused with it.
The only people who expect technology to act like a benevolent god are the ones who have replaced their god with it. All technological progress does is to increase the power and influence of human beings. The progress the writer seems to want is socio- political, not technological.
[+] [-] MaxBarraclough|6 years ago|reply
Particularly when we consider what we mean by prejudice, which is presumably something like Making a decision on grounds which we deem it important to ignore. This is a very complex concept. It's a function of society, and changes with society. It's not something with a rigorous definition.
Obvious example: reasonable modern people know it's indefensible to make an engineering hiring decision on the grounds of ethnicity, regardless of whether there are any correlations associated with ethnicity. This is even enshrined in law in many countries.
To make a decision on the grounds of someone's qualifications and job experience, however, does not count as prejudice.
We should expect a machine learning system to act as a correlation-seeker (that is after all what it is designed to do), without a nuanced understanding of what prejudice means.
We've seen this issue crop up in the context of an AI having a say in parole decisions. [0] also relevant discussion at [1]
[0] https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-...
[1] https://news.ycombinator.com/item?id=13156153
[+] [-] rtikulit|6 years ago|reply
It is patently an act of incredible destruction to increasingly, globally and inescapably “control” important domains of human lives, relationships and society with extremely superficial centralized mathematical models of what humanity is.
There will be unintended consequences. They will be horrifying, and we may not even grasp what we've lost.
[+] [-] YeGoblynQueenne|6 years ago|reply
According to the article, that is how AI decision making is presented by "technocrats everywhere". For instance:
Empiricism-washing is the top ideological dirty trick of technocrats everywhere: they assert that the data “doesn’t lie,” and thus all policy prescriptions based on data can be divorced from “politics” and relegated to the realm of “evidence.”
The article is presenting, and supporting, the opposite opinion to that of "technocrats everywhere".
[+] [-] TazeTSchnitzel|6 years ago|reply
[+] [-] masswerk|6 years ago|reply
The ultimate goal of teaching, however, is teaching how we arrived at a given behavior. And amongst the most prominent themes in this is dealing with new situations and adapting to change. Teaching is not about imitation, it's about transcending the example.
[+] [-] quotemstr|6 years ago|reply
[+] [-] reilly3000|6 years ago|reply
[+] [-] TeMPOraL|6 years ago|reply
And, from what I remember from my control theory classes, the integral part of a controller introduces lag, inertia, generally making the output more resistant to input changes.
(Also note that the "non-conservative" Proportional and Derivative components in a PID by definition don't learn - they react to input and its change.)
[+] [-] TazeTSchnitzel|6 years ago|reply
[+] [-] Huycfhct|6 years ago|reply
[+] [-] knzhou|6 years ago|reply
[+] [-] krick|6 years ago|reply
What I want to point out is that nothing he says should be attributed specifically to "machine learning". Machine learning is a set of techniques to make inferences from data automatically, but there is no implicit restriction on what the inferences should be. So machine learning is not "conservative" — almost all popular applications of it are. There is no inherent reason why an ML-algorithm should suggest the most similar videos to the ones you watched recently. The same way you can use (say) association learning to find most common item sets, you can use it to find the least common item sets with the given item, and recommend them instead. Or anything in between. But application designers usually choose the less creative option to recommend (understandably so) stuff similar to what you already got.
Sometimes it's ok: if the most popular thing to ask somebody to sit on nowadays is "my face" it's only logical to advice that, I see nothing wrong with this application. But many internet shops indeed could benefit from considering what a user has already bought (from this very shop), because it isn't likely he will want to buy a second similar, but different TV anytime soon. Or, when recommending a movie, you could try to optimize for something different than "the most popular stuff watched by people that watched a lot of stuff you did watch" — which is a "safe" thing to recommend, but at the same time not really interesting. Of course, finding another sensible approach is a lot of work, but it doesn't mean there isn't one: maybe it could be "a movie with unusually high score given by somebody, who also highly rated a number of movies you rated higher than the average".
[+] [-] ramraj07|6 years ago|reply
>Nor is machine learning likely to produce a reliable method of inferring intention: it’s a bedrock of anthropology that intention is unknowable without dialogue. As Cliff Geertz points out in his seminal 1973 essay, “Thick Description,” you cannot distinguish a “wink” (which means something) from a “twitch” (a meaningless reflex) without asking the person you’re observing which one it was.
Was this mathematically proven? It's definitely an interesting statement, since a lot of "AI" systems try to predict intention and do a piss poor job of it, but to quote that the anthropological ancestors have proclaimed for eternity that a computer can never know even the slightest fraction of intention from just observation seems hypocritical.
[+] [-] daenz|6 years ago|reply
[+] [-] wsxcde|6 years ago|reply
So, how does this relate to your question? The point is that predictive policing is solving the wrong problem. What's needed are not more accurate neural nets predicting crime, but techniques for addressing the underly sociological factors that cause crime.
Taking a step back and speaking broadly, Cory's point is that the focus on data and quantitative analysis is causing problems in two ways: (i) people are using quantitative methods to solve the wrong problems, and (ii) they seem to be oblivious to (and in some cases actively hostile to acknowledging) the harms being perpetrated by their methods. Both of these problems seem to be driven by a lack of understanding of well understood (but non-quantitative) social science literature.
[+] [-] ben_w|6 years ago|reply
Here are three different examples that would meet your description, with very different answers:
1) The X community has exactly the same, or lower, crime rates than the entire nation. However, nationwide anti-X sentiment means that despite this, most convicted criminals are from the X community. This makes it look like the regions where the X community live are high-crime regions.
This is “accidentally racist”. Researchers know about this problem.
2) The X community is more prone to crime, and as this is purely an example, it just is and there’s no need to justify that.
Extra police in this scenario is not racist, though I suspect anyone who jumps right in and assumes it to be true about the world might well be racist themselves.
3) A confounding variable, such as income or education, means that members of the X community are more likely to commit crimes than the general population, albeit it at the same rate as the equivalent income/education/confounding sunset of the general population.
In this case, while it would not be racist to send in more police, it would totally be racist if politicians applied unequal effort to solve the confounding variable within the general population as compared specifically to group X.
[+] [-] goto11|6 years ago|reply
Stopping cars just because of the skin color of the driver is not "perceived as racist", it is racist, since you are treating people worse due to their race.
[+] [-] UncleMeat|6 years ago|reply
[+] [-] Reelin|6 years ago|reply
Regarding ML based predictive policing specifically, it seems like a spectacularly bad (but efficient and statistically effective!) idea to naively apply a machine learning based classification approach to such a problem.
[+] [-] xpe|6 years ago|reply
However, I would refine the claim "Machine learning is fundamentally conservative" to say "Reducing a distribution of predictions to only the most likely prediction is conservative."
[+] [-] kitsuac|6 years ago|reply
More creativity, and ability to escape local minima, but at some cost when dealing with 'typical' cases and when making particularly damaging mispredictions.
[+] [-] hprotagonist|6 years ago|reply
ML Algorithm: yes!!
[+] [-] hacknat|6 years ago|reply
For example, people throw around the low labor productivity stat all the time to prove that no automation is happening, not realizing that GDP is the numerator of that stat. Well, GDP is a pretty terrible gauge of who is benefitting from technological innovation, as it is distributed incredibly unevenly, probably more so than income even. The problem with automation isn’t that it’s not happening, it’s that only a very small number of people are capturing the wealth that it is generating. Also, it is generating a small number of professional jobs, but mostly the jobs it is generating suck (Uber driver, AMZN warehouse worker, etc).
[+] [-] alexmingoia|6 years ago|reply
In other words, a worker is richer if they can buy more and better stuff even without any rise in their wage.
The fact that everyone can afford a smartphone, including those with jobs that “suck”, does not reconcile with “only a very small number of people are capturing the wealth that it is generating.“
[+] [-] HSO|6 years ago|reply
In the 1990s and early 00s, automation happened mostly in manufacturing. The low productivity growth stats that stupid (mainstream) "economists" like Krugman cite to argue against people like Yang or Musk measure all labor. But manufacturing makes up less than 1/5 of the US economy. If you look at the labor productivity in the manufacturing sector alone, it's congruent with the automation argument [1].
Productivity numbers are lagging because investments in new tech need time to bear fruit. Automation of the service sector has already started and will show in the numbers soon enough. Given that the service sector is four times the size of manufacturing in the US, it will be even better for GDP and even worse for people if again nothing is done and people keep listening to those intellectual frauds who call themselves "economists."
_______________________
[1] https://www.bls.gov/lpc/prodybar.htm
[+] [-] plmu|6 years ago|reply
Clearly, if the latter happens, and we are on that path right now, we are bound for a dystopian future.
Old economic models don't work anymore; we need radical changes. If we don't plan them, they will come in the form of either revolution or brutal repression (a la China).
[+] [-] rsync|6 years ago|reply
However, I do wish we would not use the word "conservative" as an epithet. I think it's quite likely that "conservative" is exactly what we should be looking for in algorithms and prediction engines.
The fact that their properties are misused and infused with reactionary (not necessarily conservative) biases by humans should not make us attach morally negative properties to being conservative.
FWIW, my conservatism leads me to be suspicious of employing these tools in the first place ...
[+] [-] azinman2|6 years ago|reply
I don’t understand what he’s arguing for, exactly. That we should suddenly have AGI that understands intent and disentangles causation from correlation? I don’t think anyone in the machine learning community would argue against that, but the question is how.
What’s new here?
[+] [-] onion2k|6 years ago|reply
Cory is trying to inform people that AI and ML aren't a magical solution to all our wasteful systems because in some cases we need to throw away the system and start again, and AI can't tell us that. It can only tell us the best way to run the current system. That's what he means when he says AI is conservative.
It's not surprising at all if you understand AI, but most non-tech people don't.
[+] [-] naveen99|6 years ago|reply
You can always train a model to reject a class just as easily as you can train it to accept one. So train it to reject a common class and accept a mutant, and it will function more like an evolutionary algorithm that protects against bad luck, like bacteria surviving antibiotics.
I am way more optimistic for AI I guess.
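A toy version of this idea (entirely made-up data; plain logistic regression with per-sample weights standing in for "train it to reject the common class as readily as it accepts the mutant"):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a common class around 0 and a rare "mutant" class around 3.
common = rng.normal(0.0, 1.0, size=950)
mutant = rng.normal(3.0, 1.0, size=50)
x = np.concatenate([common, mutant])
y = np.concatenate([np.zeros(950), np.ones(50)])  # 1 = the mutant we accept

# Upweight the rare class so rejecting the common class matters as much
# as accepting the mutants.
w_sample = np.where(y == 1, 950 / 50, 1.0)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(x * w + b)))   # predicted P(mutant)
    grad = (p - y) * w_sample                # weighted logistic gradient
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

# Recall on the rare class: how many mutants does the model accept?
pred = 1.0 / (1.0 + np.exp(-(x * w + b))) > 0.5
recall = (pred & (y == 1)).sum() / 50
print(recall)
```

With the weights, the learned boundary sits between the two clusters instead of hugging the majority, so most mutants survive the filter.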
[+] [-] gyulai|6 years ago|reply
Furthermore, there is a psychological element at play around mirroring back your own intelligence at you (see Eliza / Rogerian Psychotherapy) in a way that will lead you to new thought.
[+] [-] AndrewKemendo|6 years ago|reply
I see this and other examples of "explainability" cited from time to time as proof that humans are not a "Black Box."
However it rests on two faulty assumptions.
1. The explanation will be truthful
2. The explainer can always reliably describe the actual cause of their own actions
For the purposes of theory, you could explain away #1. But a minute of introspection will make you realize that you would fail at #2 the vast majority of the time, using storytelling and retroactive explanations to justify your behavior.
[+] [-] anjc|6 years ago|reply
> ...
> This is what makes machine learning so toxic.
I'm saddened to learn about our toxic contributions to society and can't wait to hear about alternative mind-reading approaches for fridge recommendation in the next article.
Words like 'conservative' and 'toxic' are misleading because they imply that there are better alternatives that are not being chosen. Far better are the terms used by commenters here, such as 'descriptive'. That the article is not written for machine learning practitioners makes it even more misleading.
[+] [-] at_a_remove|6 years ago|reply
It could be any conclusion, not just those conservatives dislike. Using myself as a target, what if we eventually have enough sampling data to show that people of Irish extraction are more prone to alcoholism, and people of Scottish extraction tend to be statistically more thrifty? (This suggests that I would be a cheap drunk). How will we cope with that?
It is true, ML is prone to some black-boxiness, but it could be any statistical extraction. We might very well use other methods besides ML to show the correlates once suggested.
Will we simply put in a hard override to get the answers we want to get, the answers we find comfortable? History shows we have seen whole governments subscribe to this idea before. Ignore the results, publish what pleases.
I've no easy answers here, but my guess is that history will repeat itself.
[+] [-] thu2111|6 years ago|reply
Yep. That's already happening. See Google's papers on "debiasing AI" (if you look at their example goals, they mean biasing it away from what it learned and towards what Googlers wish was true).
[+] [-] moduspol|6 years ago|reply
What makes you think it hasn't already happened? Numerous times?
Biological differences between sexes? IQ differences between differentiable subsets of people? The gender equality paradox?
We already know what will happen when findings are unpalatable. We'll think of ways to explain why they don't matter and can't possibly be right, or that we weren't measuring the right thing in the first place. And these are just the more obvious cases!
[+] [-] YeGoblynQueenne|6 years ago|reply
Well, data doesn't lie, because data doesn't _say_ anything. People interpret data, and they do so according to their own inherent biases. And if the data itself is biased (i.e. gathered according to people's biases), its interpretations end up far, far away from any objective truth.
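This is easy to demonstrate: simulate two groups with identical offense rates but unequal observation rates, and the recorded data alone makes one group look worse (all rates below are hypothetical):

```python
import random

random.seed(0)

OFFENSE_RATE = 0.05             # identical for both groups, by construction
PATROL = {"A": 0.2, "B": 0.6}   # group B is watched three times as closely

def recorded_arrests(group, n):
    arrests = 0
    for _ in range(n):
        offended = random.random() < OFFENSE_RATE
        observed = random.random() < PATROL[group]
        arrests += offended and observed
    return arrests

a = recorded_arrests("A", 50_000)   # expectation: 0.05 * 0.2 * 50000 = 500
b = recorded_arrests("B", 50_000)   # expectation: 0.05 * 0.6 * 50000 = 1500

# The arrest data "says" group B offends about three times as often. The
# underlying behaviour is identical; only the measurement differs. A model
# trained on arrests inherits the patrol pattern, not the truth.
print(a, b)
```

The dataset faithfully records what was observed, and what was observed was already a function of who was being watched.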