- there is no explanation of what the "AI philosophy" or "AI way of thinking" is according to the authors
- according to the authors this undefined philosophy has caused economic and social damage
- the authors propose "focusing on the human" as an alternative but the two aren't mutually exclusive at all
It's just a cheap formula - keep the target vague, blame the vague target for real problems without proving causation, propose an alternative that's good on its own but doesn't contradict any real form of the vaguely defined target.
I haven't read a lot of Wired but this just comes across as poorly thought out at best. Or worse, it's just manipulative.
There's a famous paper by Dijkstra where he claims that using anthropomorphic terms for computers is a sign of immaturity of the discipline. I used to think that was a bit extreme, but the more people keep talking about fucking AI, the more I'm convinced he was actually right.
Machine learning is linear algebra, nothing more, nothing less. Making a model, using it wrong, and then complaining that "the model failed" is a unique kind of stupidity that's becoming more and more popular with the "hoi polloi" due to garbage articles like this.
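To make the "it's linear algebra" point concrete, here is a toy sketch (my own illustration, not from the article or the comment): a single "neural network layer" is just a matrix multiply, a vector add, and an elementwise clamp.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "neural network layer" is just y = f(Wx + b): linear algebra plus
# an elementwise nonlinearity. Nothing anthropomorphic is happening.
W = rng.standard_normal((3, 4))   # weights: 4 inputs -> 3 outputs
b = rng.standard_normal(3)        # biases
x = rng.standard_normal(4)        # one input vector

y = np.maximum(W @ x + b, 0.0)    # ReLU(Wx + b)

print(y.shape)  # (3,)
```

Training is more of the same: adjusting W and b to reduce a numeric error, with no understanding involved at any step.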
> "AI" is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity.
On economic and social damage: The article mentions that less than 10 percent of the US workforce is counted as employed by the technology sector, that essential contributions of others aren't counted as "work" or compensated, that this has "hollowed out" the economy and contributed to the concentration of wealth in an elite, and that this has contributed to concentrating power, as well.
On mutual exclusivity: The article proposes paying people for contributions, rather than writing them off as non-work. It also mentions humans with "AI resources" outperforming AI alone.
> - there is no explanation of what the "AI philosophy" or "AI way of thinking" is according to the authors
I think this point was implied here: "A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There's always a second way to conceive of any situation in which AI is purported. "
It's basically like saying it's all really just "curve-fitting", a mathematical tool which requires smart mathematicians and programmers to implement successfully, not anything genuinely intelligent about the software.
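The "curve-fitting" framing can be shown in a few lines (my own toy example, not from the thread): fitting a polynomial to data is a plain least-squares solve, i.e. linear algebra.

```python
import numpy as np

# "Curve-fitting": find polynomial coefficients minimizing squared error.
# This is an ordinary least-squares problem, with nothing intelligent
# about the software itself.
x = np.linspace(0, 1, 20)
y = 3 * x**2 - 2 * x + 1           # data from a known quadratic

A = np.vander(x, 3)                # columns: x^2, x, 1
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.round(coeffs, 6))         # recovers [3, -2, 1]
```

The people who chose the model family, gathered the data, and judged the fit did all the intelligent work.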
They didn't make the point very well, but I think they may have been trying to argue that algorithms, while pretending to be objective, have the biases of their creators built in. The ideology they are critical of being that algorithmic decision making is the best or most objective form of decision making.
I strongly suspect that the author of this article does not have much background or interest in AI, and is merely part of a push to politicize it.
I have spoken to people who are adamant that AI is racist and needs to be forced to change, but seem at a loss to go into even layman levels of technical details.
So I suspect that they are just parroting a received opinion.
To be clear, of course I know that there are all sorts of systematic forms of unfairness. But I think that AI is just an implementation of systems.
The AI umbrella extends as far as what are essentially spreadsheets. Are spreadsheets unfair, or are the policies unfair? Does it make sense to lobby Microsoft to do something about predatory lenders using Excel?
> According to the authors this undefined philosophy has caused economic and social damage
> The authors propose "focusing on the human" as an alternative but the two aren't mutually exclusive at all
Yes, it's rather vague.
What seems to be annoying the original author is simple. Most of the areas where machine learning is currently deployed are somewhat obnoxious. Ad targeting. Face recognition. Behavior recognition. Those are areas where some error is expected, so mediocre ML performance is acceptable. ML isn't yet good enough to drive, for example. That has a lower tolerance for errors.
All this is "focusing on the human". In the sense of "Big Brother is Watching You". Be careful of what you ask for.
The AI way of thinking is: AI is better than humans, and it will replace our labor and our decisions. Therefore we must focus on and invest in AI, as well as fear it, because it is taking over.
The damage done is obvious. We discount human agency and human labor in the face of this idea. Products are being sold based on this idea. Foreign and domestic policy, security and privacy, and private and public funding are being redirected by this idea.
If one were to simply replace "AI" with "machine learning", which is the name of the actual technology in most of these cases, in most of these contexts the hype and ideology would go away. And with them would go the attention and the money. We would then be able to focus on better things.
>- there is no explanation of what the "AI philosophy" or "AI way of thinking" is according to the authors
"The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity."
>“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity. Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.
Jaron Lanier is a frustrating thinker. In general he's a pessimist and highly critical of AI and technology. I was at a talk of his and asked him about open source AI models and suggestions for ethical frameworks to guide AI research. He gave a rambling answer about how we need to pay all the human annotators ever involved in producing a model, and that AI is fundamentally anti-human. His justification was an odd anecdote about an elementary student asking what's the point of human life if robots will do everything in the future. Jaron doesn't provide a meaningful way to engage with his critique, and it seems the logical conclusion of his view is that we abandon technology altogether.
I do agree with the underlying claim that modern AI research erases the human effort (everything from Mturkers to exploited labor) in producing the annotations that most AI systems rely on. It's also fair to critique the proliferation of surveillance states and authoritarian governments that build upon and fund current AI research.
There isn't good ethical guidance or a framework to help researchers navigate doing research and understand the implications of their work. I'd prefer critiques of AI help guide some sort of ethical framework for understanding, developing, and deploying these technologies responsibly. I don't think we can just stick our heads in the sand and pretend the problem goes away, or that abandoning AI in liberal democracies somehow stops authoritarian regimes from building even worse things.
The usual narrative goes like this: Without the constraints on data collection that liberal democracies impose and with the capacity to centrally direct greater resource allocation, the Chinese will outstrip the West.
In surveillance, yes. But that's not where China's productivity comes from. China does have some centrally directed resource allocation, but it's at a very coarse level. See "Made in China 2025".[1] That's industrial policy. Japan became an industrial power in the 1970s and the 1980s through good industrial policy, run by the Ministry of International Trade and Industry. It's a help when you're developing a country, if done well. Most of the "Asian tigers" did that. Done badly, it's a disaster.
What seems to make China go now is a large number of medium-sized companies aggressively competing. Like the US up to the 1980s.
> Jaron doesn't provide a meaningful way to engage with his critique and it seems the logical conclusion of his view is that we abandon technology all together.
While it may appear that way, I wonder whether he might be speaking past many people involved in applying ML on human data.
As a starting point for establishing clarity, do you recognize/understand that one of the core ideas of his theme is human agency? And what he means by “humanity” (our sense of purpose/meaning tied closely with competence & judgement) and why he fears it might be overwhelmed by AI? Since human nature is “reflexive”, being infantilized or treated with certain biases by algorithms will push humans to become like that. The technical way to phrase this is that statistical modeling assumes static distributions, but the actual distribution of human behavior responds/adapts to these assumed models (“distribution shifts”). Pause and think about that for a moment.
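The reflexivity point can be simulated in a few lines. This is my own toy model, not anything from Lanier's talk: the score distribution, threshold, and "gaming" behavior are all invented for illustration. A model is calibrated against a static score distribution, but people just below the cutoff adapt their behavior to the decision rule, so the distribution the model assumed no longer holds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of scores the model was calibrated on.
scores = rng.normal(0.0, 1.0, 10_000)    # assumed static distribution
threshold = 1.0                           # model accepts scores > 1.0

accepted_before = (scores > threshold).mean()

# "Reflexive" humans just below the cutoff game their score upward,
# shifting the distribution the model assumed was fixed.
gamed = np.where((scores > 0.5) & (scores <= threshold),
                 threshold + 0.01, scores)

accepted_after = (gamed > threshold).mean()

# The acceptance rate the model was calibrated for no longer holds.
print(accepted_before < accepted_after)  # True
```

This is the "distribution shift" in miniature: the act of deploying the model changes the distribution it models.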
Eg, the question he discusses in the talk you link to: if AI could (someday) do everything (some very broad range of things), then what is the point of human life?
If your answers are going to emphasize convenience & improvements and opportunities for better consumption, then you are ignoring the fundamental premise of the question. He’s pointing out a perspective that is deeply at odds with the assumptions which drive today’s computing/ML related industry. Are you saying you’d prefer he stops asking inconvenient questions?
> He gave a rambling answer about how we need to pay all the humans involved in producing that model (even those that are consumed as open-source models) and that AI is fundamentally anti-human.
I do think this raises a very valid point in terms of intellectual property. If I train an AI to paint in the style of an artist whose art I scraped, does that artist have any claim as to its licensing? If not, should they not?
I do like how his proposed solution does not even remotely solve what he thinks the problem is.
Say you pay image labelers $100k a year. Unfortunately, as soon as they produce a finished model, that algorithm replaces them, permanently.
If a model is profitable to create, it produces structural unemployment. If it's not profitable, then those jobs continue to exist. The only way his proposal functions is if it's a stealth ban on AI. There's no sustainable way for an image labeler to have anything like a middle class income for more than a few months.
And his proposal has to apply... globally? How much does Tencent pay its image labelers? I can't imagine the PRC version of mturk is any kinder and gentler.
The hostility here to this article is interesting. To me, a reasonable interpretation is that it criticizes the way we understand/conceptualize some areas of modern technology. Indeed, if for some legal or practical reason research labs wouldn't have access to the data generated by the public (i.e. humans), many breakthroughs wouldn't happen. This is the other side to emphasizing the progress in algorithms (which is of course hard to deny).
You can't fully extricate technology from the societies where it exists, and things like naming, branding, institutions, and of course ideologies. There is a tendency to treat some areas of technology like some kinds of ancient gods or idols that have their own "needs" and "mandates" to force on their environment no matter what.
I happen to like[1] some of their political proposals, like forcing the barter of "data for services" into some fully disclosed monetary form. Also, coming up with some method of using these technologies that is compatible with individual freedom is a big concern. Even from a purely practical standpoint, people trying and doing what they want has obvious value compared to everything having to be accepted by some (always to some extent self-interested and narrow-minded) authority. Shifting the language to talking about how we can enable and shape "AIs" as humans and societies seems reasonable: it emphasizes that we have natural agency in all this.
[1] Liking doesn't necessarily mean supporting yet.
AI as “ideology” may be a stretch, but it tends there with all the mythos behind a word like “artificial intelligence”.
Naming things matters.
While the head of Apple’s ML/AI strategy likes “machine intelligence” [1], I prefer “machine learning”.
“Intelligence” is a loaded word. I think (hope) that “learning” can be understood as narrowly and crudely as the field actually requires.
A more sober term would help the entire world manage this better on average, IMO, than if we use the unnecessarily scary and confusing term “artificial intelligence”.
[1] “For this reason and others, many AI experts (Giannandrea included) have suggested alternative terms like "machine intelligence" that don't draw parallels to human intelligence.”
> this gets little attention from investors who believe “AI is the future,” encouraging further automation. This has contributed to the hollowing out of the economy
Automation == "hollowing out of the economy" does not follow. Companies can use automation to grow their output while keeping their workforce intact by taking on new projects that were not previously possible.
If automation were that dangerous, then being a software developer should be the worst job because it automates itself away. But no, in reality we do more and still need even more devs.
On the other hand, not automating is wasteful and a future time bomb. This anti-automation rhetoric is like children hating school while their parents force them to attend - preferring short term ease that comes with a much bigger long term penalty (temporal discounting at work).
Your argument doesn’t follow. How would being a software developer be the worst job because they “automate their own work away” when in fact they’d have a job until they automated everyone’s work away? It’s likely a given developer would reach retirement before that happens. Meanwhile tons of other people are out of a job.
Has anyone actually convincingly argued that automation reduces the net number of jobs over time? My impression is that this belief entails a kind of parochialism about job loss, i.e., that it merely tracks the specific kinds or manifestations of jobs that automation has rendered obsolete while ignoring either the fact that particular occupations remain but now make use of new, more sophisticated methods, or the new jobs created by the needs and complexities introduced by new technologies.
Think of our ancestors. How many occupations were there? Arguably fewer than we have today. Certainly, your great grandmother had to wash the dishes herself, but we didn't have factory workers at dishwasher manufacturing plants, dishwasher repairmen, dishwasher dealers, and so on. With the introduction of the dishwasher, there is a reduced need for human dishwashers, but the technology also introduced a whole new industry in its place to support it.
I almost want to say that some law of conservation or even entropy is observed. We are exchanging one kind of burden for another or potentially many.
> Companies can use automation to grow their output while keeping their workforce intact by taking on new projects that were not previously possible.
This idea does not scale. It assumes that there is no upper limit for demand/production, and assumes that a business can suddenly gain the expertise to enter new markets to sell new products. It also assumes that investors would prefer risky reinvestment strategies to cutting costs by reducing labor and increasing short term profits.
> If automation were that dangerous, then being a software developer should be the worst job because it automates itself away
Automation goes after the easiest targets first. IT is still growing because it is an immature profession, and a platform for automation itself. I remember IT in the late 90s - basically anyone who could operate a computer could get a job doing installs. In fact my first job was adding Trumpet Winsock to machines that didn't have any built-in way to access the internet. A few years later you could still make decent money installing and configuring operating systems. Those jobs are long gone and replaced by SCCM and other similar tools. They have been replaced by jobs that require much more technical skill. Entire teams are replaced every day by outsourcing to automation platforms like AWS, because those platforms can provide the same services with fewer employees.
A better example is farming or manufacturing. Take a look at productivity versus employment since 1900.[1]
> This anti-automation rhetoric is like children hating school while their parents force them to attend - preferring short term ease that comes with a much bigger long term penalty (temporal discounting at work).
No, it's a pretty straightforward recognition that retraining even a few percent of any given workforce every year is going to lead to massive inequality and social problems, especially when there is no infrastructure to provide for living costs while a worker gets retrained. A manufacturing worker can be retrained to a better job, but how long do you think it would take to educate that person so they could design and maintain the machines that replace them? How much would that cost? Who is going to pay for it? And if demand for the product is flat, does it make any sense?
It's also a recognition that we are nearing peak production of practically everything. Developed nations are leveling off and declining in population. Around a third of our food is discarded. Ride-sharing is reducing demand for cars. E-commerce is able to offer discounts because it uses less labor. We are now at the point where major technology vendors have resorted to designing addictive experiences to compete for attention. What's left after that is saturated? I'm almost afraid to ask.
Automation doesn't hollow out the "economy", but it certainly can contribute to wealth inequality by reducing the number of employed humans required to generate value. At the extreme it could permanently upend the concept of full employment in the economy.
Machine Learning algorithms and Artificial Neural Networks are still mostly adequate terms for what's going on in the industry; thus abusing the term "AI" (artificial intelligence) makes it just marketing BS.
> Machine Learning algorithms and Artificial Neural Networks are still mostly adequate terms for what's going on in the industry; thus abusing the term "AI" (artificial intelligence) makes it just marketing BS.
It's true that the terms ML and ANN predate the current hype but they were introduced/kept in use for the same reason: talking in mathematical terms does not excite research grant decision makers or business customers. Neural is a buzzword, it's easy to interpret for laypeople. If you talk about Latent Dirichlet Allocation, Support Vector Regression, Projection Pursuit, Principal Component Analysis, Reproducing Kernel Hilbert Spaces, Reverse-Mode Automatic Differentiation etc etc, then people yawn.
Why do we call linear programming "programming"? It has nothing to do with machine instructions. The answer: the name was made up for the hype and for securing research grants, when math funding was dry, but CS research was popular.
Why call "dynamic programming" like that? What's dynamic about it? Because the inventor wanted a name nobody can object to and sounds buzzy enough.
People try to name things in sexy ways to gain an edge.
Recently the government of my country stated that "LGBT aren't people, they are an ideology" and used that as an argument to ban LGBT demonstrations, and to introduce local laws in some regions where they hold a majority, creating so-called "LGBT-free zones" where "propagation of LGBT ideology" is banned or at least cannot have institutional support (like other forms of political activity enjoy).
This is obviously not as bad (because AI isn't people (yet?)), but it follows a similar pattern of calling something "an ideology" to show it in a bad light and draw absurd conclusions.
If you look that hard everything is an ideology. Stop playing with definitions and just say what you wanted to say in the first place.
For a long time, "AI" meant "unsolved hard problem" to hackers like us. Speech synthesis and recognition was AI. Text recognition was AI. Some compiler optimization was AI. Search was AI. Now, those things are, well, those things. They work.
Things are a little different now. The feasibility of neural networks trained up on vast data sets means we have non-human systems with hunches. That is, we have AIs capable of delivering results without explanations.
Take a look at Neal Stephenson's recent "Fall" for a social network that uses AI to generate "news" stories where the training metric is engagement. The consequence is the construction of an alternate social universe, and the kind of dystopia only Stephenson can dream up. https://www.worldcat.org/title/fall-or-dodge-in-hell/oclc/11...
Human hunches are often accompanied by a sense of ethics. The agricultural savant who sexes hatchling baby chickens knows the success of the farm depends on a low error rate. The judge sentencing a culprit knows the consequence of making mistakes.
I wonder how hard it would be to add a sense of ethics to neural network results? Maybe it's just a matter of managing the error rates. But this article suggests otherwise.
AI is a blanket term or a bucket term. It can mean Artificial General Intelligence - AGI or Artificial Narrow Intelligence - ANI. In my experience and opinion, as soon as something is well defined it becomes ANI - Vision systems, Natural Language Processing, Machine Learning, Neural Networks - it loses the name "AI" and gets its own specific name.
How AI is talked about depends on the audience. Nowadays, no one really thinks of rule based engines or scripts as AI, neither as ANI nor AGI, but back in the day people thought maybe you could just have enough rules to replicate intelligence. So old ANI is just "an algorithm"
To researchers, as soon as an approach is well defined it becomes ANI; at least so far. In future there may be an approach to AI that becomes AGI, but ANI is not considered AI to most researchers.
For industry, AI generally denotes a handful of ANI approaches - Machine Learning and Neural Nets. If you say "AI" in a corporate environment, this is what people will think of.
I personally think of AI more as computer-assisted cultural production. The contents of these productions are numerical, quantitative stuff, so it looks like the result of a pure intelligence. But if you take some distance from what is actually done with AI, you can realize that AI is mostly the development of our current cultural environment. So AI is not ideology, but our current ideology lets us see "AI" as a pure intelligence rather than an automated cultural production. What has to be questioned is why AI-produced stuff is seen as pure intelligence in our cultural context. What is called AI is "just" a cultural fact (but a fascinating one!).
This is a full reversion of where we were at the start of the century. Back then, the AI paradox was in full effect - if no reliable implementation had been found, it was AI. If you could just A* your way through a maze, it wasn't intelligent routefinding anymore, it was just a simple algorithm.
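The "just a simple algorithm" point is easy to demonstrate. Here is a minimal A* sketch of my own (the grid and coordinates are invented for illustration): the routefinding that once counted as "AI" is about twenty lines of bookkeeping.

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of 0 (open) / 1 (wall); returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]          # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                            # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                 # goal unreachable

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (0, 2)))  # 6
```

Once you can read the code, the mystique evaporates; that is exactly the AI-paradox pattern described above.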
The change is mostly in terms of money. The promise of AI has become a draw for investment rather than a repellent. Additionally, there is an interest in turning the mountains of surveillance data that exist now from a situationally useful tool into a tradable commodity.
There are private intelligence agencies, like Google and Facebook, that make money from extracting all the information they can about individuals in society.
There is a huge interest on their part in ending privacy restrictions, of course. There is lots of money too, and people in charge of governments are interested in using private security agencies like public security agencies, because the private sector is usually way more efficient than the public one. That also applies to spies.
This is prone to abuse, and society should react to protect itself. You can easily create a "dictatorship in practice" from a democracy with it.
Now, AI is just the technology that makes global surveillance possible; machines are cheap and easy to control. The alternative, humans, is expensive and prone to whistleblowers denouncing your abuses.
But AI is not an ideology, it is just a set of techniques, and you can use it for very good things, like counting blood cells, driving cars, or watching your house while you are away.
It is a good thing that it exists, only it must be controlled by the users and not be the controller itself.
FWIW, when referring to specifics, I'd be more comfortable with "techniques" instead of "technology". So a book title could be "Weaponizing AI Techniques to Maximize Dopamine Addiction via Social Mediums" instead of "Recognize Kitten Pictures With Awesome AI Technology for the LOLs".
Of course Jaron Lanier is coauthor. Frequent go-to mildly provocative but still hip contrarian for Wired, the historically full-throated Big Tech apologist, booster, and primary paid placement press release conduit.
> the adornment and “deepfake” transformation of the human face, now common on social media platforms like Snapchat and Instagram, was introduced in a startup sold to Google by one of the authors;
Uhh, excuse me, but deepfakes were not introduced at this company. It was a mobile QR and image recognition technology and that is all. I should know, because I was the lead tester of this technology at Neven Vision. They must be talking about DeepDream, a Google project by the company founder many years later.
AI is a buzzword. A fresh label slapped on old technology that has become much more usable in recent years and allows us to tell apart cats and dogs with 98.6% accuracy.
In a nutshell, the term "AI" refers to any situation when software is contrived to massage data in some complicated way that produces useful answers, without any clear, rational explanation why.
(The associated "ideology" consists mainly of the position that this is just terrific, and we should be doing more of it.)
Broadly understood, this thesis could be properly developed: AI approaches are based on aggregated data, not the particular user, and will, in a conflict of interests, always go against the particular user. There is a lot of ideology present in the way platforms operate, but mostly it's there to embellish the fact that they are profit driven.
There isn't as much difference between the West and East as one might imagine. Westerners claim they want privacy, but see no problem giving it away for shinier toys. If you want privacy, it can be had, both in the East and West, you simply have to pay for it. Or learn how technology works and prioritize it. I moved off of Windows decades ago, OSX years ago, and am slowly tightening up all aspects of my digital life, for power, convenience, and yes, privacy.
Privacy is a luxury good like everything else humans want. Available for the masses only if you're willing to spend the time and energy learning how to DIY, or can just be bought if you're rich.
It was a weird experience today watching Apple's announcement video. Everything there is designed to keep you locked into their platform, and it was amusing when I saw how machine learning on your phone was given such significant space. They didn't even try to claim that this would help out ordinary people, jumping immediately to reducing costs of very very deep-pocketed customers, medical device manufacturers. Oh but an AR bird will know where your hand is maybe a bit better too. :rolleyes:
I mean, that's Apple's market, along with "pro" users, with the HomePod tech being a sad afterthought, a bone tossed to the less-than-extremely-well-heeled users as an alternative to mass-market Amazon devices.
America has a cult of the big and flashy, and the engine runs on our data so companies can predict and influence the behavior of the masses. Predictable revenue flows are the name of the game. China's government just has an added incentive to bake it all right into the social fabric. That's all.
The holy grail of AI, its ultimate goal, is AGI, which is quite the opposite of human focus.
[+] [-] unknown|5 years ago|reply
[deleted]
[+] [-] gambler|5 years ago|reply
"The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity."
[+] [-] TurkishPoptart|5 years ago|reply
>“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity. Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.
[+] [-] dhairya|5 years ago|reply
I do agree with the underlying claim that modern AI research erases the human effort (everything from Mturkers to exploited labor) in producing the annotations that most AI systems rely on. It's also fair to critique the proliferation of surveillance states and authoritarian governments that build upon and fund current AI research.
There isn't good ethical guidance, and there aren't frameworks to help researchers navigate doing research and understand the implications of their work. I'd prefer critiques of AI help guide some sort of ethical framework for understanding, developing, and deploying these technologies responsibly. I don't think we can just stick our heads in the sand and pretend the problem will go away, or that abandoning AI in liberal democracies somehow stops authoritarian regimes from building even worse things.
[+] [-] Animats|5 years ago|reply
In surveillance, yes. But that's not where China's productivity comes from. China does have some centrally directed resource allocation, but it's at a very coarse level. See "Made in China 2025".[1] That's industrial policy. Japan became an industrial power in the 1970s and the 1980s through good industrial policy, run by the Ministry of Trade and Industry. It's a help when you're developing a country, if done well. Most of the "Asian tigers" did that. Done badly, it's a disaster.
What seems to make China go now is a large number of medium-sized companies aggressively competing. Like the US up to the 1980s.
[1] https://en.wikipedia.org/wiki/Made_in_China_2025
[+] [-] ssivark|5 years ago|reply
While it may appear that way, I wonder whether he might be speaking past many people involved in applying ML on human data.
As a starting point for establishing clarity, do you recognize/understand that one of the core ideas of his theme is human agency? And what he means by “humanity” (our sense of purpose/meaning tied closely with competence & judgement) and why he fears it might be overwhelmed by AI? Since human nature is “reflexive”, being infantilized or treated with certain biases by algorithms will push humans to become like that. The technical way to phrase this is that statistical modeling assumes static distributions, but the actual distribution of human behavior responds/adapts to these assumed models (“distribution shifts”). Pause and think about that for a moment.
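The "distribution shift" point can be sketched numerically (my own toy example, not from the talk): a model fit under one assumed-static distribution looks fine on data like its training data, and degrades once the distribution moves.

```python
import numpy as np

# The true relationship is y = x**2; we only ever observe x in [0, 1].
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 500)
y_train = x_train ** 2

# Fit a straight line: a decent local approximation, wrong globally.
coef = np.polyfit(x_train, y_train, deg=1)

def mse(x):
    """Mean squared error of the linear fit against the true y = x**2."""
    pred = np.polyval(coef, x)
    return np.mean((pred - x ** 2) ** 2)

err_same = mse(rng.uniform(0, 1, 500))     # same distribution: small error
err_shifted = mse(rng.uniform(2, 3, 500))  # shifted distribution: large error
print(err_same < err_shifted)  # True
```

The model didn't change; the world it was fit to did. When the "world" is human behavior reacting to the model, the shift is guaranteed.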
Eg, the question he discusses in the talk you link to: if AI could (someday) do everything (some very broad range of things), then what is the point of human life?
If your answers are going to emphasize convenience & improvements and opportunities for better consumption, then you are ignoring the fundamental premise of the question. He’s pointing out a perspective that is deeply at odds with the assumptions which drive today’s computing/ML related industry. Are you saying you’d prefer he stops asking inconvenient questions?
[+] [-] gridlockd|5 years ago|reply
I agree. Like Slavoj but without the Zizek.
> He gave a rambling answer about we need to pay all the human involved in producing that model (even those that are consumed as opensource models) and that AI is fundamentally anti-human.
I do think this raises a very valid point in terms of intellectual property. If I train an AI to paint in the style of an artist whose art I scraped, does that artist have any claim as to its licensing? If not, should they not?
[+] [-] sbierwagen|5 years ago|reply
Say you pay image labelers $100k a year. Unfortunately, as soon as they produce a finished model, that algorithm replaces them, permanently.
If a model is profitable to create, it produces structural unemployment. If it's not profitable, then those jobs continue to exist. The only way his proposal functions is if it's a stealth ban on AI. There's no sustainable way for an image labeler to have anything like a middle class income for more than a few months.
And his proposal has to apply... globally? How much does Tencent pay its image labelers? I can't imagine the PRC version of mturk is any kinder and gentler.
[+] [-] Kalium|5 years ago|reply
[+] [-] stereolambda|5 years ago|reply
You can't fully extricate technology from the societies where it exists, and things like naming, branding, institutions, and of course ideologies. There is a tendency to treat some areas of technology like some kinds of ancient gods or idols that have their own "needs" and "mandates" to force on their environment no matter what.
I happen to like[1] some of their political proposals, like forcing the "data for services" barter into some fully disclosed monetary form. Also, coming up with some method of using these technologies that is compatible with individual freedom is a big concern. Even from a purely practical standpoint, people trying and doing what they want has obvious value compared to everything having to be accepted by some (always to some extent self-interested and narrow-minded) authority. Shifting the language to talking about how we can enable and shape "AIs" as humans and societies seems reasonable: it emphasizes that we have natural agency in all this.
[1] Liking doesn't necessarily mean supporting yet.
[+] [-] antipaul|5 years ago|reply
Naming things matters.
While the head of Apple’s ML/AI strategy likes “machine intelligence” [1], I prefer “machine learning”.
“Intelligence” is a loaded word. I think (hope) that “learning” can be understood as narrowly and crudely as the field actually requires.
A more sober term would help the entire world manage this better on average, IMO, than if we use the unnecessarily scary and confusing term “artificial intelligence”.
[1] “For this reason and others, many AI experts (Giannandrea included) have suggested alternative terms like "machine intelligence" that don't draw parallels to human intelligence.”
https://arstechnica.com/gadgets/2020/08/apple-explains-how-i...
[+] [-] visarga|5 years ago|reply
Automation == "hollowing out of the economy" does not follow. Companies can use automation to grow their output while keeping their workforce intact by taking on new projects that were not previously possible.
If automation were that dangerous, then being a software developer should be the worst job because it automates itself away. But no, in reality we do more and still need even more devs.
On the other hand, not automating is wasteful and a future time bomb. This anti-automation rhetoric is like children hating school while their parents force them to attend - preferring short term ease that comes with a much bigger long term penalty (temporal discounting at work).
[+] [-] stallmanite|5 years ago|reply
[+] [-] danielam|5 years ago|reply
Think of our ancestors. How many occupations were there? Arguably fewer than we have today. Certainly, your great grandmother had to wash the dishes herself, but we didn't have factory workers at dishwasher manufacturing plants, dishwasher repairmen, dishwasher dealers, and so on. With the introduction of the dishwasher, there is a reduced need for human dishwashers, but the technology also introduced a whole new industry in its place to support it.
I almost want to say that some law of conservation or even entropy is observed. We are exchanging one kind of burden for another or potentially many.
[+] [-] m463|5 years ago|reply
In fact, automation has been the hallmark of the western world, and in the US in particular.
Labor costs have always been high in the US and automation has been used since the beginning - for hundreds of years.
[+] [-] nicoffeine|5 years ago|reply
This idea does not scale. It assumes that there is no upper limit for demand/production, and assumes that a business can suddenly gain the expertise to enter new markets to sell new products. It also assumes that investors would prefer risky reinvestment strategies to cutting costs by reducing labor and increasing short term profits.
> If automation were that dangerous, then being a software developer should be the worst job because it automates itself away
Automation goes after the easiest targets first. IT is still growing because it is an immature profession, and a platform for automation itself. I remember IT in the late 90s - basically anyone who could operate a computer could get a job doing installs. In fact my first job was adding Trumpet Winsock to machines that didn't have any built-in way to access the internet. A few years later you could still make decent money installing and configuring operating systems. Those jobs are long gone and replaced by SCCM and other similar tools. They have been replaced by jobs that require much more technical skill. Entire teams are replaced every day by outsourcing to automation platforms like AWS, because those platforms can provide the same services with fewer employees.
A better example is farming or manufacturing. Take a look at productivity versus employment since 1900.[1]
> This anti-automation rhetoric is like children hating school while their parents force them to attend - preferring short term ease that comes with a much bigger long term penalty (temporal discounting at work).
No, it's a pretty straightforward recognition that retraining even a few percent of any given workforce every year is going to lead to massive inequality and social problems, especially when there is no infrastructure to provide for living costs while a worker gets retrained. A manufacturing worker can be retrained to a better job, but how long do you think it would take to educate that person so they could design and maintain the machines that replace them? How much would that cost? Who is going to pay for it? And if demand for the product is flat, does it make any sense?
It's also a recognition that we are nearing peak production of practically everything. Developed nations are leveling off and declining in population. Around a third of our food is discarded. Ride-sharing is reducing demand for cars. E-commerce is able to offer discounts because it uses less labor. We are now at the point where major technology vendors have resorted to designing addictive experiences to compete for attention. What's left after that is saturated? I'm almost afraid to ask.
[1] https://www.ncci.com/Articles/Pages/II_Insights_QEB_Impact-A...
[+] [-] sjwright|5 years ago|reply
[+] [-] danielEM|5 years ago|reply
Machine Learning algorithms and Artificial Neural Networks are still mostly adequate terms for what's going on in the industry, so the abuse of the term "AI" (artificial intelligence) makes it just marketing BS.
[+] [-] bonoboTP|5 years ago|reply
It's true that the terms ML and ANN predate the current hype but they were introduced/kept in use for the same reason: talking in mathematical terms does not excite research grant decision makers or business customers. Neural is a buzzword, it's easy to interpret for laypeople. If you talk about Latent Dirichlet Allocation, Support Vector Regression, Projection Pursuit, Principal Component Analysis, Reproducing Kernel Hilbert Spaces, Reverse-Mode Automatic Differentiation etc etc, then people yawn.
Why do we call linear programming "programming"? It has nothing to do with machine instructions. The answer: the name was made up for the hype and for securing research grants, when math funding was dry, but CS research was popular.
Why call "dynamic programming" like that? What's dynamic about it? Because the inventor wanted a name nobody can object to and sounds buzzy enough.
People try to name things in sexy ways to gain an edge.
[+] [-] ajuc|5 years ago|reply
This is obviously not as bad (because AI isn't people (yet?)), but it follows a similar pattern of calling something "an ideology" to show it in a bad light and draw absurd conclusions.
If you look that hard everything is an ideology. Stop playing with definitions and just say what you wanted to say in the first place.
[+] [-] bserge|5 years ago|reply
But yeah, it's a standard tactic of dehumanizing people. Except in this case, it's applied to AI. Which we don't really have yet. It's rather strange.
[+] [-] OliverJones|5 years ago|reply
Things are a little different now. The feasibility of neural networks trained up on vast data sets means we have non-human systems with hunches. That is, we have AIs capable of delivering results without explanations.
Take a look at Neal Stephenson's recent "Fall" for a social network that uses AI to generate "news" stories where the training metric is engagement. The consequence is the construction of an alternate social universe, and the kind of dystopia only Stephenson can dream up. https://www.worldcat.org/title/fall-or-dodge-in-hell/oclc/11...
Human hunches are often accompanied by a sense of ethics. The agricultural savant who sexes newly hatched chicks knows the success of the farm depends on a low error rate. The judge sentencing a culprit knows the consequence of making mistakes.
I wonder how hard it would be to add a sense of ethics to neural network results? Maybe it's just a matter of managing the error rates. But this article suggests otherwise.
[+] [-] csours|5 years ago|reply
How AI is talked about depends on the audience. Nowadays, no one really thinks of rule-based engines or scripts as AI, neither as ANI nor AGI, but back in the day people thought maybe you could just have enough rules to replicate intelligence. So old ANI is just "an algorithm".
To researchers, as soon as an approach is well defined it becomes ANI; at least so far. In future there may be an approach to AI that becomes AGI, but ANI is not considered AI by most researchers.
For industry, AI generally denotes a handful of ANI approaches - Machine Learning and Neural Nets. If you say "AI" in a corporate environment, this is what people will think of.
[+] [-] Pompidou|5 years ago|reply
[+] [-] awkward|5 years ago|reply
The change is mostly in terms of money. The promise of AI has become a draw for investment rather than a repellant. Additionally, there is an interest in turning the mountains of surveillance data that exist now from a situationally useful tool to a tradable commodity.
[+] [-] pritovido|5 years ago|reply
There are private intelligence agencies like google and facebook that make money from extracting all the information they can of individuals in the society.
There is a huge interest on their part on ending privacy restrictions, of course. There is lots of money too, and people in charge in governments that are interested in using private security agencies like public security agencies because the private sector is usually way more efficient than the public one. That also applies to spies.
This is prone for abuses, and society should react to protect themselves. You can easily create a "dictatorship in practice" from a democracy with it.
Now, AI is just the technology that makes global surveillance possible: machines are cheap and easy to control. The alternative, humans, is expensive and prone to whistleblowers denouncing your abuses.
But AI is not an ideology, it is just a technique set, and you can use it for very good things, like counting blood cells, driving cars or watch your house while you are away.
It is a good thing that it exists, only it must be controlled by the users and not be the controller itself.
[+] [-] rexreed|5 years ago|reply
[+] [-] specialist|5 years ago|reply
One hint is in the suffix -ology meaning "study of". With study comes frames, schools of thought, agendas, dogma.
My intro to technology as ideology was Neil Postman's Technopoly [1992]. https://en.wikipedia.org/wiki/Technopoly Challenging ideas for a young technophile.
FWIW, when referring to specifics, I'd be more comfortable with "techniques" instead of "technology". So a book title could be "Weaponizing AI Techniques to Maximize Dopamine Addiction via Social Mediums" instead of "Recognize Kitten Pictures With Awesome AI Technology for the LOLs".
Of course Jaron Lanier is coauthor. Frequent go-to mildly provocative but still hip contrarian for Wired, the historically full-throated Big Tech apologist, booster, and primary paid-placement press-release conduit.
[+] [-] egfx|5 years ago|reply
Uhh excuse me but deepfakes were not introduced at this company. It was a mobile QR and image recognition technology and that is all. I should know because I was the lead tester of this technology at Neven Vision. They must be talking about DeepDream, a Google project by the company's founder many years later.
[+] [-] Traubenfuchs|5 years ago|reply
[+] [-] kazinator|5 years ago|reply
It's a term.
In a nutshell, the term "AI" refers to any situation when software is contrived to massage data in some complicated way that produces useful answers, without any clear, rational explanation why.
(The associated "ideology" consists mainly of the position that this is just terrific, and we should be doing more of it.)
[+] [-] yarrel|5 years ago|reply
[+] [-] nathias|5 years ago|reply
[+] [-] vinceguidry|5 years ago|reply
Privacy is a luxury good like everything else humans want. Available for the masses only if you're willing to spend the time and energy learning how to DIY, or can just be bought if you're rich.
It was a weird experience today watching Apple's announcement video. Everything there is designed to keep you locked into their platform, and it was amusing when I saw how machine learning on your phone was given such significant space. They didn't even try to claim that this would help out ordinary people, jumping immediately to reducing costs of very very deep-pocketed customers, medical device manufacturers. Oh but an AR bird will know where your hand is maybe a bit better too. :rolleyes:
I mean, that's Apple's market, along with "pro" users, with the HomePod tech being a sad afterthought, a bone tossed to the less-than-extremely-well-heeled users as an alternative to mass-market Amazon devices.
America has a cult of the big and flashy, and the engine runs on our data so companies can predict and influence the behavior of the masses. Predictable revenue flows are the name of the game. China's government just has an added incentive to bake it all right into the social fabric. That's all.