We need a word or phrase for this phenomenon, where we attempt to substitute human pattern recognition with algorithms that just aren't up to the job. Facebook moderation, Tesla Full Self Driving, the War Games movie, arrests for mistaken facial identification. It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label. Maybe there's a ten syllable German word that expresses it perfectly?
"Malgorithms" is used by Private Eye (the British satirical/political magazine) for adverts auto-generated inappropriately to accompany articles on news and other websites. Example: https://twitter.com/joshspero/status/562625460732174336
The Scunthorpe problem [1] describes false positives from automated filters, which are often the result of naive substring matching. In a way, the current problem is similar, but at the semantic level.
However, it doesn't cover the other AI mistakes you mentioned, like self-driving.
[1]: https://en.wikipedia.org/wiki/Scunthorpe_problem
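The naive substring matching behind the Scunthorpe problem can be sketched in a few lines. This is a toy illustration; the word list and function name are mine, not any real filter's:

```python
# A naive profanity filter: scan for banned words anywhere in the text,
# with no notion of word boundaries. Any longer word that happens to
# contain a banned substring gets blocked too.
BANNED = ["cunt", "tit"]

def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

print(naive_filter("Greetings from Scunthorpe!"))  # blocked: contains "cunt"
print(naive_filter("Read the constitution"))       # blocked: contains "tit"
print(naive_filter("hello world"))                 # allowed
```

The semantic version of the problem is the same failure with words swapped for meanings: the match is technically there, but the context that would disambiguate it is never consulted.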
The concept of "so-so automation" [1] seems relevant: innovation that allows a business or organization to eliminate human employees, but doesn't produce overall productivity gains or cost savings for society that could then be redistributed to the laid-off employees.
I think so-so automation is often used in places where there's a lot of zero-sum conflict between workers and management, or where the work itself causes a lot of negative human externalities. (This can be a good thing: it's probably okay to settle for a "worse result" from an automated system if it eliminates a lot of physical or psychological harm to people... some content moderation issues probably fall under this case, but not this one.)
[1] https://mitsloan.mit.edu/ideas-made-to-matter/lure-so-so-tec...
"Totalitalgorithms" (Totalgorithms?) captures the spirit of these algorithms. They seem like bugs but they're actually undirected, organic features of a total technocratic political system that is rapidly coming to dominate life in our modern societies. The filters will be tuned but not fixed because they aren't broken. They're part of what Tocqueville described as 'soft despotism':
"After having thus successively taken each member of the community in its powerful grasp and fashioned him at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd."
https://en.wikipedia.org/wiki/Soft_despotism
> It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label.
It's not just something that happens to us, it's something we do. We need to do better rather than accepting it as inevitable, and let future generations worry about what the best name for it was.
> Maybe there's a ten syllable German word that expresses it perfectly?
That being said, I propose Urteilsfähigkeitsauslagerungsnekrose: The necrosis that follows the outsourcing of our capability of judgement.
It's easy to blame this on imperfect technology, but I'm not so sure. A couple of months back, when all the tech companies started their holier-than-thou publicity campaigns with token actions, we faced the same issue.
"Blacklist" was banned as a term because it was deemed racist, no matter that people understand black and white outside the race issue.
So if blacklist is deemed racist by people, not technology, thus removing context from the equation, why is the AI wrong to assume that "attack the black soldier in C4" is hate speech?
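A context-free keyword heuristic of the kind being described might look like this. The word list and weights are invented for illustration; no real moderation system is being quoted here:

```python
# A "hate speech" score computed purely from charged keywords, with no
# idea that "black", "white" and "attack" are ordinary chess vocabulary.
CHARGED = {"attack": 1, "attacks": 1, "black": 1, "white": 1, "threat": 1}

def charged_score(text: str) -> int:
    """Sum the weights of charged keywords, ignoring all context."""
    return sum(CHARGED.get(token.strip(".,!?").lower(), 0)
               for token in text.split())

chess_line = "White attacks the black knight, a serious threat on c4."
print(charged_score(chess_line))  # 4: flagged, despite being innocuous chess talk
```

Once context is ruled out as a defense for "blacklist", a scorer like this one is behaving exactly as asked when it flags chess commentary.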
I've been using the phrase "K ohne I" for years already, which basically means "künstlich ohne Intelligenz" (artificial, without the intelligence). We all saw this coming; the topic has been gone over in sci-fi literature. And still, big tech decided it's time to roll it out. "A human wouldn't be perfect either, and we claim this algorithm is better than the average human" is the last thing you hear before discriminating tech is rolled out. And since politics is in the grip of commerce, regulation will not happen early enough. We are fucked. 2040 will be horrible.
I don't have a word for the phenomenon, but the problem reminds me of a quote by Wilfrid Sellars.
"The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term"
Call it philosophy-lacking, maybe, or worldview-lacking, but understanding how things 'hang together' in broad terms is precisely what our 'intelligent' systems cannot do. Agents in the world have an integrated point of view; they're not assemblages of models, and there seems to be very little interest in building anything but the latter.
It's basically perception and reaction without cognition.
In the natural world we would call that instinct. So maybe 'artificial instinct' if we want to keep 'AI' or 'synthetic instinct' because I think it sounds better.
If a human made the same mistake, we would call them incompetent, careless, and negligent.
- incompetent system
- incompetent robot
- incompesys
- incompebot
- inept system
- inept robot
- inepsys
- ineptobot
- inepobot
- bunglebot
- hambot
- sloppybot
- careless system
- careless robot
- carelessys
- carelessbot
- neglisys
- negligent robot
- neglibot
I like 'neglibot'. "YouTube neglibotted my video." "This is my new email address. My old one got neglibotted." "The app works for typing in data, but the camera feature is neglibotty." "They spent a lot of effort and turned their neglibot automod into carefulbot. In another year it may be meticubot."
Industrial revolutions have happened a few times in the past, and every time one occurs, we change our world to adapt to it.
I can't help but wonder what the world will look like if services in the future are provided primarily by AIs. How do we adapt to them? Do we have to invent a "New Speech" just to make AI understand us better so we can live an easier life?
"Question: Have you consumed your food today?", "Answer: I have consumed my food today."
Or a more subtle example:
"Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please take a seat. The food will come right away!" (all without making any Hmm or Err noises)
In a government system, a similar problem is called bureaucracy. It is similar in the sense that the system is very complex, beyond any single person's comprehension; the bureaucratic system is unforgiving in its conclusions; and it is the responsibility of the victim to deal with a false positive using the same (or a similarly complex) system to attempt correction.
However, this differs from bureaucracy in the level of automation and statistical inference. A bureaucratic system doesn't do inference (or at least that is not its main function), and the steps in between require human inputs (albeit from really automated humans most of the time).
I suggest "automatacracy", which strings together automation and bureaucracy.
How about calling it a "Buttle", after the 1985 movie Brazil, where a certain Mr. Buttle gets arrested and killed instead of a Mr. Tuttle due to a fly in a teleprinter.
It's funny that you should mention War Games, because the only way to win this battle is not to play at all. Why are we so hell-bent on restricting speech and burning all these engineering hours trying to moderate something that cannot be moderated? Languages -- and people -- are "transformable" enough to avoid triggering "hate speech" (whatever that actually is, and whoever it is that determines it) algorithms. Let people downvote or shut their computer off if they don't like it, and leave it at that. Are we that scared of words or ideas?
I used to live in a communist country (probably the same one that Mr. Radic lives/lived in), where "hate speech" -- which was called anti-state, or anti-establishment speech back then -- was punishable by a middle-of-the-night visit by dark-clad police. Yet, people still found ways to openly criticize the Party, and there were even popular songs that openly defied them, which only demonstrates that not even human pattern recognition is good enough to detect these things, as you state in your first sentence.
It's a waste of time, but more importantly it is detrimental to society.
We definitely need a term for this so when we are a victim of this, we can easily raise a flag. I have a few ideas:
- Bot blunder
- Artificial stupidity
- Algofail
- Machine madness
- Neural slip
I think this captures the unnatural, unjust, and just basically wrong quality of using AI to act like humans.
"Human-ops" is the justification for removing human pattern recognition because it's better for a small group of employees (e.g. "we can't let our expensive staff view flagged distressing pictures or boring chess videos, so we'll get an algorithm instead"). Tech companies' HR departments say this is pro mental health, as an additional way to justify the change and any resulting unemployment.
"In modern times, legal or administrative bodies with strict, arbitrary rulings, no "due process" rights to those accused, and secretive proceedings are sometimes called "star chambers" as a metaphor." [0]
[0] https://en.wikipedia.org/wiki/Star_Chamber
Example uses:
"I got starbotted."
"Instagram's automod is a starbot."
"YouTube is too starbotty for your lectures. Better post with your school account."
"We're suing them because their starbot took down our site right after our superbowl ad ran."
"Play Store starbotted release 14 so we cut a dupe release 15. How much will it cost to push the ad campaign back 2 weeks?"
"We use Gmail and Google Docs but not Google Cloud because of the starbots."
"I tried to put Google ads on it, but their starbot rejected the site because it doesn't have enough pages. It's a single-page JavaScript utility." (This is my true story about https://www.cloudping.info )
"Our site gets a lot of traffic but we don't use Google ads because of the starbot risk. Nobody needs that trouble."
"The STAHP Bill (Starbot Tempering by Adding Humans to Process) just passed the Senate! Big Tech is finally getting de-Kafka'ed. About f**ing time."
The suggested “malgorithms” is probably the best noun form for these algorithms themselves.
As for terminology that captures society's general over-reliance on automation and algorithms to handle things that really ought to have direct human intervention, I like Rouvroy's "algorithmic governmentality".
"AS", or just "artificial stupidity", is something I've heard a couple of times. It's quite mind-boggling if you think about how many people had to engineer tensors and train networks for months, if not years, to create a system capable of such blatant stupidity.
Well, the speech police started changing "blacklist" and "whitelist" in programming contexts, even when those had no racist history; maybe it's time to change it in chess too. (After all, white always goes first; that is not very PC.)
Rename "black" and "white" to "second player" and "first player".
There was a Star Trek channel on YouTube which got suspended because the host called the fictional race Ferengi "greedy", which they actually are. It got reinstated after a few days. But it's getting ridiculous now.
It's never been confirmed that the language of chess is the reason the channel was flagged. It's all speculation. A fishing channel being taken offline due to hate speech, for example, is a boring story. The same thing happening to a chess channel is much juicier due to the implication that an AI accidentally flagged the words "black" and "white" as racist. There are a lot of reasons to be outraged by that idea, but it's important to remember that it may not have happened.
Agadmator, the person who made the video in question, also made a video soon after explaining the situation, and gave some hypotheses on why the video was taken down. Besides hate speech, he suggested it may have been because they discussed Covid-19, lockdowns, etc., and YouTube was attempting to stop the spread of misinformation.
I'm actually starting to think we only see these stories about absurd censorship to make the more commonplace and pernicious stuff seem legitimate by comparison. "Oh, hahah, our totally legitimate censorship ML that uses language models to isolate people from each other based on predictions of patterns in their thinking made a funny goof! Gee whiz, you got us that time!"
Anyway, Google will be fine. Lots of tech companies have managed to re-brand after getting on board with idealists, just look at Hollerith.
A friend of mine was banned for sending a chat message that said "this is a Mexican standoff". The people in the room were both Mexican, if it matters. We were all confused about why he was permanently banned.
I honestly wouldn't be surprised if someone makes the argument that this automated flagging is an indicator that chess's language is inadvertently racially charged. And think about the concept of "white goes first." All it takes is a few viral tweets, and suddenly the game of chess is in the crosshairs.
This reminds me of a story from a previous era of automated content moderation...
When I was a student at the University of Cincinnati, I was a member of a group called LARC which stood for Laboratory for Recreational Computing. The main purpose of LARC was to get the University of Cincinnati to subsidize our yearly trip to DEFCON, but I digress.
The UC mail servers, or at least the ones where the LARC mailing list was hosted, had some kind of stupid search and replace censorship to replace naughty words with cleaner equivalents. The cleaner equivalents were in ALL CAPS of course.
So a few members of LARC were working on a project to build a classical arcade cocktail table game out of Linux and MAME and some other stuff. I don't remember the details. All I remember is that the mail server transformed this into the "MALE GENITALIAtail table".
This became its official name. I think the MALE GENITALIAtail table was eventually installed in the student union.
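The blind search-and-replace described above is trivial to reproduce. A minimal sketch, with an illustrative word list rather than whatever the UC servers actually used:

```python
# Replace naughty substrings with "cleaner" equivalents in ALL CAPS,
# with no regard for word boundaries, so innocent words that merely
# contain a banned substring get mangled too.
REPLACEMENTS = {"cock": "MALE GENITALIA"}

def sanitize(text: str) -> str:
    for bad, clean in REPLACEMENTS.items():
        text = text.replace(bad, clean)
    return text

print(sanitize("arcade cocktail table"))  # arcade MALE GENITALIAtail table
```

This is the same family of bug as the famous "clbuttic" filters that rewrote "classic" by replacing a rude substring in the middle of it.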
The infection of YouTube with Google's fetish for replacing people with machines may be the worst thing about the entire acquisition. Google's obsession with forever increasing the ratio of users to employees is a curse upon us all.
If I write a YouTube comment and care about it, I always recheck whether the comment is still there after a couple of minutes, and then a couple of days, because comments are now "disappearing" more and more frequently.
The last time my comment got automatically deleted right away was a couple of weeks ago, for "bottle opening" words (in my language) put together. Replacing a few letters in these words with different same-looking characters helped for some time, but eventually even those got deleted a few days later. I should probably give up using this last Google service I still use.
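The look-alike-character trick mentioned above can be sketched like this. The mapping covers just three Cyrillic homoglyphs, chosen for illustration:

```python
# Swap Latin letters for visually identical Cyrillic ones, so an
# exact-match filter no longer sees the banned word, while a human
# reader sees no difference at all.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic а, е, о

def disguise(word: str) -> str:
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in word)

original = "opener"
tricked = disguise(original)
print(tricked)               # renders the same as "opener"
print(tricked == original)   # False: the filter's string comparison fails
```

The arms race the commenter describes, where even the disguised forms eventually get deleted, suggests the filter was later extended with some kind of Unicode normalization or confusables mapping.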
This is a perfect example of "be careful what you wish for". The Wired position seems to be "just put more resources into policing speech, which is a good and necessary activity". My hunch is that cases like these (false positives, at least, as currently judged by the current authorities) will proliferate just as the criteria for judging what constitutes unacceptable speech do. I would challenge the would-be censors to define specifically, in a way not requiring an additional consultation with them for more infusions of judgement, just what types of utterances that they want to suppress, and why. The closer one gets to this, the more the case for censorship will dissolve. Tl;dr: they are complaining about ambiguity in the implementation of the solution, while having failed to define the problem.
YouTube overall generates tremendous value for people who view videos on it.
There are so many YouTube videos being generated for the amount of money being made that it is not economically feasible to hire humans to review all the videos.
Even if there is a human to review videos that have been flagged, there is a time delay to doing so.
YouTube seems to be erring on the side of flagging false positives at least till there is time for human review.
The technology reviewing videos is immature. It may not be an engineering failing. It may be a problem that requires a scientific breakthrough.
So a valid critique is that there is no effective way to reach a human at Google. Critiquing the technology is pointless.
And that makes it even more important to highlight them. We shouldn't consider censorship a normal everyday event just because some parties do it far too often.
Given there are 500 or so hours of video uploaded per minute (or some other huge amount), I'm not sure we can expect YouTube to moderate each potential violation. Each video constitutes a minuscule amount of revenue.
The only solution I see is for YouTube to charge for each upload, say $1 a video (there may need to be different prices in different parts of the world). This wouldn't deter the majority of uploaders and would pay for checking hate speech, copyright violations, etc.
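A rough sanity check of the numbers in this thread. The upload rate comes from the comment above; the average video length and reviewer wage are my assumptions, picked only to get an order of magnitude:

```python
# Back-of-envelope cost of fully human review versus a $1-per-upload fee.
hours_uploaded_per_minute = 500
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24  # 720,000 hours/day

avg_video_minutes = 10  # assumption
videos_per_day = hours_uploaded_per_day * 60 // avg_video_minutes  # 4,320,000

reviewer_wage_per_hour = 15  # assumption, USD
full_review_cost = hours_uploaded_per_day * reviewer_wage_per_hour  # USD/day

fee_revenue = videos_per_day * 1  # $1 per upload
print(full_review_cost, fee_revenue)
```

Under these assumptions, watching every uploaded minute costs roughly $10.8M a day while the fee raises about $4.3M, so a flat $1 charge funds review of flagged videos only, not of everything.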
I prompted GPT-J with: "AI has proven to be terrible at moderating human content, we felt we needed a word for these kinds of buggy systems and have chosen: "
And GPT-J answered:
"LOLPOP"
Which seems to capture the spirit of unreliable AIs pretty well.
A composite noun of verschlimmbessern [1] ("to make something worse while trying to improve it") and Automatisierung (automation).
[1] https://en.wiktionary.org/wiki/verschlimmbessern#German
We are often reduced to mere conformity to what the artificial intelligence can make sense of.
Captcha: "Are you human?"
Human: <Goes on to do a simple perceptual task even a cat could do if they had fingers>
'Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.'
https://en.wikipedia.org/wiki/Goodhart%27s_law
https://daedtech.com/how-developers-stop-learning-rise-of-th...
- Popular experiences tend to be better experiences, so we all congregate to the same services
- Homogenous user behavior leads to monopolistic situations, increasing outrage when anything goes wrong
- Even if the government doesn't try to enforce moderation, the company attempts to self-moderate to maintain its image
- The popularity of the service makes human moderation impossible, creating a need for inevitably-flawed robots
I see no solution. The only way to win is not to play.
The video I’m talking about is here: https://youtu.be/KSjrYWPxsG8
At least the human moderator who handled my challenge of it was able to consider context.
https://news.ycombinator.com/item?id=26218476
I'm not sure about other parts of the world, but in Australia "hump day" is the middle day of the week, a.k.a. Wednesday.