top | item 32338469

Part of my code makes Copilot crash

282 points| Tree1993 | 3 years ago |github.com

357 comments


fny|3 years ago

So I've just tested it, and I can confirm: yes, Copilot refuses to give suggestions related to gender. Now I know a lot of people are calling this absurd, but looking more closely, there are two PR-nightmare scenarios.

1. Copilot makes a suggestion that implies gender is binary, a certain community explodes with anger and an entire news hype cycle starts about how Microsoft is enforcing views on gender with code.

2. Copilot makes a suggestion that implies gender is nonbinary, a certain community explodes with anger and an entire news hype cycle starts...

You can't win... so why not plead the Fifth?

To all those claiming this is an example of "wokeism", remember that the proper response from an individual who believes in nonbinary gender would be to offer suggestions of that sort. There is no advocacy here. Mum's the word.

onionisafruit|3 years ago

Those aren’t the only options. You can just let it suggest what it is going to suggest. Copilot is a product for adults who should be able to comprehend what machine learning is. Anybody who throws a fit about it will only be exposing themselves as a fool.

kodah|3 years ago

Agreed. The answer is approved by Dave Cheney, he works at GitHub, and if you've ever attended one of his talks it's plain to see he's a very scrupulous person. I also don't think this is an example of Microsoft taking a side; rather I read it as them refusing to bat, which seems fine.

What I would've preferred one of these threads to be about is how all of this works. Like, how do they post-hoc filter certain things? Is that the only way to deal with things defined as issues in ML?

captainmuon|3 years ago

I don't get the whole discussion. There are just many different models of gender. It's like particles vs. waves. In one model there are only two genders, in another five. There are those who say gender is culture and sex is real, and those who say sex is constructed, too. Some models describe reality better than others; some are useful, some are harmful. But nobody can or should stop you from thinking about reality with the model of your choice.

If I were Microsoft, I would post a shruggie and say Copilot offers arbitrary responses based on the actual code it reads; it is not supposed to be "correct" or good or fair, but just to follow what it sees other people do.

philipswood|3 years ago

Choosing door 3 unfortunately leads to ...

A certain community explodes with anger since their machine learning dev-tooling is closed and has arbitrary restrictions.

If you try to please everybody, someone won't like it.

bryanrasmussen|3 years ago

I'm going to have to say it is ridiculous, because by this reasoning there are all sorts of other problem-causing things that Copilot-generated code is going to have to keep out -

let's not handle ethnicity either; if we're going to be sensitive about gender, that is an area which is also sensitive for many people.

should it take border disputes etc. into consideration? If you're using it in country X, and country X thinks a particular area belongs to it despite most of the world disagreeing, will you not be able to use Copilot to generate code that supports your remote employer's international operations?

it would make better sense if Copilot had warnings it could issue: when you wanted gender, put up some sort of warning about that - or allow you to choose binary-gender / multi-gender solutions.

The idea that it should fail, and that it makes sense for it to do so, is essentially a critique of the whole code-generation idea.

on edit: obviously HN should be able to come up with lots of other things that might cause media-related problems if Copilot handled them - code to detect slurs, etc. etc.

nonethewiser|3 years ago

The nightmare scenario is caving to either mob. There is no good reason to moderate this.

coffeeblack|3 years ago

It’s just following the old advice not to talk about religion.

wseqyrku|3 years ago

This is similar to the stupid branch rename saga. It is certainly pointless, but not doing it could be disastrous.

hjkl0|3 years ago

> Copilot makes a suggestion that implies gender is binary

How would that work though? What can Copilot suggest that can imply that?

  If gender is true
     Do something…
  Else if gender is not true
     Do something else
  Else
     Do nothing

xupybd|3 years ago

There is a safe version of gender. Grammatical gender is, for now, binary and as far as I'm aware not offensive to most.

But I agree you can't avoid offending people. The world is nuts everything is offensive to someone.

q-big|3 years ago

Solution: let the user choose their political stance on such a polarized topic in the Copilot settings, so that the user gets suggestions that fit their stance.

poulpy123|3 years ago

The solution is conceptually simple (no idea of practicality): propose an answer related to the context.

And also: give the list of banned words

asojfdowgh|3 years ago

It's only a PR nightmare because it's a closed service and not an open tool.

TeeMassive|3 years ago

Pick 95% of your users, not a hard choice.

zasdffaa|3 years ago

[deleted]

gloosx|3 years ago

It's total nonsense. How can someone be angry at a soulless machine? Is it really a thing to direct anger at an AI as if it were a real human? That's a serious mental problem then, because the anger is actually directed inward in this case.

moyix|3 years ago

Yep, I noticed this last year when they still stored the list client-side and had great fun reverse engineering it:

https://twitter.com/moyix/status/1433254293352730628
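For the curious, a hypothetical sketch of how a client-side blocklist can ship as hashes rather than plaintext, one common way to obscure such lists (the hash scheme and the entries below are invented for illustration, not Copilot's actual ones; see the tweet for the real details):

```python
# Hypothetical client-side blocklist check. Storing hashes instead of the
# words themselves means the list can't be read directly from the binary,
# but any candidate word can still be tested against it.
import hashlib

BLOCKED_HASHES = {
    hashlib.sha256(w.encode()).hexdigest()
    for w in ("gender", "race")  # illustrative entries only
}

def is_blocked(token: str) -> bool:
    """Return True if the lowercased token hashes into the blocklist."""
    return hashlib.sha256(token.lower().encode()).hexdigest() in BLOCKED_HASHES
```

Reversing such a scheme is then just a dictionary attack: hash a word list and see which entries collide with the stored hashes.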

throwaway290|3 years ago

Interesting, so it might not be the specific token "gender", but rather that blocked words ("man" or "woman") appearing in suggestions will suppress Copilot. And presumably another token like "communist" might do the same...

LAC-Tech|3 years ago

Aren't we missing the forest for the trees here?

We're zeroing in on how silly it is for Copilot to trigger its content filter on the word "gender".

To me the real issue is that copilot has a content filter in the first place. It's unwelcome and unnecessary.

camdenlock|3 years ago

There’s a zealous push by a small but extremely vocal fringe to impose their very particular worldview onto emerging AI/ML models like this.

They refer to it as “eliminating bias”, but it’s really just an attempt to mold these new technologies into conformance with one very specific set of ideological commitments.

Proponents view it as some kind of obvious universal good, and are confused when anyone else is appalled by the blind foolishness of it all.

npteljes|3 years ago

I don't think it's silly. Whatever Copilot says is, by extension, said by Microsoft too. And so it makes sense for Microsoft not to make themselves liable for whatever people make their product spit out. Especially after happenings like this:

"Microsoft's AI Twitter bot goes dark after racist, sexist tweets"

https://www.reuters.com/article/us-microsoft-twitter-bot-idU...

jeroenhd|3 years ago

I find this filter to be a fine concept. It can prevent automated vulgarity generation if used correctly. However, that filter should be manageable by the user, not hashed and encoded in some weird scheme. Just put down a file called "bad words.txt" and let the user pick their preferred amount of AI suppression.
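A minimal sketch of that user-manageable filter, assuming a plaintext `bad_words.txt` as proposed (the file name and the word-matching rule here are the commenter's idea plus my own guesses, not anything Copilot actually does):

```python
# User-editable blocklist: one word per line in a plaintext file the user
# controls, instead of a hashed, hidden list shipped with the client.
from pathlib import Path

def load_blocklist(path: str = "bad_words.txt") -> set:
    """Read the blocklist file; a missing file means no filtering."""
    p = Path(path)
    if not p.exists():
        return set()
    return {line.strip().lower() for line in p.read_text().splitlines() if line.strip()}

def should_suppress(suggestion: str, blocklist: set) -> bool:
    """Suppress a suggestion if any whitespace-separated word is blocked."""
    return any(w in blocklist for w in suggestion.lower().split())
```

The point of the design is that the user picks their own trade-off: an empty file disables the filter entirely.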

wseqyrku|3 years ago

You can know a town by the thickness of the fence around the backyard.

If you have to deal with those kinds of people, you're willing to sound silly just to protect yourself.

eric4smith|3 years ago

Besides the absurdity of the code crashing because of the word "gender", my problem and curiosity with all of this is...

"What was going on in the head of the person writing the parser?"

I mean, were they thinking that if someone is writing code, let's say, for a gender dropdown and it was only ["male", "female"], it would try to suggest to us to add 26 more genders instead (and worse, suggest a list of genders to add)?

Would the intention be to correct us and popup a message saying "We suggest you add more genders so as not to displease the users of your product"??

What was going on in that person's head who is trying to do all of this? What was their thought process? What were they trying to accomplish around gender?

Was it the programmer, or some product manager that insisted on some kind of "copilot adjustment" for this because of a personal political viewpoint or just for GitHub being more woke?

That's the most troubling aspect to this.

I hope to Jesus Christ it was just a mistake.

ronsor|3 years ago

Regardless of what Copilot suggested for "gender", it would've offended someone, and I think that's what Microsoft wants to avoid. Not even woke so much as it is trying to avoid potential controversies.

fugalfervor|3 years ago

> I mean, were they thinking that if someone is writing code, let's say, for a gender dropdown and it was only ["male", "female"], it would try to suggest to us to add 26 more genders instead (and worse, suggest a list of genders to add)?

> Would the intention be to correct us and popup a message saying "We suggest you add more genders so as not to displease the users of your product"??

You can just as easily assume that they don't want a dropdown with 26 additional genders to just pop up automatically. That would upset a lot of people, many of whom are in this thread. I think whoever wrote the code doesn't want to jump into a political shitstorm.

winReInstall|3 years ago

The ____ church interfered in all matters of life, big and small, none too trivial to be guided by an enormous ritual rule book, always threatening disciplinary action by the believing masses and social ostracism.

Hurting the feelings of the true believers was the ultimate sin - a sin often committed, but only punished if the sinner did not recant and change his ways in a brutally public and official way. It was there that the ____ church revealed what it was really all about all along: societal control. Maybe with good intentions to start with, but in the end, just control for its own sake, and to prevent others from achieving the same control.

Not saying that any social movement could turn into a religion. That would need strange clothing, processions, rituals, codified language, and most of all a mythology.

I have no religious preference; I'm on the side of science and would like to have a civil society where no member is violated by another. I would very much prefer it if the combatant religions involved could leave science alone. Reality is often disappointing.

May the religion with the least suffering caused win, and then keep away from the state & power.

c3534l|3 years ago

Perhaps it was not "do I think this is reasonable," but "is acting in good faith enough to keep me out of trouble."

Thorentis|3 years ago

Maybe the one saving grace in all this, is that the AI singularity will never happen thanks to wokeness.

jcuenod|3 years ago

I encountered this some time ago because I was working with grammatical gender. Unlike many of these comments, though, I do not take exception to it. Bias in ML is well established, and it's okay if, when we don't have solutions, we just disable it.

If your autocomplete was capable of spitting out suggestions that made you feel isolated or kept poking you in the eye about aspects of your identity, you might feel a bit better about the creators having thought about that and taken steps to avoid it happening.

Banana699|3 years ago

"Reducing Bias" is a really strange way to put it, considering that bias usually means deliberately ignoring or contradicting aspects of reality/data (the classic example in ML textbooks is fitting a straight line to non-linear data), which is what Copilot is quite literally doing here.

Gender is, in actual material fact, binary, and extremely strongly correlated with sex. Building a crimestop into an ML model is just teaching the machine human biases and delusions.

nomilk|3 years ago

> Copilot crash because the word “gender”

A metaphor for our times.

tom_|3 years ago

I worked on a video game in the late 2000s, and one of the bits of code I did was the code for filling the seats in the stadium with people. One of the artists cobbled together like 5 low poly man models and 5 low poly woman models, and you could just about tell the difference, and I put some code in there to ensure the genders were evenly distributed. (The 2 genders, I mean. Man, and woman.)

Looking back, I don't even know why I made it an enum, rather than a 1-bit bitfield called is_woman - but in the end I was glad I didn't, because the art director moaned a bit about the clothing colour distribution, and somebody asked if we could have some mascots, and there were some complaints about the unreasonable number of interesting hats. And, so, long story short, by the time we were done, we had 18 genders based on clothing colour and type of hat, 2 genders for mascot (naturally: tall, and squat), and a table to control the relative distributions.

Once we got to 5 genders I tried to change the enum name to Type - but we had this data-driven reflection system that integrated with various parts of the art pipeline, and once your enum had a name, that was pretty much that. You were stuck with it.

Is that a metaphor for our times too? I don't know. My own view is that sometimes stuff just happens, and you can't read too much into it.

magicalist|3 years ago

>> Copilot crash because the word “gender”

> A metaphor for our times.

Social media amplifies an innocuous, extremely low stakes occurrence into a heated discussion because it happened to misstate the facts (nothing is crashing here) and focus on a hot button keyword ("gender" is only one of many blocked words)?

joe_the_user|3 years ago

So large language models are great overall but occasionally have undesirable results. Hand-coded scripts are added to remove the undesirable outcomes, but those still produce other problems - crashes, but less often.

More and more things are going to be filtered through large language model apps and the possibilities for cascading failures will be even more interesting than what exists presently.

muglug|3 years ago

The large language models already know too much.

I was able to get GPT-3 to spit out reasonably accurate biographies for a couple of composers I know.

GPT-3 could go even further — one of my composer friends has a reasonably rare first name, and when given the prompt "There once was a man named $first_name", GPT-3 responded with a number of limericks tailored to his particular set of skills.

nonethewiser|3 years ago

That simply restates what people are taking issue with.

jan_Inkepa|3 years ago

I encountered this when writing some scripts for Latin-language text processing (which dealt with grammatical gender). Thankfully the Latin-native term 'genus' passed the Copilot smell-test and I could continue with my work. I found it pretty amusing.

duskwuff|3 years ago

As a result of another word on the Naughty List, you may run into similar issues while writing multithreaded code.

(The word in question is "race" -- as seen in the phrase "race condition".)

jcuenod|3 years ago

Yup, for me it was Greek and Hebrew.

TheSpiciestDev|3 years ago

What was that bot that MSFT stood up on Twitter that trolls and memers fed to turn alt-right? I know they eventually took it down and that it stirred up a lot of controversy.

I would not be surprised if someone found some Copilot output stemming from "gender" and reported to MSFT/GitHub for them to simply short circuit or "break" after finding certain keywords.

Thorentis|3 years ago

Yeah they probably found something like: assert gender in ["male", "female"]. If this is enough to trigger a backlash then maybe we deserve whatever fate has in store for us.

staticassertion|3 years ago

Content filters on ML feel so silly. I assume the goal is to avoid bad press? Because the... "attack" would be someone generating offensive material, which they could just write themselves. Not to mention, I have serious doubts that any filter is going to be a serious barrier.

For images/ video I can see merit, ex: using that nudity inference project on images of children, but text seems particularly pointless.

hn_throwaway_99|3 years ago

The point is that sometimes even a perfectly reasonable inference from an ML model would be considered a big mistake due to societal considerations that are unknown to the model.

For example, a couple years ago, there was a big hubbub over a Google Image labeler that labeled a black man and woman as "gorillas". A mistake for sure, but the headlines about the algorithm being "racist" were wrong. The algorithm was certainly incorrect, and it could probably have been argued that one reason it was wrong is that its training set contained fewer black people than white people, but the algorithm was certainly unaware of the historical context around this being a racist description.

Similarly, in the early days of Google driving directions I remember one commenter saying something along the lines of "You can tell that no black engineers work at Google" because it pronounced "Malcolm X Boulevard" as "Malcolm 10 Boulevard". Of course, the vast majority of time you see a lone "X" in a street address it is pronounced "ten".

It's kind of analogous to the "uncanny valley" problem in graphics. When the algorithm gets things mostly right, people think of it as "human-like", and so when it makes a mistake, people attribute human logic to it (it's quite safe to assume that a human labeling a picture of black people as gorillas is racist), as opposed to the plain statistical inferences ML models make.

evrydayhustling|3 years ago

Imagine that you had a co-worker who seemed totally normal 90% of the time... But about once a week, someone would bring up a topic that made them go full nazi or attempt to seduce their coworker. That's where we are with LLM-based generative text. It's not (just) about PR, it's about putting guardrails around the many many many circumstances the tech can do harm or just seem ignorant.

brew-hacker|3 years ago

The only reasonable content filters on these sort of models would be something that could have legal repercussions.

This is absolutely silly. Solid work GitHub team!

Bolkan|3 years ago

[deleted]

Thorentis|3 years ago

What is Github worried about? That Copilot might suggest some code that checks for a "gender" variable being only one of two values? Utterly absurd that we've now reached this point. I already had plenty of reasons to boycott Copilot, now I have another one.

mcphage|3 years ago

> What is Github worried about? That Copilot might suggest some code that checks for a "gender" variable being only one of two values?

Perhaps Github is worried about a backlash if it suggests code that allows for more than 2 values.

stolen_biscuit|3 years ago

Can we get a source for that? Because at the moment, it's just a comment made by a person on the internet with nothing backing it up...

_zllx|3 years ago

I added "gender" (an IANA registered JWT claim) to my JWT payload schema and found Copilot will not provide any suggestions after that. Not on the same line, nor in the rest of the file. After removing the word gender entirely, it works again.

https://www.iana.org/assignments/jwt/jwt.xhtml
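For reference, a minimal sketch of a payload using that registered claim (claim values invented; only the payload segment is shown, not a full signed JWT with header and signature):

```python
# Building the payload segment of a JWT that carries the IANA-registered
# "gender" claim. JWT segments are base64url-encoded JSON with the
# trailing "=" padding stripped.
import base64
import json

payload = {
    "sub": "1234567890",
    "name": "Jane Doe",        # illustrative values
    "gender": "female",        # registered claim in the IANA JWT registry
}

segment = base64.urlsafe_b64encode(
    json.dumps(payload, separators=(",", ":")).encode()
).rstrip(b"=")
```

Per the report above, merely having a field like this in the schema was enough to suppress suggestions for the rest of the file.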

readyplayeremma|3 years ago

So, I tested this locally and for the first time, immediately after using a variable named “gender”, it stopped suggesting.

I wonder if this is to prevent it from accidentally processing PII or PHI data. Maybe someone else who didn’t get their account on some kind of cooldown can try it with “birthdate” or “DOB” or “SSN”. I highly doubt this has anything to do with gender being a controversial or blocked term for political reasons or something.

diego|3 years ago

I just tried Copilot with VS Code and python for the first time. If I define a function with some parameter name, I get suggestions as I type the body. I change the parameter name to gender, no suggestions. I change one letter in the parameter name (gendes, gander), I get suggestions again. There clearly is some code that gets activated when it sees the word "gender".
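One could systematize that manual experiment by generating near-miss variants of the parameter name to probe whether a literal keyword match is responsible (the helper below is mine, not part of any Copilot tooling):

```python
# Generate a word plus every single-character substitution of it, to test
# which near-misses ("gendes", "gander", ...) still get suggestions.
def variants(word: str):
    """Yield the word itself, then each one-letter substitution."""
    yield word
    for i in range(len(word)):
        for c in "abcdefghijklmnopqrstuvwxyz":
            if c != word[i]:
                yield word[:i] + c + word[i + 1:]

probe_names = list(variants("gender"))
```

If only the exact string "gender" suppresses suggestions while all 150 one-letter variants do not, that strongly implies a literal token filter rather than any semantic understanding.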

bobsmooth|3 years ago

The code's right there. Anyone want to try it out?

sergiomattei|3 years ago

It’s interesting how unsubstantiated allegations are getting so much attention, especially on a site with such high quality discussion.

scarface74|3 years ago

I belong to a local Atlanta Slack channel - tech404 - that for the longest had an official bot that would always respond with the waving hand emoji (HN doesn’t support emojis) if you ever said the word “guys”. Even in private channels.

LAC-Tech|3 years ago

The funniest one of these was the python IRC channel, which had (has?) a policy of not allowing the word "lol".

I'm pretty sure a bot would swoop in and say something like "NO LOL", which ironically only encouraged more LOL.

int_19h|3 years ago

Are there some specific Unicode ranges that HN filters out? I recall being able to use other alphabets and various special symbols with no issue.

leetrout|3 years ago

This is in the FAQ:

Does GitHub Copilot produce offensive outputs?

GitHub Copilot includes filters to block offensive language in the prompts and to avoid synthesizing suggestions in sensitive contexts. We continue to work on improving the filter system to more intelligently detect and remove offensive outputs. However, due to the novel space of code safety, GitHub Copilot may sometimes produce undesired output. If you see offensive outputs, please report them directly to copilot-safety@github.com so that we can improve our safeguards. GitHub takes this challenge very seriously and we are committed to addressing it.

djbusby|3 years ago

This thread needs a call to Rule 14: do not feed trolls.

The bug's apparent trigger word is close to a hot-button poli-sci issue. Can we please focus on the technology?

CoastalCoder|3 years ago

> The bug's apparent trigger word is close to a hot-button poli-sci issue. Can we please focus on the technology?

I totally agree that this story has a high risk of flamewars.

But it definitely has a heavy technology component, too.

nonethewiser|3 years ago

Not sure what you mean. The tech is caving to politics. People don't like it.

krapp|3 years ago

[deleted]

btbuildem|3 years ago

That's silly. So can I put "gender" as the first line in my code to stop Copilot from ingesting it altogether?

Are there any other break-words? Master, slave, Carlin's seven words, etc?

rgoulter|3 years ago

> So can I put "gender" as the first line in my code to stop copilot from ingesting it altogether

This means one solution for those worried about Copilot laundering code licenses is to put a statement like "for more details check the man page" at the end of each docstring.
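A sketch of what that tongue-in-cheek suggestion would look like in practice, assuming "man" really is on the blocklist as the joke presumes (the function itself is invented for illustration):

```python
# Ending every docstring with a phrase containing a reportedly blocked
# word ("man", via "man page"), so Copilot skips the surrounding code.
def parse_config(path):
    """Parse the configuration file at `path`.

    For more details check the man page.
    """
    with open(path) as fh:
        return fh.read()
```

Whether a single occurrence poisons only nearby lines or the whole file is exactly what the parent comments are debating.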

neonsunset|3 years ago

Commenters making bad-faith arguments in this discussion are the reason we can’t have nice things.

the_doctah|3 years ago

Kind of like making vague blanket statements with no examples.

betwixthewires|3 years ago

I hope to god that one day we will all see this nonsense for what it is: absurdly hilarious.

Mo3|3 years ago

It's gonna come soon enough. The backlash is already mounting.

I'm just honestly super exhausted by any of the insanity right now, not even only regarding this topic. It's just complete black-and-white thinking these days, no matter about what it is. Extremes only. The stronger your opinion the better, how else would you feel like you exist? Almost no one with a rational, centered overarching perspective. Twenty years ago 50% of the current population would've been considered as possibly having BPD.

silisili|3 years ago

I dunno; living through it, it feels more absurd than hilarious.

fugalfervor|3 years ago

I don't really find it that funny. I don't think the correct response to everyone being upset by this (from many different angles) is to stand back from it and laugh at it.

Some people feel that wokeness is ruining the world. I can't really speak to that position because my political initialization was on the other side of the cultural gulf in America.

The way I have come to understand transgender issues is very much shaped by the political left, but also by a religious upbringing (Catholic, Jesuit). On the left, I am told that this is a human rights issue. I am inclined to believe that transgender people have a hard time in life. I am also inclined to believe that it is not a mental disorder, and I came to these conclusions through conversations with transgender people I have worked with in the past, as well as through what I learned in my psychology classes in high school and college.

I am a white male who was born that way, but I definitely know what it feels like to be ridiculed, to not belong and to feel that there is no right place for me in this world. I have been abused, made to feel small, ostracized and bullied. Those experiences have given me a pretty deep understanding of what suffering is, and how it can be caused. It has also softened me and made me pretty empathetic to others who feel they don't belong in this world.

As an example, I was once at a comedy show where a comedian made a transgender-adjacent joke. The humor of the joke was all in a stupid pun, and I thought it was pretty funny because I like stupid puns. But there was a transgender woman in the audience who got immediately angry. I don't remember exactly what she said, but it was something along the lines of "That's not funny, I'm sick of people like you shouting at me in the street!!". If I had to go though my life having people shouting at me in the streets of NYC because of how I looked vs how other people thought I should look, I may have responded in the same way. I thought the joke was funny, but for her it touched on some deeply painful memories of abuse, dragged them to the surface, and activated a lightning-quick temper. Perhaps if I'd been abused for as long as she, and in the same way, I wouldn't have thought the joke was funny either.

I understand people don't like being corrected, or told that they're wrong or that they're hateful. I don't think that is a productive way to bring about change; and yet, I have found myself picking fights with my parents, and getting generally nasty when they have failed to understand some value I have learned that I did not learn from them. That is obviously a bad thing, because the message they come away with is "what a jerk!" or "those damn lefties!". What I'd rather have people come away with after they hear me speak is something quite different. It was only after raging at my parents enough times that I decided I just wouldn't talk to my parents about politics. There is more right about my parents than there is wrong about them; they are getting older and their bodies will decline until they die. Most likely it will happen to them before it happens to me, at a time when I am able in body and mind, so I intend (even though I sometimes fail) to spend the rest of our time together as peacefully as possible.

I offer this earnestly in good faith. Sometimes the message gets muddied in the delivery, or because I get upset when I perceive (or sometimes, misperceive) that someone is being uncaring for those who are already suffering enough. I think I react that way because of my own history of abuse.

I am also open to hearing the other side of this story. I have attempted not to misrepresent $OTHER_SIDE's view of things. I am only speaking to why I have such strong feelings about this issue. I am sure others have equally strong feelings on another side, and I am open to hearing what that sounds like, provided the viewpoint is offered with respect.

drewcoo|3 years ago

[deleted]

Gigachad|3 years ago

[deleted]

ttpphd|3 years ago

Crash the cistem!

thakoppno|3 years ago

gsender

might work here

tpoacher|3 years ago

Bug as feature. My code from now on will be protected against copilot by looking like this:

  import random

  def genderPrintResult(GenderBool):
    if GenderBool: print("Yes")
    else: print("No")

  GenderMyVar = random.randint(0, 10)
  GenderThreshold = 5
  genderPrintResult(GenderMyVar > GenderThreshold)

subjectsigma|3 years ago

I wouldn't be entirely surprised if something like this was intentional, or that they intentionally filtered the word "gender" and an unintentional side effect was the program crashing.

You literally can't make any statements about gender, no matter how benign, without pissing at least a few of your users off.

nonethewiser|3 years ago

The problem is giving a shit about such users.

wseqyrku|3 years ago

It's baffling how the majority of commenters think this is about fighting discrimination.

davesque|3 years ago

Has it been somehow confirmed that this was the cause of the issue or was it just that one guy's speculation? I don't see anything that confirmed this as the cause. Am I missing something in the linked content?

tzekid|3 years ago

Copilot's too useful for me to "boycott" right now, so the only alternative is using slang for the blacklisted words...

Anyone have any good recommendations for Copilot alternatives?

duxup|3 years ago

Help me out here, is the answer the official answer?

politician|3 years ago

There’s no reason to be surprised that elements within GitHub have an agenda. They’ve been clear about it since changing support for git’s master branch to main, and then gaslighting the portion of the community that doesn’t use the terminal about it.

Now I’ve got Gen-Z developers that are confused and upset when `git init` does what it’s always done.

GitHub, Microsoft ownership notwithstanding, was always going to inject its employees’ politics into Copilot.

aaomidi|3 years ago

What’s the end goal of the agenda?

uhtred|3 years ago

If you told me 10 years ago that gender would be such a hot topic in 2022 I'd have thought you were crazy.

coolspot|3 years ago

Everything about 2020-2022 is unreal

nonethewiser|3 years ago

Why is it a hot topic? There are a range of opinions. It's a manageable little fire. Thats fine.

Except some people want to punish others for their opinions. That is the gasoline. And Microsoft is selling gas cans.

throwaway290|3 years ago

Now if only someone could figure out a magic word that would stop Copilot from being trained on my code.

gloosx|3 years ago

So does it filter out "sex" too?

Tree1993|3 years ago

Someone changed the title from Copilot crash because the word “gender” to Part of my code makes Copilot crash

Thorentis|3 years ago

[deleted]

koshergweilo|3 years ago

Man so many black, gay, transgender senators and congresspeople. Oh wait...

eyelidlessness|3 years ago

How is this relevant? Or, who can you not criticize because of this?

a_shovel|3 years ago

This is a fun phrase to google. Try it!

patchtopic|3 years ago

or perhaps find out who is being artificially scapegoated..

sergiomattei|3 years ago

I don’t understand, there’s no news here.

It’s a comment from a third party speculating over what causes the crash.

alephxyz|3 years ago

Yeah, I call BS. The "word filter" answer was selected as the valid answer by a third party (not the OP). This is what the OP replied to another comment:

> Heargo | 24 days ago: Thanks, I'll try as soon as I get the problem again (somehow it's not bugged anymore...).

Looks like it was just a temporary issue, with no evidence that it's due to a word filter.

EddySchauHai|3 years ago

It seems pretty reproducible. I can’t use Copilot, but if anyone can reproduce it here, that’d be cool. Anyhow, assuming this is reproducible and they do have filters that stop certain words from yielding predictions, it follows that they’re trying to avoid the racist Twitter AI incident happening to them. I find that pretty funny :)

thakoppno|3 years ago

it’s an intriguing guess that is at least plausible and hits a bunch of zeitgeist levers too.