> Hancock and his collaborators set out to explore this problem space by looking at how successful we are at differentiating between human and AI-generated text on OKCupid, AirBNB, and Guru.com.
The study evaluated short-form generic marketing-style content, most of which is manicured and optimized to within an inch of its life.
Most dating profiles I see are extremely similar in terms of how people describe themselves. Same for Airbnb listings. I'd think AI detection would be much higher for long-form writing on a specific topic.
> The study evaluated short-form generic marketing-style content, most of which is manicured and optimized to within an inch of its life.
This is also the kind of human-written content that is closest to how LLMs sound. The tonal and structural similarity is so glaring that I have often wondered if a large percentage of the GPT training corpus is made up of text from spam blogs.
I think if I were given, say, a couple of pages from an actual physics textbook and then a GPT emulation of the same, I would be able to tell the difference easily. Similarly with poetry: GPT's attempts at poetry are maximally conventional and stuffed with flat, stale imagery. They can easily be separated from the poetry of a truly original human writer.
If AI developers want to impress me, show me an AI whose writing style departs significantly from the superficiality and verbosity of a spam blog. Or, in the case of Bing, an unhinged individual with a nasty mix of antisocial, borderline, and histrionic personality disorders.
Totally agree. Just yesterday, I was finishing up an article [1] that advocates for conversation length as the new definition of a "score" on a Turing test.
You assume everyone is a robot and measure how long it takes to tell otherwise.
[1]: http://coldattic.info/post/129/
According to academic friends of mine, tools like ZeroGPT still have too much noise in the signal to be a viable way to catch cheaters. It seems to do better than the humans in this study did on short-form content, but even if it's "only" 80% accurate, some of the remaining 20% will be false positives, which is problematic.
Detecting whether something is written by an AI is a waste of time. Either someone will sign the statement as their own or they won't (and it should be treated as nonsense).
My guess is that we all become more sensitive to this in a year or two. Look at how awful DALLE looks now, relative to our amazement last year.
People lie. People tell the truth. Machines lie. Machines tell the truth. I bet our ability to detect when a person is lying isn't any better than 50% either.
What matters is accountability, not method of generation.
On the daily, I'm getting emails from collaborators who seem to be using it to turn badly-written notes in their native language into smooth and excited international English. I'm totally happy that they're using this new tool, but I also hope that we don't get stuck on it, and that we continue to value unique, quirky human communication over the smoothed-over outputs of some guardrailed LLM.
Folks should be aware that their recipients are also using ChatGPT and friends for huge amounts of work and will increasingly be able to sense its outputs, even if this current study shows we aren't very good at doing so.
Maybe there will be a backlash and an attempt to certify humanity in written communication by inserting original and weird things into our writing?
The thing is, writing professional email as a non-native speaker sucks.
I'm a non-native English speaker myself. My level is typically considered very good (C2 CEFR level, which is the highest measured level in the European framework). If I need to write an email to a colleague whom I know and trust, that's easy. Writing this message on HN? Also easy; I'm just improvising it as I think, not much slower than I would in my native language.
But writing an email to someone you don't know... that's very different. When you write in a non-native language, it's extremely easy to get the subtleties wrong: to sound too pushy about what you want, to make the matter seem more or less urgent than it really is, to sound too blunt or too polite... this doesn't matter with people you know or with strangers in an informal setting like this, but it does matter when emailing strangers in a professional setting, and it's extremely difficult to get right when you are non-native.
Sometimes I used to spend 15-20 minutes brooding over an email in this type of scenario, making and rethinking edits before finally hitting the submit button... not anymore. ChatGPT: "Write an email reminding this person, who has this role, that the deadline for thing X expires on day Y. The email should be polite, assertive but not too pushy". Check the output, maybe make some trivial edits, because the difficult part (the tone) tends to be fine, at least by my standards. Done.
Non-native speakers aren't going to renounce that luxury. It just makes too big of a difference to not use it in that case.
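That workflow is also trivially scriptable. A minimal sketch using the openai Python package (assuming the pre-1.0 ChatCompletion interface; the model name, role, topic, and deadline are all placeholder details, not anything from the study):

    import openai  # pip install openai (pre-1.0 interface assumed)

    openai.api_key = "sk-..."  # your API key here

    def draft_reminder(role, thing, deadline):
        """Draft a polite, assertive-but-not-pushy deadline reminder."""
        prompt = (
            f"Write an email reminding this person, who has the role of {role}, "
            f"that the deadline for {thing} expires on {deadline}. "
            "The email should be polite, assertive but not too pushy."
        )
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

    print(draft_reminder("project coordinator", "the budget report", "Friday"))

Check the output, edit, send; the model does the hard part (the tone) for you.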
ChatGPT writes like a college freshman trying to meet a page-count requirement, and the style seems to invite my eyes to slide down to the next item. But it is important to note that while you definitely notice the ones you notice, you don't know about the ones you don't notice. When I use ChatGPT I always instruct it to maximize brevity, because I am not interested in reading any academic papers. The output I get is much more bearable than 99% of the HN comments that lead with "I asked ChatGPT to..."
Please rewrite the following text using smooth and excited international English, but also insert some original and weird things into your writing.
Every day, my inbox is brimming with messages from my global allies, who seem to have harnessed the power of this cutting-edge tool to transform their rough, native-language scribblings into electrifying, polished international English. I'm absolutely thrilled they're embracing this innovative technology, but I also secretly wish for us to preserve the charm of our distinctive, eccentric human exchanges, rather than solely relying on the silky-smooth productions of these masterfully-constructed LLMs.
It's crucial for everyone to realize that the recipients of their messages are also employing ChatGPT and its entourage for colossal workloads, and will gradually develop a keen sense for detecting its output, despite this present research revealing our current inability to do so. In the meantime, let's all enjoy a dancing unicorn with a mustache that serenades us with jazz tunes, just to keep things intriguing and refreshingly bizarre.
Not weird enough I guess.
The use of commas and how it concludes statements is what usually gives it away.
The current work use cases for GPT are almost worse than crypto mining in terms of wasted compute resources:
>manager uses GPT to make an overly long email
>readers use GPT to summarize and respond
then on the search front:
>Microsoft and Google add these tools into their office suites
>then they'll have to use more resources with Bing and Google Search to try to analyze web content to see if it was written with AI
Huge amounts of wasted energy on this stuff. I'm going to assume that both Google and Microsoft will add text watermarks at some point, to make their own models' output easy to identify.
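(If you're wondering what a text watermark could even look like: one published proposal, Kirchenbauer et al.'s "A Watermark for Large Language Models", biases generation toward a pseudorandom "green list" of tokens and then tests text for a surplus of them. A toy word-level detector, purely illustrative and not anyone's actual scheme:)

    import hashlib

    def is_green(prev_word, word):
        # A pseudorandom half of all continuations, keyed on the previous word.
        h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        return h[0] % 2 == 0

    def green_fraction(text):
        words = text.lower().split()
        pairs = list(zip(words, words[1:]))
        return sum(is_green(a, b) for a, b in pairs) / max(1, len(pairs))

    # Ordinary text should hover near 0.5; a generator that consistently
    # prefers "green" continuations pushes this fraction well above 0.5,
    # which a z-test on enough words can flag.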
Now you just need to write your own tool that takes the emails these folks send you and gets a GPT to summarise and rephrase them in the voice you would appreciate ;) (I'm not even joking, I think that's our future...)
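A sketch of that future (same assumed pre-1.0 openai package as the email example above; the two-blunt-sentences style is an invented preference, not anyone's real tool):

    import openai

    def rephrase_for_me(email_body):
        """Boil an incoming GPT-inflated email back down to something readable."""
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": "Summarize this email in two blunt sentences, "
                           "no pleasantries:\n\n" + email_body,
            }],
        )
        return resp["choices"][0]["message"]["content"]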
I'm not disagreeing with your sentiment. I love richly written, complex writing that can take a moment to digest, but, let's be honest here, it isn't just AI that has destroyed the written word: the internet, smart phones, and cute emoji have already done an exemplary job of that.
I cannot find any more fantasy literature that won't make me puke a little bit in my mouth every time I try to read it. Granted, it all seems to fall under the grotesque umbrella known as YA, so perhaps it cannot be helped, but where oh where are the authors who wanted to expand the minds of their young readers? I cannot find them anywhere.
When did you last see any sort of interesting grammatical structure in a sentence? They are bygones. And it depresses me.
> Maybe there will be a backlash and an attempt to certify humanity in written communication by inserting original and weird things into our writing?
I've said it here before, but I think we will speak in prompts. We'll go through other iterations before that, but I think it'll stabilize at speaking in prompts.
1. First we start using the output of the LLM to send that to others
2. Then we start summarizing what we receive from others with an LLM
3. Finally we start talking to each other in prompts, and whenever we need to understand someone better, we run their prompt through an LLM to expand it instead of summarizing it.
This path makes the most sense to me because human language evolves toward how we think about things, and if a lot of our creative output and work is generated from thinking in prompts, that's how we'll start speaking too.
By Greg Rutkowski.
I also find it problematic that ChatGPT resembles how I write about anything non-trivial; it's led to me being accused of using ChatGPT to respond to people's messages before.
> but also hope that we don't get stuck on it and continue to value unique, quirky human communication
For informal, friendly communication, certainly. For business communication, we already lost that.
Companies usually don't want any quirkiness in bug reports, minutes of meetings, and memos. There may be templates to follow, and rules often emphasize going straight to the point, and using English if the company deals in an international context. I expect LLMs to be welcome as a normaliser.
So we've passed the denial stage and are approaching anger, then.
The fact is that most writing nowadays is simply atrocious. I welcome my fellow humans' writing assisted by their AI assistants, if for no other reason than to end the assault on my eyeballs as I'm forced to try to parse their incoherent gibberish.
The information ecosystem has been in pretty bad shape for some decades now:
> "The volume of AI-generated content could overtake human-generated content on the order of years, and that could really disrupt our information ecosystem. When that happens, the trust-default is undermined, and it can decrease trust in each other."
I see no problems here. If people don't trust the pronouncements of other humans blindly, but instead are motivated to do the footwork to check statements and assertions independently, then it'll result in a much better system overall. Media outlets have been lying to the public for decades about important matters using humans to generate the dishonest content, so have politicians, and so have a wide variety of institutions.
What's needed to counter the ability of humans or AI to lie without consequences or accountability is more public education in methods of testing assertions for truthfulness - such as logic (is the claim self-consistent?), research (is the information backed up by other reputable sources?) and so on.
I see it differently. You have a news article. There is text: AI generated. There is an image: AI generated. There is a reference to a convincing study: AI generated. You try to use your logic textbook to process this. That too is AI generated.
What do you base your trust on? Do you distrust everything? How would you know what to take seriously, when ALL of it could be AI generated?
https://arstechnica.com/tech-policy/2023/03/ai-platform-alle...
I think you meant since forever. I'm sure propaganda has existed since someone could yell loudly in a town square.
While I mostly agree, I think the bar has been raised on how easy it is to make believable fake proof. We now have AI-generated images that can reasonably pass the smell test.
And it's not binary. It's now going to be a spectrum from human <---> AI generated. But just like all digital communication now involves a computer for typing / speaking, all communication will very rapidly involve AI. To me it feels almost meaningless to try to detect if AI was involved.
This is a very generous statement. Clearly our current system is broken (e.g. misinformation campaigns), and people have not been motivated to fact-check themselves.
That might work in a narrow set of circumstances where data can be published to trusted sources for one to read and say, yes, this information is true. But in much broader situations, AI can spit out disinformation in many locations, and it will be information that is not testable, like celebrity news, making it nearly impossible to verify truthfulness.
The title is like saying "The profit increases by 0%", which is grammatically correct and logically sound, but which means exactly that the profit doesn't increase at all.
When the task is choosing between two options (in this case: AI/human), the worst you can do on average is not 0% correct but 50%, which is a coin flip. If a model (whether an ML model or something inside a human's mind) achieves 40% accuracy on a binary prediction, you can increase the accuracy to 60% by just flipping its answers.
The more interesting numbers are precision and recall, or better yet, a full confusion matrix. It might turn out that the false-AI and false-human rates (in the sense of false positives and false negatives) differ significantly. That would be a more interesting report.
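To make the answer-flipping point concrete, a toy simulation (plain Python, fabricated labels; nothing here comes from the study):

    import random

    random.seed(0)
    LABELS = ("AI", "Human")
    flip = {"AI": "Human", "Human": "AI"}

    truth = [random.choice(LABELS) for _ in range(10_000)]
    # A hypothetical detector that is right only 40% of the time:
    pred = [t if random.random() < 0.4 else flip[t] for t in truth]

    acc = sum(p == t for p, t in zip(pred, truth)) / len(truth)
    inv = sum(flip[p] == t for p, t in zip(pred, truth)) / len(truth)
    print(f"{acc:.3f} vs inverted {inv:.3f}")  # ~0.400 vs ~0.600

    # A confusion matrix separates the two kinds of mistakes (humans
    # called AI vs. AI called human); in real detectors these often
    # differ even when overall accuracy sits at 50%.
    cm = {(t, p): 0 for t in LABELS for p in LABELS}
    for t, p in zip(truth, pred):
        cm[(t, p)] += 1
    print(cm)

So a detector below 50% is still informative; it's exactly 50% that carries zero signal.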
So, if you can get some binary value, true or false, with 50% accuracy, that's like a coin flip: essentially zero accuracy advantage over random chance. That means, quite literally, that this method of "identifying" AI may as well just BE a coin flip, and save us all the trouble.
I bet educated people can identify whether long form content from their own field is bullshit more than 50% of the time. By bullshit, I mean the kind of waffling without a point which LLMs descend into once you pass their token limit or if there's little relevant training data, and which humans descend into when they're writing blog posts for $5.
Your comment only applies to the LLMs of today. Consider how much more bullshit the best natural language bot generated 10 years ago. The bullshit produced is dropping at an incredible rate. In a few short years we could very well have highly accurate expert AIs trained in virtually every field. Humans would be the ones generating bullshit and these bots would be used to spot it.
The flood of AI generated content is already underway and the models keep improving. If our ability to identify AI content is 50% today, I would expect it to be much lower in coming years as people get better at using AI tools and models improve.
This feels vaguely apocalyptic. Like the internet I've known since the late 90s is going away completely and will never come back.
Tools from that era - forums, comment systems, search engines, email, etc. - are ill prepared to deal with the flood of generated content and will have to be replaced with... something.
Won't this "just solves it self/capatalism" ? (After some hard and trouble times)
I.e if 'suddenly' (/s?) the top-20 results of Google-SERPS are all A.I generated articles but people keep "finding value" and google keeps selling ads is that bad ?
If people stop using google because the top-20 results are all useless A.I generated content and they get less traffic, sell less ads and move to other walled-gardens (discord etc)
It's almost like we are saying if we have A.I copywriters they need to be "perfect"
like with "autonomous A.I driving"
I'm betting(guessing) the "bulk of A.I articles" has more value than average human copywriting A.I ?
What matters is whether the text is factual. Humans without AI can lie and mislead as well.
If ChatGPT and other tools help humans write nice, easy to read text from prompts, more power to them.
Except for professors trying to grade assignments, the average person should not care.
I think this mostly affects a certain educated person who gate-keeps around writing skill and is upset that the unwashed masses can now write like them.
For one, it's an absolutely massive force multiplier for scammers, who often do not write well in English and who have so far been constrained by human limits on how many victims they can have "in process" at once.
It matters because LLMs can tell plausible lies at incredible scale: marketing, propaganda, misinformation and disinformation, etc. Understanding whether content is AI generated would be a useful red flag, but we can't. Nor can supposed "AI detectors" do so with any reliability [0]. It's going to be a problem.
[0]: https://arxiv.org/abs/2303.11156
On the downside, everything is going to be generated by AI here in the next few years.
On the upside, no one will pay any attention to email, LinkedIn messages, Twitter, or social media unless it's coming from someone they already know. If you rely on cold-calling people through these mediums, you should be terrified of what AI is going to do to your hit rate.
As this tech permeates every aspect of our lives, I believe we are on the cusp of an explosion of productivity/creation where it will become increasingly hard to distinguish signal from noise.
It'll be interesting to see how this all plays out. I'm very optimistic, not because a positive outcome is guaranteed but because we as a civilisation desperately needed this.
The last time we saw multiple technological innovations converging was almost a century ago! Buckle up!
I think that by the time AI gets embodied and navigates our world, we will have figured out a method to propagate ground truth within our filter bubbles. The rest will be art and op-eds, and we will know them as such, since AI will label them explicitly, unless we choose not to or want to suspend our disbelief.
Ironically, you've hit upon one of the key fears about AI, which have split public opinion somewhat.
One group thinks AI may be 'woke' because its makers blocked it from using slurs. As such, it may even discriminate against those considered 'non-woke'.
The other thinks that AI having some hard-coded language filters doesn't mean that it can't be leveraged to push ideas and data that lead to (man-made) decisions that harm vulnerable groups. It's an extension of the quite stupid idea that one cannot be racist unless they've explicitly used racist speech; behaviour and beliefs are irrelevant as long as they go unsaid.
That's how you know it's fake, nobody loves the politics in SF.
For dating profiles I guess you have to expect the fake ones to try their best at being real, while the real ones have been trying their best at being fake since the beginning.
Maybe they will cross paths and it will lead to a match made in heaven.
50% means we can't "accurately" identify them at all. The article mentions that it is effectively like a random coin flip, but the title is misleading.
Publish or Perish culture + ChatGPT = Rampant academic fraud in the coming years. I guess the real-world productivity of scientists (not just paper-piling productivity) will take a large hit, as they are fed false data and lose a lot of time trying to replicate bogus findings and sifting through all those spam papers to find the good ones.