top | item 42390210

OnlyFans models are using AI impersonators to keep up with their DMs

368 points| impish9208 | 1 year ago |wired.com | reply

615 comments

[+] evan_|1 year ago|reply
YouTube provides its creators with LLM-generated replies right in the interface, apparently trained on some of the creator's actual replies:

https://www.youtube.com/watch?v=26QHXElgrl8

If you keep watching there's another feature that actually generates video ideas, scripts, and even thumbnails for video creators.

Seems really grim. What is the actual good-faith rationale for using this feature? It seems like the only use case is to trick people into thinking they're having a real interaction.

[+] TeMPOraL|1 year ago|reply
> what is the actual good-faith rationale for using this feature?

There is none, because the whole thing was never good faith. It's always been about tricking people into having a real interaction, to make it easier to exploit them for ad clicks or to sell them junk. That's the whole YouTube content creator business, and it's what it always has been.

[+] Karellen|1 year ago|reply
Worth noting that - mostly - punters weren't having real interactions with the models anyway. As the article points out, they'd previously reported (and I think HN had linked to the story) that a large number of models had been outsourcing their DM interactions to a rotating cadre of gig-workers already. And as the gig workers wouldn't be able to keep track of the full chat history between each punter and "the model", the conversations could sometimes feel off, or have long-term inconsistencies.

I guess LLM hallucinations will just give a slightly different flavour of unreal interactions.

[+] janice1999|1 year ago|reply
> It seems like the only use case is to trick people into thinking they're having a real interaction.

It's easy to rationalise a time-saving measure, I guess. I feel inauthentic when I use auto-generated response suggestions in Outlook. But, like OnlyFans, it's 'just business'. Perhaps I'm overthinking it. How genuine and heartfelt can OnlyFans responses be? Probably about as much as my response to a budget approval.

[+] mrbungie|1 year ago|reply
Good-faith rationale: None. Rationale: More growth for less effort, even if there is no soul.

Fitting for our times.

[+] AlienRobot|1 year ago|reply
The other day I was helping an elderly family member purchase something from the Internet. They were talking to a chat bot. One of the automated replies was a voice recording. There wasn't much "AI" in it, just a constant stream of spam on WhatsApp telling them to purchase all sorts of products. They were added to multiple "group chats" where only admins could send messages (i.e. a spam fest). But what worried me was that, because the automated messages are never marked as such, I wasn't sure whether they were aware they were talking to a bot: they seemed to be trying to have a conversation, saying hello and explaining things. Or maybe that's just how they think bots work, and, loosely, the bot responds appropriately, by ignoring almost everything and just spamming more.

I found it repulsive on a visceral level.

Add AI to this and I'll need to start praying to God to give me the restraint of not breaking other people's smartphones.

[+] Applejinx|1 year ago|reply
Yes, I'm experiencing this in the form of 'suggestion' buttons.

It's so clearly trained on my own replies that it parrots stuff I've said, but it tends to get the sense of the words literally backwards and wrong. If I used that I would be telling my fans the opposite of what I actually meant, or various other catastrophically not-even-wrong assertions. It really, really is not figuring out what it's being fed. Sometimes I let people in on what the AI is suggesting I say.

It's not actually wresting the controls from my hands and talking to my fans AS me… yet.

[+] everdrive|1 year ago|reply
That's depressing, and I didn't realize. This is truly the death of the comment section.
[+] RobKohr|1 year ago|reply
I would so much rather the creator not participate in the comments than have a fake AI that doesn't know the creator's mind in the comments.

I don't need someone with a million viewers to reply just to me. With almost any video's comment stream, you can scroll through and get a general sense of people's reactions/questions/etc to the video, and good creators respond to the masses in one or two comments.

The second I see someone with a huge following obviously using AI to respond to each comment, I lose all respect for them, and people will call them out on it. I see this a lot on Reddit now, where people respond to AI-generated posts to point this out, and it's causing some major issues in communities.

[+] SirMaster|1 year ago|reply
I'm OK with it as long as it's labeled as such...

I think users need to demand that anything AI-generated is labeled as such.

[+] vachina|1 year ago|reply
YouTube's best interest is ads, and having more content to retain you, the consumer, on the platform as long as possible is their main goal.

Every single online content platform operates in the same way. Their KPI is number of hours spent on their platform, because the longer you spend there the more ads they can shove down your throat.

[+] UniverseHacker|1 year ago|reply
Google has also outsourced the rationale for their actions to LLMs.
[+] derefr|1 year ago|reply
> what is the actual good-faith rationale for using this feature

So, responding to viewers increases engagement, and thereby a channel's virality.

And one human can only do so much of that. So eventually you hit a marketing scalability bottleneck — you get more comments than you can read, and the people who don't feel engaged with are more likely to churn from your viewership, so your viewership growth starts to decelerate.

Large media companies + MCNs previously solved this bottleneck by hiring paid human community managers to scale responding-to-comments.

But individual creators bootstrapping their growth had no good solution to this (besides joining an MCN), because the too-many-comments threshold comes long before they make enough revenue to hire their own community managers.

This creates a pay-to-win model where companies who already have capital from other ventures can afford to circumvent this bottleneck and so get big on YouTube, in a way that individual creators cannot.

And YouTube doesn't like that; those big companies aren't beholden to YouTube in the way that creators that think of themselves fundamentally as "YouTube content creators" are. (Or, to say that in a nicer way: YouTube wants to democratize content creation, ensuring that there's a way for small bootstrapped content-creators to "make it.")

AI comment responding is a substitute good for the paid human community managers who already perform this function for large media companies / MCNs / etc. It serves to allow these independent bootstrapped content-creators to overcome the responses-to-comments marketing bottleneck for much lower cost.

This doesn't do anything good for the people who post comments, of course; but it does work to ensure a healthy ecosystem of independent bootstrapped content creators, rather than an oligopoly of media companies — which is something that viewers want from YouTube.

(An analogy might be to level-1 CSRs in a call center: as a complainant, they just get in your way; but they solve a customer-service scalability bottleneck for the company, which thereby allows the company to grow past the point where it would otherwise stop being able to handle support at all due to the increasing flood of "nonsense" complaints.)

> It seems like the only use case is to trick people into thinking they're having a real interaction.

Yes, it is, but that's going to happen whether or not there's a built-in feature for it. That ship sailed centuries ago. Ever heard of writing to a famous author/actor/etc. and getting a hand-written response, seemingly from the famous person themselves, but actually written by their agent and just signed by the famous person?

[+] llm_trw|1 year ago|reply
How far we have fallen that a few bytes sent to a server are a 'real interaction'.
[+] itsoktocry|1 year ago|reply
>It seems like the only use case is to trick people into thinking they're having a real interaction.

As opposed to the genuine interaction the porn star you follow offers to you and the 10,000 other subscribers. The whole thing is theatre.

[+] manquer|1 year ago|reply
They are selling an experience that includes the style of content, conversation, and personality or persona of the creator.

Given that premise, it's functionally the same whether the creator responds or an app using an LLM does. As long as the immersion in the role play holds, does it matter whether the YouTube or OnlyFans creator is actually responding?

It was already some underpaid offshore worker responding for the large creators, and this is nothing new; ATN and other phone-sex line providers were doing this in the 1990s, and there were probably similar things in times past.

[+] nonrandomstring|1 year ago|reply
> to trick people

"Artificial Intelligence" -> noun artifice: ruse, clever trick, guile, deception, cunning, a skillful or artful contrivance

[+] bdd8f1df777b|1 year ago|reply
China's Weibo also has a similar chatbot, but the chatbot is a separate account with clear indication that it is a robot.
[+] divbzero|1 year ago|reply
This feels representative of our times: Using chatbots to power a gig economy of para-social relationships for lonely people short on real in-person interactions.
[+] disqard|1 year ago|reply
Human contact is indeed a luxury good now.

They used to say:

"buy cheap, buy twice"

Now it'll be something like:

"interact cheap, interact with AI"

[+] Barrin92|1 year ago|reply
Is there that much difference between this and a phone sex cubicle worker in the 70s or 80s? It's been a commoditized job for ages. Slightly higher tech now, but basically still the same thing.
[+] conductr|1 year ago|reply
I’ve never been, but it’s how I imagine Japan based on the snippets I hear/read about their culture
[+] kmnc|1 year ago|reply
The models themselves are going to be replaced by those same AI impersonators. OnlyFans has to be the most predatory service I have ever used. You are pretty much signing up to be a target of scammers; whether they're some kids in the Philippines sitting with 20 phones or AI bots, it has become incredibly obvious that every creator uses them. They all employ the same tactic of seeing how much you are willing to pay and then offering you increasingly expensive content. A few years ago you could have some real conversations, but now it's just glorified spam. Even if a creator isn't using these tactics, the water has been so muddied that you just assume it's all fake now.
[+] anonzzzies|1 year ago|reply
I find it scary how many people I see online, or even in real life, who call the girls on there their girlfriends and say things like 'sure it costs money, but so does yours when you buy her dinner'. And no amount of 'these girls are doing their job', 'you are a client', 'they forget about you the second you switch off', 'that's if they weren't bots in the first place' seems to help.
[+] JumpCrisscross|1 year ago|reply
> Onlyfans has to be the most predatory service I have ever used

>> employ the same tactic of seeing how much you are willing to pay and then increasingly offering you more and more expensive content

Oh, in that way.

[+] bawolff|1 year ago|reply
The porn industry being predatory?

Absolute shocker.

[+] standardUser|1 year ago|reply
There's still plenty of avenues to have direct access to sex workers with varying levels of intimacy/realness if you cast a wide enough net. Just not necessarily with a top tier OF model! Though it seems like most people are satisfied with superficial interactions, in which case the AI bot approach might be a win-win.
[+] game_the0ry|1 year ago|reply
OnlyFans might want to look into the legal implications.

I could imagine a class-action lawsuit where the company is sued for fraud on the basis that it is selling real interactions with models.

OnlyFans users should know better, but you could make the case that they were defrauded of services (in this case, interaction with the model).

[+] paxys|1 year ago|reply
Well, the people signed up to talk to a model...
[+] tomgs|1 year ago|reply
Small professional rabbithole:

So I do GTM and work a lot with marketing websites for companies with long sales processes.

The company mentioned there, Supercreator, funnily enough has a CRM - which is not just some funky AI chatbot thingie, but a proper enterprise tool that people use when doing sales.

It looks like they're treating creators and the "agencies" (whatever that means) as what we would call "SMBs", and selling this CRM to them to manage their "customers", which I assume are the fans or subscribers or whatever on the OF side of things.

This is insanely interesting to me. Look at the website - you have a "request a demo" section (which is super enterprise B2B), look at the menu, it's like an enterprise SaaS website.

What the actual fuck is going on here lol

[+] hmmm-i-wonder|1 year ago|reply
Man, traditional CRMs are missing a major mid-size market segment, I guess.
[+] yechiams|1 year ago|reply
also checked Supercreator.app out - looks really interesting
[+] thefounder|1 year ago|reply
A better headline would be: “OnlyFans human chatters are being replaced by AI”. That's because:

- 99% of the time the models on OF don't actually have any conversations (many of them don't speak English at all); they may not even sign up on OF themselves. They are “managed” models (i.e. pimped out through a local pimp, or a fully remote web pimp/agency)

- the DMs were handled by chatters in the first place, so this has nothing to do with the models.

[+] eloisant|1 year ago|reply
If anything, an LLM might be better at remembering the small details of the model's persona and previous conversations with the customer, and at staying in character.
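A minimal sketch of why that's plausible: an LLM-backed chat layer can simply rebuild its prompt each turn from a persona sheet plus the stored per-fan history, so "remembering" is just string assembly. Everything below (the persona, field names, and `build_prompt` helper) is hypothetical and illustrative; no real OF tooling or LLM API is shown.

```python
# Illustrative sketch only: how an LLM chat layer "remembers" persona
# details by reassembling the full prompt on every turn.
# build_prompt just constructs the text that would be sent to a model.

persona = {
    "name": "Mia",  # hypothetical creator persona
    "facts": ["lives in LA", "has a cat named Noodle"],
}

history = []  # (speaker, text) pairs persisted per fan

def build_prompt(fan_message: str) -> str:
    lines = [f"You are {persona['name']}. Stay in character."]
    lines += [f"Fact: {f}" for f in persona["facts"]]
    lines += [f"{who}: {text}" for who, text in history]
    lines.append(f"fan: {fan_message}")
    return "\n".join(lines)

# Earlier turn is stored, so the next prompt carries it automatically.
history.append(("fan", "How's Noodle doing?"))
prompt = build_prompt("Did you get my last tip?")
```

Since the persona facts and prior turns ride along in every prompt, the model never "forgets" the cat's name the way a rotating gig worker might.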
[+] klabb3|1 year ago|reply
And how does that affect those OF models who actually type out their own interactions? The problem with chat farms and AI alike is the deception, and it ruins things for those who are genuine.

Tricking a human into interacting with a bot is imo in the same category as trolling, spam, or propaganda. It's a way to redirect attention and energy away from people without spending any yourself. Sure, we can laugh at the simps wasting money on "interactions" with streamers, but it's no fun when our Google searches and Amazon reviews fill up with AI slop, is it? It was already bad before AI, and now we're moving toward noise levels that will cripple human-crafted content.

Labeling content is the first and right thing to do, just like the unwritten rule of crediting a creator when you post their work. It doesn't stop bad-faith AI slop or human content farms, but it can at least create a social expectation and prevent normalizing it.

[+] vasco|1 year ago|reply
Yeah this just went from "You're speaking to someone in a call center pretending to be a cutie" to "you're speaking to an AI cutie". With the amount of people addicted to AI girlfriends it's not surprising.
[+] charlie0|1 year ago|reply
The issue this brings up always reminds me of that scene in Westworld where the protagonist arrives there for the first time and meets a host.

Host: "You want to ask, so ask."

Protagonist: "Are you real?"

Host: "Well if you can't tell, does it matter?"

It always gives me chills because, at least for me, the answer is like Schrödinger's cat: it really doesn't matter at all if you never learn the truth, and it definitely seems to matter if you do find out.

[+] devjab|1 year ago|reply
I hope I don’t come off as offensive asking this, but is there really that much of a difference? I’m not a fan of services which prey on people’s loneliness, but isn’t the defining feature of these para-social relationship platforms that they are all make-believe? Maybe I’m wrong, but it’s very hard for me to imagine that you could form any sort of relationship with the thousands of lonely people who pay you money to notice them. Hell, they must have some impressive note-taking strategies to remember it all. An AI might end up being more engaging and personal.
[+] danielvaughn|1 year ago|reply
I genuinely don’t understand how OnlyFans customers wouldn’t understand this. What, you think that someone with hundreds of thousands of subscribers is going to be personally chatting with you? It’s either going to be AI, or some poor guy across the world who’s paid to pretend to be a sexy woman.

It’s hard to fathom how so many people can be so gullible.

[+] kusokurae|1 year ago|reply
Surely the motivation for paying for this pornography is precisely the somehow more "human" "closeness" of a known individual creating content on a direct, "personal" basis.

It's not quite the-Queen-is-my-friend levels of parasocial but it does seem to be about intimacy.

What happens to that business model if customers paying on that basis realise they are seeing & talking to a bot? Seems quite footshooty.

[+] beoberha|1 year ago|reply
Someone in the “side projects making $500+ per month” thread yesterday said they have a business doing this
[+] bhouston|1 year ago|reply
How long until the most popular Only Fans creators are fully AI?

I could see these pseudo-relationships, which are already mediated by so much fakery, just continuing to evolve in that direction.

Maybe everyone gets their own OnlyFan creators which are tailored/adapt to their personal desires?

In the future all of this is AI generated, multimedia, tailored to your wants, available and fresh 24/7.

[+] gdjskshh|1 year ago|reply
I really dislike the business model of Tinder->Snapchat->OnlyFans conversion.

There might be a lot of fish in the sea, but there's a lot of pollution too.

[+] amai|1 year ago|reply
Finally a use case for generative AI which makes money. And we all know, if the porn industry adopts a technology then it is here to stay.
[+] theshrike79|1 year ago|reply
Exactly zero major OF models handle their own messages anyway. They use hired humans or third-party services for that.

I think OF itself lets you send "personal" mass messages with something like "Hey, $name, I've missed you. Here, pay $100 to see a single picture"

Dunno how thick you have to be to believe the message was sent only to you and not to the other 10,000 subscribers...
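That kind of `$name` mail-merge can be sketched in a few lines with Python's `string.Template`. The field names and pricing below are illustrative guesses; the actual OF feature isn't public.

```python
from string import Template

# Hypothetical mass-DM template; "$$" renders a literal dollar sign,
# while $name and ${price} are filled per subscriber.
MASS_DM = Template("Hey, $name, I've missed you. "
                   "Here, pay $$${price} to see a single picture")

subscribers = [{"name": "Alex", "price": 100},
               {"name": "Sam", "price": 100}]

# Every subscriber receives the same "personal" message.
messages = [MASS_DM.substitute(sub) for sub in subscribers]
print(messages[0])
```

One template, thousands of "personal" DMs; the only thing unique to you is the substitution.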

[+] kylehotchkiss|1 year ago|reply
Our country has a worsening loneliness epidemic, and this is amplifying it. A $1,000 tip?? That's enough to go on a lot of dates or take a trip with some friends. Instead it makes OF an even more appealing job option for people who have the potential for much more fulfilling lives and careers.
[+] nullorempty|1 year ago|reply
I always use AI impersonator to DM my favourite OnlyFans models.
[+] space_oddity|1 year ago|reply
The irony is that fans pay for a personal connection, yet increasingly interact with scripts crafted by machines.