> In particular, we'd like to acknowledge the remarkable creative output of Japan--we are struck by how deep the connection between users and Japanese content is!
Translation from snake speech bs: We've been threatened by Japanese artists via their lawyers that unless we remove the "Ghibli" feature that earned us so much money, and others like it, we're going to get absolutely destroyed in court.
My hunch is that OpenAI used Ghibli as the example in their earlier DALL-E blog posts strategically, because anime was earlier said by Japan's PM not to be protected by copyright in training. OpenAI is always sneakier than most people give them credit for.
I'm pretty sure this is in response to the flood of Sora anime parodies on TikTok in the past 48 hours. Seems like OpenAI is acknowledging some strongly worded letters from anime rights holders rather than individual artists, or the response wouldn't be this swift.
I don't understand some parts of this; the writing doesn't seem to flow logically from one thought to another.
> Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
> We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.
> The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable.
The first part of this paragraph implies that the video generation service is more expensive than they expected, because users are generating more videos than they expected and sharing them less. The next sentence then references sharing revenue with "rightsholders"? What revenue? The first part makes it sound like there's very little left over after paying for inference.
Secondly, to make a prediction about the future business model - it sounds like large companies (disney, nintendo, etc) will be able to enter revenue sharing agreements with OpenAI where users pay extra to use specific brand characters in their generated videos, and some of that licensing cost will be returned to the "rightsholders". But I bet everyone else - you, me, small youtube celebrities - will be left out in the cold with no controls over their likeness. After all, it's not like they could possibly identify every single living person and tie them to their likeness.
2. They might get into trouble charging users to generate some other entity's IP, so they may revenue-share with the IP owner.
They're probably still losing money even if they charge for video generation, but recouping some of that cost, even if they revshare, is better than nothing.
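The arithmetic behind "recouping some of that cost, even if they revshare, is better than nothing" can be made concrete with a toy sketch. All numbers here are made up for illustration; none come from OpenAI.

```python
# Hypothetical unit economics for a single video generation.
# Every number below is an assumption, not a real figure.
INFERENCE_COST = 0.50   # what OpenAI pays per generation
USER_PRICE = 0.40       # what the user is charged
REVSHARE_RATE = 0.20    # fraction of revenue passed to the rightsholder

def net_per_generation(price: float, cost: float, revshare: float) -> float:
    """Revenue kept after paying the rightsholder, minus inference cost."""
    return price * (1 - revshare) - cost

with_charge = net_per_generation(USER_PRICE, INFERENCE_COST, REVSHARE_RATE)
free_tier = net_per_generation(0.0, INFERENCE_COST, 0.0)

# Charging recoups part of the cost even after the revenue share:
print(f"net with charge+revshare: {with_charge:.2f}, free tier: {free_tier:.2f}")
```

Even with a revenue share taken off the top, the loss per generation shrinks versus giving generations away, which is the commenter's point.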
"Sora Update #4: Through a partnership with Google, Meta and Snap Inc., you will be able to generate tasteful photos of the cute girl you saw on the bus. She will receive a compensation of $0.007 once she signs our universal content creators' agreement."
“Dear rights holders, we abused your content to train our closed model, but rest assured we’ll figure out a way to get you pennies back if you don’t get too mad at us”
It is already illegal to use somebody's likeness for commercial purposes, or for purposes that harm their reputation, could be confusing, etc. Basically the only times you could use these images are for some parodies, for public figures, and under fair use.
Now OpenAI will be lecturing its own users while expecting them to make it rich. I suspect the users will find it insulting.
Generation for personal use is not illegal, as far as I know.
Don't worry, you can write "dumb ass" here without needing to use algospeak. This isn't Instagram or TikTok and you won't be unpersoned by a "trust and safety" team for doing so.
P.S. No need for a space after your meme arrows :-)
copyright is such a poorly designed tax on our society and culture. innovations like Sora should be possible, but face huge headwinds because... Disney wants even more money?
the blind greed of copyright companies disgusts me
> People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
What did OpenAI expect, really? They imposed no meaningful generation limits, and "very small audiences" is literally the point of an invite-only program.
Update after more testing: looks like every popular video game prompt (even those not owned by Nintendo) triggers a Content Warning, and prompting "Italian video game plumber" didn't work either. Even indie games like Slay the Spire and Undertale got rejected. The only one that didn't trigger a "similarity to third party content" Content Violation was Cyberpunk 2077.
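The behavior described above (exact titles blocked, and descriptive paraphrases like "Italian video game plumber" blocked too) suggests something beyond a plain keyword list. As a purely illustrative guess, here is what a naive prompt-level filter might look like; OpenAI's actual filter is not public, and every term and rule below is an assumption:

```python
# A naive guess at a prompt-level "third-party content" filter:
# a substring blocklist plus simple paraphrase rules. Illustrative only.
BLOCKED_TERMS = {"mario", "pikachu", "spongebob", "slay the spire", "undertale"}

# Descriptive paraphrases that map to a blocked character: if all the
# words in the key appear in a prompt, treat it like the blocked term.
PARAPHRASE_RULES = {("italian", "plumber"): "mario"}

def flags_prompt(prompt: str) -> bool:
    p = prompt.lower()
    if any(term in p for term in BLOCKED_TERMS):
        return True
    # catch simple descriptive paraphrases of blocked characters
    return any(all(word in p for word in words) for words in PARAPHRASE_RULES)

print(flags_prompt("Pikachu drinking a soda"))                    # blocked
print(flags_prompt("an Italian video game plumber jumps on turtles"))  # blocked
print(flags_prompt("a cyberpunk street scene at night"))          # allowed
```

Note that a real system almost certainly uses a learned classifier rather than rules like these, since paraphrase rules can't scale to every character, and the Cyberpunk 2077 exception shows how uneven any such list ends up being.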
Even content like Spongebob and Rick and Morty is now being rejected after having flooded the feeds.
And I don't think you can revenue share these generations with rights owners just like that. What rights owner will let their "product" be depicted in any imaginable situation, by any prompt, by anyone on the planet? Words are powerful, an image is worth a thousand words, and a video a million more... I've seen a quick Sora video, from OpenAI themselves I believe, of a real-life Mario Bros princess, a rather voluptuous one, playing herself on a console, and the image stuck. And it's not just misuse, distortion or appropriation but also association: imagine a series of very viral videos of Pikachu drinking Coke, or a fan series of Goku with friends at KFC... it could condition, or steal, future marketing deals from the rights holders.
This is a non-starter, unless you own a "license to AI" from the rights owner directly, such as an ad agency that uses Sora to generate an ad it was hired to do.
Indeed. If you read between the lines that’s clearly it.
And on that note can I add how much I truly despise sentences like this:
> We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all).
To me this sentence sums up a certain kind of passive-aggressive California, Silicon Valley, sociopathic way of communicating with people that just makes my skin crawl. It's a conceptual cousin of practices like banning someone from a service without even telling them, or using words like "sunset" instead of "cancel", and so on.
What that sentence actually fucking means is that a lot of powerful people with valuable creative works contacted them with lawyers telling them to knock this the fuck off. Which they thought was appropriate to put in parentheses at the end as if it wasn’t the main point.
It's telling how society values the copyright of different media that, four years into people yelling about these being copyright-violation machines, the first emergency copyright update has come with video.
> "We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)"
Marvelous ability to convolute the simple message "rightsholders told us to fuck off"
Obviously, OpenAI could have had copyright restrictions in place from the get-go with this, but instead made an intentional decision to allow people to generate everything ranging from Spongebob videos to Michael Jackson videos to South Park videos.
Today, Sora users on reddit are pretty much beside themselves because of newly enabled content restrictions. They are (apparently) no longer able to generate these types of videos and see no use for the service without that ability!
To me it raises two questions:
1) Was the initial "free for all" a marketing ploy?
2) Is it the case that people find these video generators a lot less interesting when they have to come up with original ideas for the videos and cannot use copyrighted characters, etc.?
Considering that these models are trained on existing data to remix it, shackling their ability to remix existing IPs makes them practically useless, because there's little originality, if any, to squeeze out of them to begin with.
You make a good point. They may as well admit at this point that curing cancer, new physics, and AGI aren't going to happen very soon.
What surprises me a bit is that they'd take this TikTok route, rather than selling Sora as a very expensive storyboarding tool to film/tv studios, producers, etc. Why package it as an app for your niece to make viral videos that's bound to lose money with every click? Just sell it for $50k/hr of video to someone with deep pockets. Is it just a publicity stunt?
> AI slop TikTok to waste millions of human-hours.
Don't forget the power it consumes from an already overloaded grid [while actively turning off new renewable power sources], the fresh water data centers consume for cooling, and the noise pollution forced on low-income residents.
Well, yeah, but that stuff was all bullshit, whereas the fake tiktok kind of exists and might keep the all-important money taps on for another six months or so.
Is this a roundabout way to say that they've realised that people are using their service to make porn of celebrities and fictional characters in the entertainment industry, and aim to figure out a way to keep making money from it without involving "rightsholders" in scandals?
The detail that rightsholders seem to be demanding a revenue share is interesting. That sounds administratively and technologically very complex, and probably also just plain expensive, to implement.
Workers getting paid a flat rate while owners are raking in the entire income generated by the work is how the rich get richer faster than any working person can.
This "but it's too hard to implement" excuse never made sense to me. So it's doable to build a system like this, to have smart people working on it, to hire and poach other smart people, to have payment systems, tracking systems, personal data collection, request filtering and content awareness, all that jazz, but somehow all of that grinds to a halt the moment a question like this arises? And it's been a problem for years, yet some of the smartest people are just unable to approach it, let alone solve it? Does it not seem idiotic to see them serve "most advanced" products over and over, and then pretend a question like this is "too complex" for them to solve? Shouldn't they be smart enough to rise to that level of "complexity" anyway?
Seems more like selective, intentional ignoring of the problem to me. It's just because if they start to pay up, everyone will want to get paid, and paying other people is something that companies like this systematically try to avoid as much as possible.
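For what it's worth, the bookkeeping half of a per-generation revenue share really is simple; a minimal ledger is a few lines. The hard, unsolved part is deciding *which* rightsholder a given generation implicates and at what rate. A sketch of the easy half, with entirely hypothetical rightsholders and rates:

```python
# Minimal per-rightsholder payout ledger. The names and rates below are
# hypothetical; attribution (mapping a generation to a rightsholder) is
# the genuinely hard problem and is assumed solved here.
from collections import defaultdict

ledger: dict[str, float] = defaultdict(float)

def record_generation(rightsholder: str, revenue: float, share: float) -> None:
    """Credit the rightsholder their share of one generation's revenue."""
    ledger[rightsholder] += revenue * share

record_generation("Nintendo", 0.40, 0.20)
record_generation("Nintendo", 0.40, 0.20)
record_generation("Toei", 0.40, 0.25)

print({k: round(v, 4) for k, v in ledger.items()})
```

Which rather supports the point above: the accounting is not what's stopping anyone.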
The logic is that if they don't do it then Meta or some other company will & they have decided it's better that they do it b/c they are the better, more righteous, & moral people. But the main issue is I don't understand how they went from solving general intelligence to becoming an ad sponsored synthetic media company without anyone noticing.
As someone who is concerned about how artists are supposed to earn a living in an ecosystem where anyone can trivially copy any style, it does sound better than the status quo?
The fact that LLMs are trained on human data, yet the same humans receive no benefits from it (they cannot even use the weights for free, even if they unwillingly contributed to its existence), kind of sucks.
What alternative is there? Let companies freely slurp up people's work and give absolutely nothing back?
Someone I know uses ChatGPT a lot. Not because they find it incredibly valuable, but because they want to stick it to the VCs funding OAI and increase their costs with no revenue.
So this is why you have to be careful about usage numbers. The only true meaningful number is about those who are contributing towards revenue. Without that OAI is just a giant money sink.
So that sounds like they "released" this fully aware it would generate loads of hype, but never ever be legally feasible to release at scale, so we can expect some heavily cut down version to eventually become publicly released?
Feels very much like a knee-jerk response to Facebook releasing their "Vibes" app the week before. It's basically the same thing, OpenAI are probably willing to light a pile of money on fire to take the wind out of their sails.
I also don't think the "Sam Altman" videos were authentic/organic at all, smells much more like a coordinated astroturfing campaign.
It is sad (and predictable, PR- and legal-wise) that there was no mention of Studio Ghibli.
I would actually be moved if there were something genuine along the lines of "We are sorry - we wanted to make a PR stunt, but we went too hard," backed with real $ offered for it. (Not that I believe it's going to happen, as GenAI companies do not like this kind of precedent.)
>Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
Once again, Scam Altman looking for excuses to raise more money. What a joke…
I don't have access, but it seems you can insert a friend into a video? Are we not rightsholders to our own likeness? It seems like a person should be able to block a video someone shares without their consent, or earn revenue if their likeness is used.
Sickening
Wasn't he literally scanning eyeballs a couple years ago?
(i) they will need to start charging money per generation (ii) they will share some of this money with rightsholders
Neither has most of the stuff Sam has said since basically the moment he started talking.
It is possible, perhaps, that he is actually a very stupid person!
> enable generating ghibli content since users are ADDICTED to that style
> willingly ignore the fact that the people who own this content don't want this
> wait a few days
> "ooooh we're so sorry for letting these users generate copyrighted content"
> disables it via some dumb ahh prompt detection algorithm
> dumb down the model and features even more
> add expensive pricing
> wait a few months
> launch new model without all of these restrictions again so that the difference to the new model feels insane
Woke: AI slop TikTok to waste millions of human-hours.
I can't tell if this is face-saving or delusion.
Why should AI-generated videos not have revenue sharing?
In the end what matters is whether people enjoy the video; it does not matter if it's AI-created or human-created.
> person should be able to block a video someone shares without their consent
That is already implemented.