dchichkov|11 months ago
Perhaps also symmetric "freedom to learn" from OpenAI models, with some provisions / naming convention? U.S. labs are limited in this way, while labs in China are not.
woah|11 months ago
Acting like copyright is some natural law of the universe that LLMs are upending simply because they can learn from written texts is silly.
If you want to argue that it should be radically expanded to the point that not only a work, but even the ideas and knowledge contained in that work should be censored and restricted, fine. But at least have the honesty to admit that this is a radical new expansion for a body of law that has already been radically expanded relatively recently.
jsemrau|11 months ago
https://abounaja.com/blog/intellectual-property-disputes
bongodongobob|11 months ago
[deleted]
EGreg|11 months ago
Copyright cartels (RIAA, MPAA) that monetized young artists while paying them next to nothing [1], vs. the AI megalomaniacs who took all that work for free and paid Kenyan workers $2 an hour [2] so they could raise "$7 trillion" for their AI infrastructure
[1] https://www.reddit.com/r/LetsTalkMusic/comments/1fzyr0u/arti...
[2] https://time.com/6247678/openai-chatgpt-kenya-workers/
Bjorkbat|11 months ago
But that does make me think: in a sane society with a functional legislature, I wouldn't have to have a dog in this fight. I'd have enough faith in lawmakers and the political process to pursue copyright reform that reins in abuses from both AI companies and megacorp rightsholders.
Alas, for now I'm hoping the aforementioned megacorps sue OpenAI into a painful lesson.
unknown|11 months ago
[deleted]
999900000999|11 months ago
Have we all been transported to bizarro land?
Different rules for billion-dollar corps, I guess.
somenameforme|11 months ago
Same rules, but people are a lot less inclined to defend themselves because the cost of losing was seen as too high to even risk it.
unknown|11 months ago
[deleted]
sva_|11 months ago
I don't doubt it, but I'd be interested to read a source. I know the models can't talk about things like Tiananmen Square 1989, but what does 'implementing socialist values by law' look like in practice?
diego_sandoval|11 months ago
If a human buys a movie, he can watch it and learn about its contents, and then talk about those contents, and he can create a similar movie with a similar theme.
If OpenAI buys a movie and shows it to their model, it's unclear whether the model can talk about the contents of the movie and create a similar movie with a similar theme.
kranke155|11 months ago
The model gets to use training data from all humans.
But if you use the model's output as training data, OAI will say you're infringing its T&Cs.
unknown|11 months ago
[deleted]
IncreasePosts|11 months ago
Maybe terrorist manuals and some child pornography, but what else?
samstave|11 months ago
[deleted]
cscurmudgeon|11 months ago
[deleted]
thrance|11 months ago
Companies like this were allowed to siphon off the free work of billions of people over centuries, and they still want more.