rkou|1 year ago
This is such devious, but increasingly obvious, narrative crafting by a commercial entity that has proven itself adversarial to an open and decentralized internet / ideas and knowledge economy.
The argument goes as follows:
- The future of AI is open source and decentralized
- We want to win the future of AI instead, become a central leader and player in the collective open-source community (a corporate entity with personhood for which Mark is the human mask/spokesperson)
- So let's call our open-weight models open-source and benefit from its image, require all Llama developers to transfer any goodwill to us, and decentralize responsibility and liability for when our 20-million-dollar-plus "AI jet engine" Waifu emulator causes harm.
Read the terms of use / contract for Meta AI products. If you deploy a model and some producer finds it spits out copyrighted content and knocks on Meta's door, Meta will point to you for the rest of the court case. If that's the future for AI, then it doesn't really matter whether China wins.
bee_rider|1 year ago
As much as I hate Facebook, I think that seems pretty… reasonable? These AI tools are just tools. If somebody uses a crayon to violate copyright, the crayon is not to blame, and certainly the crayon company is not, the person using it is.
The fact that Facebook won’t voluntarily take liability for anything their users’ users might do with their software means that software might not be usable in some cases. It is a reason to avoid that software if you have one of those use cases.
But I think if you find some company that says “yes, we’ll be responsible for anything your users do with our product,” I mean… that seems like a hard promise to take seriously, right?
rkou|1 year ago
While Mark claims his Open Source AI is safer because it is fully transparent and many eyes make all bugs shallow, the latest technical report mentions an internal, secret benchmark that had to be developed because available benchmarks did not suffice at that level of capability. On child abuse material generation, the report only mentions that this was investigated, not the results of those tests or the conditions under which the model possibly failed. They shove all this liability onto the developer, while claiming any positive goodwill generated.
It completely removes their motivation to care about AI safety and ethics if fines punish not them, but those who used the library to build.
Reasonable for Meta? Yes. Reasonable for us to nod along when they misuse open source to accomplish this? No.
redleader55|1 year ago
I think the success of "Threads + Fediverse = <3" relies on the Fediverse not throwing in the towel and leaving Threads as the biggest player in the space. That would mean fixing a lot of the problems that people have with ActivityPub today.
I don't want to say big tech is awesome and without fault, but at the end of the day big techs will be big techs. Let's keep the Fediverse relevant and Meta will continue to support it; otherwise it will be swallowed by the bigger fish.
tacocataco|1 year ago
Given the nature of the fediverse, whether it happened or not depends on the instance you use/follow.
amy-petrik-214|1 year ago
So Meta says "well, we will buy tons of compute and try to make it distributed," "we'll make the model open and people will fine-tune with data that they found," and so on. Now Google and OpenAI aren't competing against Meta; they are competing against Meta + all compute owned by amateurs + all data scraped by amateurs, which is non-trivial. So it's not so much aspiring to be #1 as kneecapping competitors who have superior competitiveness, but people love it because the common man wins here for once.
Anyway, eventually they'll all be open models. In the near future, weaker models will run on a PC, bigger models on the cluster, and the weakest models on the phone... then just weak models on the phone and bigger ones on the PC... eventually anything and everything fits on a phone and maybe an iWatch. Even Google and OpenAI will have to run on the PC/phone at that point; it wouldn't make sense not to. Then, since people have local access to these devices, it all gets reverse engineered, boom boom boom, and now they're all open.
echelon|1 year ago
Code is a single input and is cheap to compile, modify, and distribute. It's cheap to run.
Models are many things: data sets, data set processing code, training code, inference code, weights, etc. But it doesn't even matter whether all of these inputs are "open source": models take millions of dollars to train, and the inference costs aren't cheap either.
edit:
Remember when platforms ate the open web? We might be looking at a time where giants eat small software due to the cost and scale barriers.
bschmidt1|1 year ago
Everyone tries this. Apple tried it with lawsuits and patents, Facebook did it under the guise of privacy, and OpenAI will do it under the guise of public safety.
There's almost no case where a private company is going to be able to successfully argue "they shouldn't be allowed, but we should." I wonder why so many companies these days try. Just hire better people and win outright.