top | item 41583949

rkou | 1 year ago

And what about the future of social media?

This is devious, but increasingly obvious, narrative crafting by a commercial entity that has proven itself adversarial to an open and decentralized internet / ideas and knowledge economy.

The argument goes as follows:

- The future of AI is open source and decentralized

- We want to win the future of AI instead, become a central leader and player in the collective open-source community (a corporate entity with personhood for which Mark is the human mask/spokesperson)

- So let's call our open-weight models open-source and benefit from its imago; require all Llama developers to transfer any goodwill to us; and decentralize responsibility and liability for when our $20-million-plus "AI jet engine" waifu emulator causes harm.

Read the terms of use / contract for Meta AI products. If you deploy it, and some producer finds that the model spits out copyrighted content and knocks on Meta's door, Meta will point to you for the rest of the court case. If that's the future of AI, then it doesn't really matter whether China wins.

bee_rider|1 year ago

> Read the terms of use / contract for Meta AI products. If you deploy it, and some producer finds that the model spits out copyrighted content and knocks on Meta's door, Meta will point to you for the rest of the court case. If that's the future of AI, then it doesn't really matter whether China wins.

As much as I hate Facebook, I think that seems pretty… reasonable? These AI tools are just tools. If somebody uses a crayon to violate copyright, the crayon is not to blame, and certainly the crayon company is not, the person using it is.

The fact that Facebook won’t voluntarily take liability for anything their users’ users might do with their software means that software might not be usable in some cases. That is a reason to avoid the software if you have one of those use cases.

But I think if you find some company that says “yes, we’ll be responsible for anything your users do with our product,” I mean… that seems like a hard promise to take seriously, right?

gyomu|1 year ago

This is a bad analogy. The factory producing crayons doesn’t need to ingest hundreds of millions of copyrighted works as a fundamental part of its process to make crayons.

rkou|1 year ago

AI safety becomes expensive, or even impossible, when you release your models for local inference (rather than behind an API). Meta AI shifts the responsibility for highly general, highly capable AI models onto smaller developers, putting the ethics, safety, legal, and guard-rails burden on innovators who want to build with AI (without having the knowledge or resources to handle that themselves), all framed as an "open-source" hacking project.

While Mark claims his Open Source AI is safer because it is fully transparent and many eyes make all bugs shallow, the latest technical report mentions an internal, secret benchmark that had to be developed because the available benchmarks did not suffice at that level of capability. On child-abuse-material generation, it mentions only that this was investigated, not the results of those tests or the conditions under which the model may have failed. They shove all this liability onto the developer, while claiming any positive goodwill generated.

They completely lose their motivation to care about AI safety and ethics if fines punish not them, but those who used the library to build.

Reasonable for Meta? Yes. Reasonable for us to nod along when they misuse open source to accomplish this? No.

Calvin02|1 year ago

Don’t Threads and the Fediverse indicate that they are headed that way for social as well?

redleader55|1 year ago

The last time we had a corporate romance with an open-source protocol/project, "XMPP + Gtalk/Facebook = <3", XMPP was crappy and moving too slowly into the mobile age. Gtalk/Messenger gave up on XMPP, evolved their own protocols, and stopped federating with the "legacy" one.

I think the success of "Threads + Fediverse = <3" relies on the Fediverse not throwing in the towel and leaving Threads as the biggest player in the space. That would mean fixing a lot of the problems that people have with ActivityPub today.

I don't want to say big tech is awesome and without fault, but at the end of the day big tech will be big tech. Let's keep the Fediverse relevant and Meta will continue to support it; otherwise it will be swallowed by the bigger fish.

tacocataco|1 year ago

Last I checked, there was a movement among the biggest instances to defederate from Meta during the "embrace" stage of the "embrace, extend, extinguish" playbook. I didn't check back to see if it got pushed through.

Given the nature of the Fediverse, whether or not it happened depends on the instance you use/follow.

amy-petrik-214|1 year ago

It's got nothing to do with Meta's social media business directly. Massive as the FB dataset is, it gets mogged by Google, who, with their advanced non-PHP-based infra and superior coders, basically have way more, way better, and way more accessible data... plus their own AI chips, a bigger cluster, faster software, more storage, and so on. Big picture, Google is poised to steamroll Facebook AI-wise, and if not them, then OpenAI + Microsoft.

So Meta says "well, we will buy tons of compute and try to make it distributed," "we'll make the model open and people will fine-tune with data that they found," and so on. Now Google and OpenAI aren't competing against Meta; they are competing against Meta + all the compute owned by amateurs + all the data scraped by amateurs, which is non-trivial. So it's not so much aspiring to be #1 as kneecapping the competition who would otherwise out-compete them. But people love it, because the common man wins here for once.

Anyway, eventually they'll all be open models. In the near future, weaker models will run on a PC, bigger models on the cluster, and the weakest models on the phone; then just weak models on the phone and bigger ones on the PC. Eventually anything and everything fits on a phone, and maybe an Apple Watch. Even Google and OpenAI will have to run on the PC/phone at that point; it wouldn't make sense not to. Then, since people have local access to these devices, it all gets reverse-engineered, boom boom boom, and now they're all open.
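The "fits on a phone" claim can be sanity-checked with simple arithmetic: a model's weight footprint is roughly parameter count times bits per weight. A minimal sketch, where the model sizes and quantization levels are illustrative assumptions, not any vendor's actual figures (and real runtimes add KV-cache and activation overhead on top):

```python
def weights_gb(params: float, bits_per_weight: int) -> float:
    """GiB needed just to hold the weights at a given quantization level."""
    return params * bits_per_weight / 8 / 2**30

# Hypothetical model sizes: an 8B "phone-class" and a 70B "PC-class" model.
for params, name in [(8e9, "8B"), (70e9, "70B")]:
    for bits in (16, 4):
        print(f"{name} @ {bits}-bit: ~{weights_gb(params, bits):.1f} GiB")
```

Under these assumptions, an 8B model quantized to 4 bits needs under 4 GiB for weights, plausibly within a high-end phone's RAM, while a 70B model at 16 bits needs well over 100 GiB, which is cluster or multi-GPU territory.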

eli_gottlieb|1 year ago

If it were really open source, you'd be able to just train one yourself.

echelon|1 year ago

This sort of puts the whole notion of "open source" at risk.

Code is a single input and is cheap to compile, modify, and distribute. It's cheap to run.

Models are many things: data sets, data-set processing code, training code, inference code, weights, etc. But it doesn't even matter if all of these inputs are "open source": models take millions of dollars to train, and the inference costs aren't cheap either.
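The "millions of dollars" claim follows from the common rule of thumb that training a dense transformer takes roughly 6 × parameters × tokens FLOPs. A back-of-envelope sketch, where the model size, token count, sustained GPU throughput, and rental price are all illustrative assumptions, not anyone's actual figures:

```python
# Rule of thumb: training FLOPs ≈ 6 * parameters * tokens.
params = 70e9   # assumed 70B-parameter model
tokens = 15e12  # assumed 15T training tokens
flops = 6 * params * tokens

# Assume an H100-class GPU sustaining ~4e14 FLOP/s (i.e. well below peak,
# accounting for utilization) and renting for ~$2 per GPU-hour.
gpu_flops_per_s = 4e14
dollars_per_gpu_hour = 2.0

gpu_hours = flops / gpu_flops_per_s / 3600
cost = gpu_hours * dollars_per_gpu_hour

print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f}")
```

Even with generous assumptions, the estimate lands in the millions of GPU-hours and millions of dollars, which is the barrier the comment is pointing at: no individual researcher reproduces that from the "open" inputs alone.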

edit:

Remember when platforms ate the open web? We might be looking at a time where giants eat small software due to the cost and scale barriers.

nicce|1 year ago

Only if you were a billionaire. These models are getting so far out of reach for single researchers, or even traditional academic research groups.

jongjong|1 year ago

Maybe the road to heaven is paved with bad intentions.

bschmidt1|1 year ago

It's especially rich coming from Facebook, which was all for regulating everyone else in social media after it had already captured the market.

Everyone tries this. Apple tried it with lawsuits and patents, Facebook did it under the guise of privacy, OpenAI will do it under the guise of public safety.

There's almost no case where a private company is going to be able to successfully argue "they shouldn't be allowed, but we should." I wonder why so many companies try it these days. Just hire better people and win outright.

foobar_______|1 year ago

It has been clear from the beginning that Meta's supposed desire for open-source AI is just a coping mechanism for the fact that they got beaten out of the gate. This is an attempt to commoditize AI and reduce OpenAI/Google/whoever's advantage. It is effective, no doubt, but all this wankery about how noble they are for creating an open-source AI future is just bullshit.

ipsum2|1 year ago

You're wrong here. Meta released state-of-the-art open-source ML models prior to ChatGPT. I know a few successful startups (now valued at >$1B) that were built on top of Detectron2, a best-in-class image-segmentation model.

tomjen3|1 year ago

It’s because Facebook’s complementary good is content (its primary good is ad slots), and if somebody wins the AI race, they can pump out enough content to jumpstart a Facebook competitor with a ton of content.

CaptainFever|1 year ago

I feel the same way. I'm grateful to Meta for releasing libre models, but I also understand that this is simply because they're second in the AI race. The winner always plays dirty, the underdog always plays nice.