rkou | 1 year ago

AI safety is expensive, or even impossible, when you release your models for local inference (not behind an API). By doing so, Meta AI shifts the responsibility for highly general, highly capable AI models onto smaller developers, placing the ethics, safety, legal, and guard-rail responsibilities on innovators who want to innovate with AI (without having the knowledge or resources to handle those themselves) as an "open-source" hacking project.

While Mark claims his Open Source AI is safer because it is fully transparent and many eyes make all bugs shallow, the latest technical report mentions an internal, secret benchmark that had to be developed because the available benchmarks did not suffice at that level of capability. On child abuse generation, it mentions only that this was investigated, not the results of those tests or the conditions under which the model may have failed them. They shove all this liability onto the developer, while claiming any positive goodwill generated.

Their motivation to care about AI safety and ethics disappears entirely if fines punish not them, but those who used the library to build.

Reasonable for Meta? Yes. Reasonable for us to nod along when they misuse open source to accomplish this? No.

bee_rider | 1 year ago

I think this could be a somewhat reasonable argument for the position that open AI just shouldn't exist (there are counterarguments, but I'm not interested enough to do a back-and-forth on that). If Facebook can't produce something safe, maybe they shouldn't release anything at all.

But I think, in that case, the failing is not in declining to take liability for what other people do with their tool. It is in producing the tool in the first place.

rkou | 1 year ago

Perhaps open AI simply can't exist (it is too hard and expensive to coordinate/crowd-source compute and hardware). If it can, then, to me, it should and would.

OpenAI produced GPT-2 but did not release it, because it couldn't be made safe under those conditions, unmonitored and unpatchable. So it put the model behind an API and owned its responsibility.

I don't take issue with Meta's business methods and can respect its cunning moves. I take issue with things like arguing that "Open Source AI improves safety", which keeps us from focusing on the legitimate costs and benefits of releasing advanced, ever-so-slightly risky AI into the hands of novices and bad actors. It would be a failure on my part if I let myself get rigamaroled.

One should ideally own that hypothetical 3% failure rate at denying CSAM requests when arguing for releasing your model anyway. Heck, ignore it for all I care, but they damn well know how much that rate goes up once the model is jailbroken. But claiming instead that your open model release will make the world a better place for children's safety, so there is no need to even have this difficult discussion?

cornholio | 1 year ago

This strange obsession with synthetic CSAM as the absolute epitome of "AI safety" says more about the collective phobias and sensibilities of our society than about any objective "safety" issues.

Of course, from a PR perspective, it would be extremely "unsafe" for a publicly traded company to release a tool that can spew out pedophile literature, to the point of being an existential threat. Twitter was economically cancelled for much less. But as far as dangerous AI goes, it's one of the most benign and inconsequential failure modes.