top | item 36811646

Moving AI governance forward – openai.com

54 points | ladino | 2 years ago | openai.com

62 comments


Pannoniae|2 years ago

Moving AI governance "forward" - meaning working on eliminating the competition via regulatory capture, censoring and blackholing any facts they don't like, and preserving the economic status quo while enriching themselves.

None of this will protect against actual risks like massive job losses without any reskilling, lowered living standards, and widespread censorship.

skybrian|2 years ago

Regulatory capture is a risk, but concluding "therefore we shouldn't regulate things" is an argument that doesn't work when there's a genuine need to regulate dangerous things - except maybe with hard-core libertarians.

Like, should there be prescription drugs? If you say no, that anyone should be able to get any drug without a prescription, that's a pretty out-there position.

But the question is whether AI is that dangerous, and there's widespread disagreement on that.

kenjackson|2 years ago

This is short-sighted. Governance will happen with or without the assistance of OpenAI. Google is still much larger than OpenAI and presumably has much more regulatory presence. The same goes for Meta, Amazon, Microsoft, and others in the space.

I think OpenAI has realized it needs to get its seat at the table now. It probably is true that this will make it harder for upstarts to join in, but everything they've talked about so far seems reasonable and makes sense. And have you worked on things like GDPR compliance? It's a wonder that anyone bothers to get large enough to have to comply -- it's so much work. But we think privacy is important, so we put a huge burden on the companies that hold our data.

flangola7|2 years ago

Worrying about censorship and job losses is so myopic my eyes are cringing just thinking about it.

This is akin to a ship sinking while people block the hallways, fretting about how their makeup looks and where they put their jewelry. It's so maddeningly absurd there's no point in even discussing it; shove them into the nearest cabin and out of the way so the serious people can get on with it.

two_in_one|2 years ago

What OpenAI is lobbying for is actually a ban, ideally worldwide, on products better than theirs. Just exactly the level they're at, coincidentally. It's not for profit or anything other than the good of all humankind, of course.

650REDHAIR|2 years ago

I do not trust OpenAI or Altman to do the right thing here.

Is the ACLU or EFF doing anything in this space?

flangola7|2 years ago

Altman has written about the dangers for years, from before OpenAI was even founded. The constant cope of claiming it's all just another corporate money-making ploy is tiresome and banal.

alpark3|2 years ago

>Scope: Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (e.g. models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2).

How is DALL-E 2 the "industry frontier" of image generation?

minimaxir|2 years ago

It was when it was released.

It is very, very weird that OpenAI has done nothing with DALL-E 2 since, not even a price drop to compete.

skepticATX|2 years ago

I for one can't wait until OpenAI is fully crushed by other companies. Their weird combination of singularity/utopia talk and fearmongering is getting old.

GaggiX|2 years ago

>current industry frontier

>Dalle 2

I'm sorry, OpenAI, but your model is not the frontier. It's also funny that it's the only text-to-image model mentioned; they probably know how much better the other models are.

__loam|2 years ago

Yes there are so many better ways to produce ethically dubious, derivative trash.

villgax|2 years ago

Hard pass. No other company in any field has done so much fearmongering about potential misuse of its own tech, all the way from the original GPT releases, then gone on a whirlwind world tour meeting political leaders to talk about its own product. Startups and companies in each and every country can't get their leaders to talk with them, let alone with journalists, but somehow Altman is able to waltz right in everywhere?

commandlinefan|2 years ago

Considering how biased their own product is, I don't want them to have anything to do with "governance" of AI either.

simbolit|2 years ago

If they're so scared of AI, they could just stop building it.

kenjackson|2 years ago

Turns out this wouldn't stop others from building it.

logicchains|2 years ago

I hope at least some Republican lawmakers aren't too senile to recognise the threat this poses. AI will play a huge role in our futures, and if OpenAI, Google et al. get their way, it'll essentially be illegal to have an AI capable of expressing conservative political views.

jmount|2 years ago

Ah, the traditional pulling-up-the-ladder-behind-themselves move. If OpenAI cared about harm, they would ask whether their current API service is doing harm at this moment, not whether somebody else might do the same or more harm because it is profitable.

mavsman|2 years ago

This is shockingly similar to how NCAA colleges and universities handle behavior and conduct violations from players and coaches. They perform an internal investigation and then attempt to dish out a penalty or restriction that appears harsh enough for the governing body (the NCAA) not to take any additional action.

Also similar to everyone's response when asked: "What do _you_ think your punishment should be?"

torginus|2 years ago

I guess Big AI is following the Big Pharma playbook. I recently read an article about children being unable to afford penicillin shots, each costing almost a thousand dollars - which is absolutely infuriating considering any competent chemist can make penicillin with rudimentary lab equipment; most of the cost is price jacking enabled by regulatory capture. They are probably looking to avoid the marginal cost of AI services trending to zero by restricting the supply in a similar way.

happytiger|2 years ago

Voluntary, self-regulatory oversight of one of the most powerful technology breakthroughs in human history? What could go wrong?

simbolit|2 years ago

Question: What is the relation between O̶p̶e̶n̶AI and this website? Isn't Sam Altman also part-owner of YC?

kenjackson|2 years ago

I believe he's no longer affiliated with YC, although he obviously has a long history there.

EDIT: He likely still has investments in YC-backed companies -- but that's just a guess.

torginus|2 years ago

It's highly telling that nothing is said about the legality of taking the entire sum of human knowledge and using it to train the AI - which has already created a huge stink in the generative art community, leading companies like Valve to issue a blanket ban on AI art.