top | item 38373684

lwneal | 2 years ago

The relevant passage from the paper co-written by board member Helen Toner:

"OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls...

A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company." A careful look at the company's decision-making reveals that this commitment goes beyond words."

[1] https://cset.georgetown.edu/publication/decoding-intentions/


murakamiiq84 | 2 years ago

I think this is heavily editorialized. If you look at the 3 pages in question that the quotes are pulled from (28-30 in doc, 29-31 in pdf), they appear to be given as examples in pretty boring academic discussions explicating the theories of costly signaling in the context of AI. It also has lines like:

"The system card provides evidence of several kinds of costs that OpenAI was willing to bear in order to release GPT-4 safely.These include the time and financial cost..."

"Returning to our framework of costly signals, OpenAI’s decision to create and publish the GPT4 system card could be considered an example of tying hands as well as reducible costs. By publishing such a thorough, frank assessment of its model’s shortcomings, OpenAI has to some extent tied its own hands—creating an expectation that the company will produce and publish similar risk assessments for major new releases in the future. OpenAI also paid a price ..."

"While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety"

And the conclusion:

"Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed. Taken together, these two case studies therefore provide further evidence that signaling around AI may be even more complex than signaling in previous eras."

hn_throwaway_99 | 2 years ago

> I think this is heavily editorialized.

"Editorialized"?? It's a direct quote from the paper, and additional context doesn't alter its perceived meaning.

murakamiiq84 | 2 years ago

Note that the quote about Anthropic is about Anthropic's desire to be perceived as a company that values safety — not a direct claim that Anthropic actually is safe, or even that it genuinely desires to value safety.

hn_throwaway_99 | 2 years ago

You must have interpreted the final sentence "A careful look at the company's decision-making reveals that this commitment goes beyond words" very differently than I did, or else you're splitting hairs in making your distinction.

ryukoposting | 2 years ago

This reads more like ad copy than a research paper. I'd have been pissed too if I were Altman.