lwneal | 2 years ago
"OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls...
A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company." A careful look at the company's decision-making reveals that this commitment goes beyond words."
[1] https://cset.georgetown.edu/publication/decoding-intentions/
murakamiiq84 | 2 years ago
"The system card provides evidence of several kinds of costs that OpenAI was willing to bear in order to release GPT-4 safely.These include the time and financial cost..."
"Returning to our framework of costly signals, OpenAI’s decision to create and publish the GPT4 system card could be considered an example of tying hands as well as reducible costs. By publishing such a thorough, frank assessment of its model’s shortcomings, OpenAI has to some extent tied its own hands—creating an expectation that the company will produce and publish similar risk assessments for major new releases in the future. OpenAI also paid a price ..."
"While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety"
And the conclusion:
"Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed. Taken together, these two case studies therefore provide further evidence that signaling around AI may be even more complex than signaling in previous eras."
hn_throwaway_99 | 2 years ago
"Editorialized"?? It's a direct quote from the paper, and additional context doesn't alter its perceived meaning.