awal2 | 4 years ago
1. A lot of people want these systems to be open, and don't want the power that comes along with them to be locked up in the hands of a few rich people.
2. But some people also think these systems are powerful and don't want them in the hands of bad-faith actors (spammers, scammers, propagandists).
3. A lot of people also want these systems to be weakly safe and not have negative externalities when used in good faith (avoid spitting out racism when prompted with innocent questions). This is already hard.
4. Even better would be for the system to be strongly safe and be really hard to use for bad-faith purposes, but this seems unreasonably hard.
5. It's often easier to develop the "unsafe" version of something first and then figure out the details of safety once it's actually able to do something. This is basically where OpenAI is now.
6. The details around liability for the harms caused by this kind of thing are not clear at all.
So OpenAI is in this position where it has built this thing that is not yet weakly safe. People have very different ideas about how potentially harmful this could be, ranging from very dismissive ("there's tons of racism on the internet already, who cares?") to the very not dismissive ("rich white tech people are exacerbating inequities by subjecting us to their evil racist AI systems!").
What should OpenAI do with this thing? Keep it locked up so that it doesn't hurt anybody? Release it to the world and push accountability onto the end users? Brush aside the ethical questions and use the hype generated by the above tensions to get as rich as possible? So far their answer seems to be somewhere cautiously in the middle.
My personal opinion is that these questions will be very important for real AGI, but this ain't it, so the issues may not be as bad as they seem. On the other hand, maybe this is a useful test case for how to deal with these problems for when we do actually get there? Also, from past experience, it's probably not a good idea for them to allow open access to something that spits out unprompted racism. I would like to see OpenAI be more open, but I also realize that it's very hard for them to make any decision in this space without making people unhappy and generating a lot of bad press and accusations.
abetusk | 4 years ago
Openness, in the libre/free sense, also means minimizing gatekeeping and not putting the creators in a position of making judgements about what's good and what's not.
All the other points you list are ancillary. OpenAI is a prime example of "open-washing". OpenAI earned goodwill from the community by implying they were open (free/libre) and then hid behind all the other points you listed to avoid committing to openness.
If they want to have a discussion about the moral hazard of AI, and their business model is to create a walled garden where only approved scientists, engineers, and researchers have access to the data and code, that's their prerogative — just don't name it "OpenAI".
dcposch | 4 years ago
This is not a criticism, just an observation of where we're at and how dramatically attitudes have shifted.
kevingadd | 4 years ago