top | item 30936771


rndphs | 3 years ago

This is going to be mostly a rant on OpenAI's "safer than thou" approach to safety, but let me start by saying that I think this technology is really cool, amazing, powerful stuff. DALL-E (and DALL-E 2) is an incredible advance over GANs, and it will no doubt have many positive applications. It's simply brilliant. I am someone who has been interested in and has followed the progress of ML-generated images for nearly a decade. Almost unimaginable progress has been made in this field in the last five years.

Now the rant:

I think if OpenAI genuinely cared about the ethical consequences of this technology, they would realise that any algorithm they release will be replicated by others within some short period of time (a year or two). At that point, the cat is out of the bag and there is nothing they can do to prevent abuse. So really all they are doing is delaying abuse, not stopping it.

I think their strong "safety" stance has three functions:

1. Legal protection

2. PR

3. Keeping their researchers' consciences clear

I think number 3 is dangerous because it lulls researchers into the false belief that their technology can or will be made safe. This way OpenAI can continue to harness bright minds that no doubt have ethical leanings to create things they otherwise wouldn't have.

I think OpenAI are trying to have their cake and eat it too. They are accelerating the development of potentially very destructive algorithms (and profiting from it in the process!) while trying to absolve themselves of the responsibility. Putting bandaids on a tumour is not going to matter in the long run. I'm not necessarily saying that these algorithms will be widely destructive, but they certainly have the potential to be.

OpenAI's safety approach ultimately boils down to gatekeeping compute power, which is just gatekeeping via capital. Anyone with sufficient money can replicate their models and bypass every single one of their safety constraints. Basically, they are only deterring bad actors who lack capital, and only for a limited time at that.

These models cannot be made safe as long as they are replicable.

Producing scientific research requires making your results replicable.

Therefore, there is no way to develop abusable technology safely. As a researcher, you will have blood on your hands if things go wrong.

If you choose to continue research knowing this, that is your decision. But don't pretend that you can make the algorithms safer by sanitizing models.


visarga | 3 years ago

OpenAI is not the only AI shop. If they hadn't made DALL-E, someone else would have, and would control its release as they see fit.