top | item 39467786

memossy | 2 years ago

As the leader in open image models, it is incumbent upon us, as the models reach this level of quality, to take seriously how we can release models that are both open and safe from legal, societal, and other perspectives.

Not engaging with this will indeed lead to bad laws, sanctions, and more, as well as a failure of our societal obligation to ensure this amazing technology is used for outcomes as positive as possible.

Stability AI was set up to build benchmark open models of all types in a proper way. This is why, for example, we are one of the only companies to offer opt-out of datasets (Stable Cascade and SD3 honour opt-outs), and why we have given millions of supercompute hours in grants to safety-related research, among other things.

Smaller players with less uptake and scrutiny don't need to worry as much about some of these complex issues. It is quite a lot to keep on top of; we're doing our best.

GenerWork | 2 years ago

>it is incumbent upon us as the models get to this level of quality to take seriously how we can release open and safe models from a legal, societal and other considerations.

Can you define what you mean by "societal and other considerations"? If not, why not?

memossy | 2 years ago

I could, but I won't, as it's legal stuff :)

zmgsabst | 2 years ago

“We need to enforce our morality on you, for our beliefs are the true ones — and you’re unsafe for questioning them!”

You sound like many authoritarian regimes.

memossy | 2 years ago

I mean open models yo