This question narrows the scope of "safety" to something less than what the people at SD, or even probably what OP, care about. _Non-random_ CSAM requests targeting potentially real people are the obvious answer here, but even non-CSAM sexual content is probably a threat. I can understand frustration with it currently going overboard on blurring, but removing safety checks altogether would result in SD quickly becoming associated mainly with porn, which I'm sure Stability AI wants to avoid for the safety of their company. Add to that, parents who want to keep their kids from generating sexual content would now need to prevent them from using the tool at all, since it could produce such content randomly, effectively limiting SD to users 18+ (which is probably something else Stability AI does not want to deal with).
It's definitely a balance between going overboard and having no restrictions, though. I haven't used SD in several months now, so I'm not sure where that balance sits at the moment.
int_19h|2 years ago
To whom? SD's reputation, perhaps - but that ship has already sailed with 1.x. That aside, why is generated porn threatening? If anything, anti-porn crusaders ought to rejoice, given that it doesn't involve actual humans performing any of those acts.
dyslexit|2 years ago
You can have your own opinion on it, but surely you can see the issue here?
BeFlatXIII|2 years ago