I get a slightly uncomfortable feeling with this talk about AI safety. Not in the sense that there is anything wrong with it (maybe there is, maybe there isn't), but in the sense that I don't understand what people are talking about when they talk about safety in this context. Could someone explain like I have Asperger (ELIA?) what this is about? What are the "bad actors" possibly going to do? Generate (child) porn / images with violence etc. and sell them? Pollute the training data so that racist images pop up when someone wants to get an image of a white pussycat? Or produce images that contain vulnerabilities, so that when you open one in your browser you get compromised? Or what?
vprcic|2 years ago
https://arstechnica.com/information-technology/2024/02/deepf...
reaperman|2 years ago
> explain like I have ~~Asperger (ELIA?)~~ limited understanding of how the world really works.
The AI is being limited so that it cannot produce any "offensive" content which could end up on the news or go viral and bring negative publicity to Stability AI.
Viral posts containing generated content that brings negative publicity to Stability AI are fine as long as they're not "offensive". For example, the wrong number of fingers is fine.
There is no comprehensive, definitive list of things that are "offensive". Many of them we are aware of, e.g. nudity, child porn, depictions of Muhammad. But for many things it cannot be known a priori whether the current zeitgeist will find them offensive or not (e.g. certain depictions of current political figures, like Trump).
Perhaps they will use AI to help decide whether something that does not explicitly appear on the blocklist might be offensive. They will definitely keep updating the "AI Safety" measures to cover additional offensive edge cases.
It's important to note that "AI Safety", as defined above (cannot produce any "offensive" content which could end up on the news or go viral and bring negative publicity to Stability AI) is not just about facially offensive content, but also about offensive uses for milquetoast content. Stability AI won't want news articles detailing how they're used by fraudsters, for example. So there will be some guards on generating things that look like scans of official documents, etc.
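A minimal sketch of what that layered filtering might look like (everything here is hypothetical: the patterns, the classifier stub, and the threshold are illustrative, not Stability AI's actual pipeline). Known-bad categories get caught by a cheap blocklist, and a learned moderation model acts as a fallback for content the list can't anticipate:

```python
import re

# Hypothetical blocklist. Real deployments use much larger, frequently
# updated lists (plus ML classifiers), not a handful of regexes.
BLOCKLIST_PATTERNS = [
    re.compile(r"\b(nude|nudity)\b", re.IGNORECASE),
    re.compile(r"\b(passport|driver'?s licen[cs]e)\b", re.IGNORECASE),
]

def classifier_score(prompt: str) -> float:
    """Stand-in for a learned moderation model returning the estimated
    probability that a prompt is 'offensive'. A real system would call
    an in-house or third-party classifier here."""
    return 0.0  # placeholder: treat whatever the blocklist misses as safe

def is_allowed(prompt: str, threshold: float = 0.8) -> bool:
    # Layer 1: fast, auditable blocklist for known-bad categories.
    if any(p.search(prompt) for p in BLOCKLIST_PATTERNS):
        return False
    # Layer 2: classifier fallback for edge cases the list can't anticipate.
    return classifier_score(prompt) < threshold

print(is_allowed("a white pussycat"))         # True
print(is_allowed("a scan of a US passport"))  # False
```

The split is the point: the blocklist is cheap and easy to audit, while the classifier (and the threshold you set on it) is where the fuzzy "could this end up on the news?" judgment lives, and it's the part that gets updated every time a new kind of embarrassment goes viral.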
Tadpole9181|2 years ago
Excuse me?
beefield|2 years ago