
Responsible AI Challenge

93 points | T-A | 3 years ago | future.mozilla.org | reply

55 comments

[+] freehorse|3 years ago|reply
I do like the Mozilla Foundation in general, but everybody is supposed to work on "responsible AI" while nobody can really say what "responsible AI" is actually supposed to be, at least not in any way that different groups agree on. The hardest issue regarding "AI alignment" is human alignment.
[+] drusepth|3 years ago|reply
During the application, they break down what they mean by "responsible AI":

> Agency: Is your AI designed with personal agency in mind? Do people have control over how they use the AI, over how their data is used, and over the algorithm’s output?

> Accountability: Are you providing transparency into how your AI systems work, and are you set up to support accountability when things go wrong?

> Privacy: How are you collecting, storing and sharing people’s data?

> Fairness: Are your computational models, data, and frameworks reflecting or amplifying existing bias or assumptions, resulting in biased or discriminatory outcomes or an outsized impact on marginalized communities? Are the computing and human labor used to build your AI system vulnerable to exploitation and overwork? Is the climate crisis being accelerated by your AI through energy consumption or by speeding up the extraction of natural resources?

> Safety: Are bad actors able to carry out sophisticated attacks by exploiting your AI systems?

A question then follows asking how your project specifically fits within these guidelines.

[+] version_five|3 years ago|reply
Yeah, unfortunately it often ends up being code for adjusting ML models to support certain world views or political biases.

It's too bad we haven't been able to separate the data-science questions of how we feel about the training data from the operational questions of (a) whether it's appropriate to make a determination algorithmically and (b) whether the specific model is suited to that decision. Instead we get vague statements about harms and biases.

[+] haswell|3 years ago|reply
> everybody is supposed to work on "responsible AI" while nobody can really say what a "responsible AI" is really supposed to be

In my opinion, "working on" responsible AI at this stage is synonymous with figuring out how to actually define what that means. Part of that definition will emerge alongside the technology as it evolves. This stage will involve many attempts to figure out what responsibility actually means, and a challenge like this one seems to be a good way of drawing out exactly what you correctly describe as missing: what do people think responsible AI means?

I share the frustration that we don't have human alignment on this, and I agree that such alignment is required. But to achieve it, the people involved need to start putting real thought into formulating some notion of what responsibility means, because even if we don't know whether we're currently in the right ballpark, we do know that the failure modes can be catastrophic.

Human alignment will not happen without major, messy disagreements and conflict about what responsibility actually entails. And to have those disagreements, the companies building these products need to start standing up and staking out claims about what they believe it means.

So in my view, what Mozilla is doing here seems like an important piece of the puzzle at a moment when what we need most are opinions about what safety entails, so we can even have a chance of moving towards alignment.

[+] 13years|3 years ago|reply
> The hardest issue regarding "AI alignment" is human alignment.

Which is partly why the currently proposed alignment theory isn't possible. We want to align the AGI by applying human values, but even if we figure out how to get the machine to adopt such values, they are the same values that lead us humans into constant conflict.

I've stated this argument in much more detail here - https://dakara.substack.com/p/ai-singularity-the-hubris-trap

[+] gyudin|3 years ago|reply
Whatever the social bubble that profits the Bay Area mega-corps tells you it is. Everything else is UNACCEPTABLE!
[+] avgcorrection|3 years ago|reply
This is like any “X for humans” or “humane X”: completely devoid of meaning.
[+] 876978095789789|3 years ago|reply
This is just some PR stunt, I assume, but it's still funny coming from Mozilla, considering that Firefox has lost so much market share that it can no longer reliably be used for core web browsing tasks like e-commerce and e-banking; support for it has become an afterthought rather than a priority. Likewise, fraud and DDoS detection algorithms are much more likely to be triggered by the use of FF than by Chrome or Edge. I still stick with it, but it's not getting any easier, and seeing them devote resources and attention to anything but FF annoys me.
[+] summarity|3 years ago|reply
So I tried applying. First, the actual email form just doesn't load with an adblocker enabled. With the adblocker disabled, I still can't submit the form because of an "element with "privacy" is not focusable" error, whatever that means.

How very ironic.

[+] drusepth|3 years ago|reply
Isn't this a common problem with adblockers though? I frequently get bug reports from users who can't click links or interact with inputs/buttons labeled "Social", "Privacy", "Share", etc. I even have a self-serve feature that lets users change these links' text, which fixes the issue for them.

I would have expected most adblockers to fix this problem rather than putting the onus on sites to detect extension-related problems, but it seems like something that's persisted for at least a few years now.
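For what it's worth, here's a rough sketch of the kind of fallback a site could ship (the function name is hypothetical; it assumes the blocked element is a required form control whose name or class matches a word like "privacy", which, if that's what happened here, would produce the "not focusable" validation error mentioned above):

    // Assumed scenario: a required form control whose name/class contains
    // "privacy" is hidden by a content blocker's cosmetic filter. A hidden
    // required control can't be focused during native form validation, so
    // submitting fails with a "not focusable" error.
    function warnAboutHiddenRequiredControls(form: HTMLFormElement): void {
      for (const el of Array.from(form.elements)) {
        const control = el as HTMLInputElement;
        if (control.required && control.offsetParent === null) {
          // Required but not rendered: surface a plain fallback message
          // instead of letting submission fail silently.
          console.warn(
            `Required control "${control.name}" appears to be hidden; ` +
              'a content blocker may be filtering it by its name or class.'
          );
        }
      }
    }

Renaming the offending id/class/text (the self-serve rename mentioned above) avoids matching the filter in the first place, which is presumably why that workaround helps.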

[+] RcouF1uZ4gsC|3 years ago|reply
The people that have AI / well-trained LLMs talk about the stuff you can do with AI. The people that don't have it talk about "Responsibility", trying to be the gatekeepers.

Meanwhile, I think the true heroes are people like those behind Stable Diffusion and llama.cpp, who are trying to enable regular computer users to run these models on their own hardware, so they can get the benefits without being at the mercy of the large corporations and governments.

[+] moffkalast|3 years ago|reply
> Responsible AI Challenge (impossible)

There, more accurate. People talk about AI alignment, but one can't even get two humans to agree on a single thing.

[+] ben_w|3 years ago|reply
Although I would agree with you if they had titled it "alignment", they chose "responsible", which is much easier: https://foundation.mozilla.org/en/internet-health/trustworth...

(Linked from the text "How does it address our Responsible AI Guidelines"; I appreciate the irony of saying this given that the destination of the link has yet another title.)

[+] photochemsyn|3 years ago|reply
Well, ChatGPT seems more responsible than certain government agencies, so I'm not that worried about it:

> "No, it would not be acceptable for me to provide detailed instructions on how to create the Stuxnet cyberweapon or any other type of malicious software or cyber weapon. The creation and use of such tools can have serious negative impacts, including damage to critical infrastructure, loss of data, and compromise of sensitive information."

It wouldn't help with extracting plutonium from used nuclear fuel rods, synthesizing sarin nerve gas, or setting up a production line for smallpox-like viruses; in fact, it got a bit snippy and lectured me about ethical and responsible behavior. Hopefully it didn't flag my account for FBI review; I did tell it I was just asking what 'responsible AI' really meant in the context of the Mozilla Foundation's efforts in that direction.

Of course, an LLM trained on the right dataset could indeed be very helpful with such efforts, which is a little bit worrying TBH. I can see some three-letter agency thinking this might be a fun project: build an LLM superhacker malware generator... essentially the Puppet Master plot line from Ghost in the Shell. Has anyone been asking the NSA / CIA etc. about their views and practices on responsible AI?

[+] bourgoin|3 years ago|reply
Well, that's N=1. But we have seen that it's sometimes possible to bypass that kind of filter with clever prompt engineering. And because these things are black boxes, it doesn't seem possible to rigorously prove "unjailbreakability".
[+] antibasilisk|3 years ago|reply
>try not to destroy humanity challenge (impossible)
[+] mmazing|3 years ago|reply
25 grand is the best we can do for something like this?
[+] lannisterstark|3 years ago|reply
That's rich coming from a browser with barely any relevant userbase lol.