cbHXBY1D | 1 year ago
Google has a set of AI principles: https://ai.google/responsibility/principles/
These include:
> AI applications we will not pursue
> In addition to the above objectives, we will not design or deploy AI in the following application areas:
> 1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
> 2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
> 3. Technologies that gather or use information for surveillance violating internationally accepted norms.
> 4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
The contract goes against those principles. Employees who rightfully speak out about this are being stonewalled.
pavon | 1 year ago
cbHXBY1D | 1 year ago
Are you aware of the recent revelations that the Israeli military is using AI to indiscriminately kill people in their homes? https://www.972mag.com/lavender-ai-israeli-army-gaza/
Do you have any evidence that they aren't using Project Nimbus for this? Spoiler: you do not - none of us do.