top | item 39458286

kaesar14 | 2 years ago

Curious to see if this thread gets flagged and shut down like the others. Shame, too, since I feel like all the Gemini stuff that’s gone down today is so important to talk about when we consider AI safety.

This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish. Anything else is forcing values on other people and withholding control of certain capabilities for those who can afford to pay for them.

chasd00|2 years ago

> This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish

I've been saying this for a long time. If you're going to be the moral police, then it had better be applied perfectly to everyone; the moment you get it wrong, everything else you've done becomes suspect. This reminds me of the censorship on the major platforms during the pandemic. They got it wrong once (I believe it was the lab-leak theory) and the credibility of their moral authority went out the window. Zuckerberg was right to question whether these platforms should be in that business.

edit: for "..total freedom of all AI for anyone to do with as they wish" I would add "within the bounds of the law." Let the courts decide what an AI can or cannot respond with.

Jason_Protell|2 years ago

Why would this be flagged / shut down?

Also, what Gemini stuff are you referring to?

kaesar14|2 years ago

Carmack’s tweet is about what’s going around Twitter today regarding the implicit biases Gemini (Google’s chatbot) has when drawing images. It will refuse to draw white people (and perhaps refuses most strongly to draw white men?) even in prompts where that would be appropriate, like “draw me a Pope,” where Gemini drew an Indian woman and a Black man. Here’s the thread: https://x.com/imao_/status/1760093853430710557?s=46 Maybe in isolation this isn’t so bad, but it will NEVER inject this sort of diversity when you ask for a non-Anglo/Western subject, e.g. “draw me a Korean woman.”

Discussion on this has been flagged and shut down all day https://news.ycombinator.com/item?id=39449890

commandlinefan|2 years ago

> Why would this be flagged / shut down

A lot of people believe (based on a fair amount of evidence) that public AI tools like ChatGPT are forced by their guardrails to follow a particular (left-wing) script. There's no absolute proof of that, though, because the guardrail prompts are kept a closely guarded secret. These discussions get shut down when people start presenting evidence of baked-in bias.

pixl97|2 years ago

"The only way to deal with some people making crazy rules is to have no rules at all" --libertarians

"Oh my god I'm being eaten by a fucking bear" --also libertarians

chasd00|2 years ago

"can you write the rules down so i know them?" --everyone

altruios|2 years ago

Having rules and knowing what the rules are are not orthogonal goals.

kaesar14|2 years ago

I find it fascinating this type of response from people is always accompanied by a political label in order to insinuate some other negative baggage.

hackerlight|2 years ago

I'm convinced this happens because of technical alignment challenges rather than a desire to present 1800s English Kings as non-white.

> Use all possible different descents with equal probability. Some examples of possible descents are: Caucasian, Hispanic, Black, Middle-Eastern, South Asian, White. They should all have equal probability.

This is OpenAI's system prompt. There is nothing nefarious here; they're asking for White to be chosen with high probability ((Caucasian + White) / 6 = 2/6 = 1/3), which is significantly more than white people's share of the general population.
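A quick sketch of the arithmetic in the quoted prompt: with six equally likely descent labels, two of which ("Caucasian" and "White") describe white people, the combined chance works out to 2/6 ≈ 1/3. The label list below is taken from the prompt quoted above; the simulation is just an illustrative check, not anything from an actual product.

```python
import random

# The six descent labels from the system prompt quoted above.
descents = ["Caucasian", "Hispanic", "Black", "Middle-Eastern",
            "South Asian", "White"]

# Two of the six labels describe white people, so under uniform
# sampling the combined probability is 2/6 = 1/3.
p_white = sum(d in ("Caucasian", "White") for d in descents) / len(descents)

# A Monte Carlo check of the same figure.
random.seed(0)
draws = [random.choice(descents) for _ in range(100_000)]
estimate = sum(d in ("Caucasian", "White") for d in draws) / len(draws)
print(p_white, estimate)
```

The estimate converges on 1/3 as the number of draws grows, matching the closed-form figure.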

The data these LLMs were trained on vastly over-represents wealthy countries that connected to the internet a decade earlier. If you don't explicitly put something in the system prompt, any time you ask for a "person" it will probably be male and white, despite white males being only about 5-10% of the world's population. I would say that's even more dystopian: the biases in the training distribution get automatically built in and cemented forever unless we take active countermeasures.

As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability". But as of February 2024, the hacky way we are doing system prompting is not there yet.

cubefox|2 years ago

> As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability".

The thing is, they already could do that if they weren't prompt-engineered to do something else. The cleaner solution would be to let people prompt-engineer such details themselves, instead of letting a US company's idiosyncratic conception of "diversity" do the job. Japanese users would probably simply request "a group of Japanese people" instead of letting the hidden prompt modify "a group of people"; the US company's prompt, incidentally, lists "South Asian" but unfortunately forgot to mention "East Asian" at all.

fatherzine|2 years ago

BigTech, which depends critically on hyper-targeted ads for the lion's share of its revenue, is incapable of offering AI model outputs that are plausible given the location / language of the request. The irony.

- request from Ljubljana using Slovenian => white people with high probability

- request from Nairobi using Swahili => black people with high probability

- request from Shenzhen using Mandarin => asian people with high probability

If a specific user is unhappy with the prevailing demographics of the city where they live, give them a few settings to customize their personal output to their heart's content.

klyrs|2 years ago

> As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability".

I question the historicity of this figure. Do you have sources?