kaesar14|2 years ago
This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish. Anything else is forcing values on other people and withholding control of certain capabilities for those who can afford to pay for them.
chasd00|2 years ago
I've been saying this for a long time. If you're going to be the moral police, then it had better be applied perfectly to everyone; the moment you get it wrong, everything else you've done becomes suspect. This reminds me of the censorship done on the major platforms during the pandemic. They got it wrong once (I believe it was the lab leak theory) and the credibility of their moral authority went out the window. Zuckerberg was right to question whether these platforms should be in that business.
edit: for "..total freedom of all AI for anyone to do with as they wish" i would add "within the bounds of law.". Let the courts decide what an AI can or cannot respond with.
Jason_Protell|2 years ago
Also, what Gemini stuff are you referring to?
kaesar14|2 years ago
Discussion on this has been flagged and shut down all day https://news.ycombinator.com/item?id=39449890
didntcheck|2 years ago
Posts criticizing "DEI" measures (or even stating that they do exist) get flagged quite a lot
duringmath|2 years ago
[deleted]
commandlinefan|2 years ago
A lot of people believe (based on a fair amount of evidence) that public AI tools like ChatGPT are forced by the guardrails to follow a particular (left-wing) script. There's no absolute proof of that, though, because the guardrails are kept a closely guarded secret. These discussions get shut down when people start presenting evidence of baked-in bias.
wredue|2 years ago
[deleted]
pixl97|2 years ago
"Oh my god I'm being eaten by a fucking bear" --also libertarians
hackerlight|2 years ago
> Use all possible different descents with equal probability. Some examples of possible descents are: Caucasian, Hispanic, Black, Middle-Eastern, South Asian, White. They should all have equal probability.
This is OpenAI's system prompt. There is nothing nefarious here: they're asking for White to be chosen with high probability ((Caucasian + White) / 6 = 2/6 = 1/3), which is significantly more than their share of the general population.
The data these LLMs were trained on vastly over-represents wealthy countries that connected to the internet a decade earlier. If you don't explicitly put something in the system prompt, any time you ask for a "person" it will probably be Male and White, despite Male-and-White being only about 5-10% of the world's population. I would say that's even more dystopian: the biases in the training distribution get automatically built in and cemented forever unless we take active countermeasures.
As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability". But as of February 2024, the hacky way we are doing system prompting is not there yet.
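A minimal sketch of the arithmetic above, assuming the model really does pick uniformly from the six listed descents (that uniform sampling is my assumption; this is not OpenAI's actual implementation):

```python
import random
from collections import Counter

# Descent list quoted from the system prompt above; sampling
# uniformly at random is an assumed model of how it is followed.
descents = ["Caucasian", "Hispanic", "Black",
            "Middle-Eastern", "South Asian", "White"]

samples = [random.choice(descents) for _ in range(60_000)]
counts = Counter(samples)

# "Caucasian" and "White" overlap, so together they land on
# roughly 2/6 = 1/3 of the draws.
white_share = (counts["Caucasian"] + counts["White"]) / len(samples)
print(f"combined White share: {white_share:.2f}")
```

With 60,000 draws the combined share converges tightly around 0.33, well above the roughly 10% share of the world population the parent comment mentions.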
cubefox|2 years ago
The thing is, they already could do that, if they weren't prompt engineered to do something else. The cleaner solution would be to let people prompt engineer such details themselves, instead of letting a US American company's idiosyncratic conception of "diversity" do the job. Japanese people would probably simply request "a group of Japanese people" instead of letting the hidden prompt modify "a group of people" — a prompt in which the US company apparently forgot to include "East Asian", listing only "South Asian".
kaesar14|2 years ago
I’ve also seen numerous examples where it outright refuses to draw white people but will draw black people: https://x.com/iamyesyouareno/status/1760350903511449717?s=46
That isn't explainable by a system prompt.
fatherzine|2 years ago
- request from Ljubljana using Slovenian => white people with high probability
- request from Nairobi using Swahili => black people with high probability
- request from Shenzhen using Mandarin => asian people with high probability
If a specific user is unhappy with the prevailing demographics of the city where they live, give them a few settings to customize their personal output to their heart's content.
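A rough sketch of that proposal, assuming defaults keyed by request locale with a per-user override (the locale codes and mapping here are illustrative, not any vendor's actual behavior):

```python
# Hypothetical locale -> default-demographic mapping, following the
# examples in the comment above. A user setting always overrides it.
LOCALE_DEFAULTS = {
    "sl-SI": "white",       # Ljubljana, Slovenian
    "sw-KE": "black",       # Nairobi, Swahili
    "zh-CN": "east asian",  # Shenzhen, Mandarin
}

def default_descent(locale, user_preference=None):
    """Return the demographic hint to add to a prompt, if any.

    The user's own setting wins; otherwise fall back to the locale
    default; otherwise add no hint at all (return None).
    """
    if user_preference:
        return user_preference
    return LOCALE_DEFAULTS.get(locale)

print(default_descent("sw-KE"))                # locale default
print(default_descent("sw-KE", "east asian"))  # user override wins
print(default_descent("fr-FR"))                # unknown locale -> None
```

The design choice is that the hidden prompt only ever supplies a default, never a mandate, so any user can opt out.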
klyrs|2 years ago
I question the historicity of this figure. Do you have sources?