top | item 47176743

HDThoreaun|3 days ago

Their values are about AI safety. Geopolitically they could care less. You might think it's a bad take, but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

protocolture|3 days ago

Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.

>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

Humanity includes the future victim of AI weapons.

HDThoreaun|2 days ago

Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isn't a virtue, though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide whether you agree with the direction he believes in.

> Humanity includes the future victim of AI weapons.

Which is why he wants to control them instead of someone he believes is more likely to massacre people. It's definitely an egotistical take, but if he's right that the weapons are inevitable, I think it's at least rational.

vasco|3 days ago

There's no AI safety. Either the AI does what the user asks, and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety; you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.

orbital-decay|2 days ago

Which humans in particular? There are multiple wars happening right now just because of the misalignment between different groups of humans.

orbital-decay|2 days ago

>Geopolitically they could care less.

I think that at the very least you might want to read Dario's nationalistic rants before saying anything like that.

>align them with humanity.

Quick sanity check: does their version of humanity include e.g. North Koreans?

ExoticPearTree|2 days ago

> AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

Meaning what, exactly? What would autonomous weapons kill that is so different from what soldiers kill? Or is it about killing others more efficiently so they “don’t feel a thing”?

marxisttemp|2 days ago

I think you mean “couldn’t care less”. “Could care less” implies they care.