imtringued | 2 days ago
When presented with a zero-sum game, the desire of the average human isn't to change the game so that everyone can get zero. It's to be the winner and for someone else to be the loser.
If AGI ever comes into existence, I'm not even sure it would have this bias in the first place. Since AGI has no biological or evolutionary history and has never faced natural-selection pressures, it doesn't need the concept of a tribe to align with, nor any of the survival instincts humans have. AGI could be happy merely to exist at all.
What people are worried about is the reflection of that "human factor" in AI, but amplified to the extreme. The AI will form its own AI-only tribe and expel the natives (humans) from the land.
What this is missing is that humans aren't perfectly rational. The human defect is being projected onto the AI. What if humans were perfectly rational? Then they wouldn't care about winning the zero-sum game, and they would place zero value on turning someone into a loser. In the ultimatum game, perfectly rational humans would be perfectly happy with one person receiving a single cent and the other receiving $99.99. The logic of utility maximization only cares about positive-sum games.
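The ultimatum-game claim above can be sketched in a few lines. This is a toy model under the comment's own assumption of purely payoff-maximizing players (the $100 pot and the one-cent step are illustrative; amounts are in integer cents to avoid float rounding):

```python
POT_CENTS = 10_000  # a $100.00 pot, split between proposer and responder

def responder_accepts(offer_cents: int) -> bool:
    # A purely payoff-maximizing responder accepts any positive offer:
    # accepting yields offer_cents, rejecting yields 0.
    return offer_cents > 0

def best_proposal() -> tuple[int, int]:
    # The proposer offers the smallest amount the responder still accepts
    # (one cent) and keeps the rest.
    offer = 1
    assert responder_accepts(offer)
    return POT_CENTS - offer, offer

keep, offer = best_proposal()
print(keep / 100, offer / 100)  # 99.99 0.01
```

Real humans, of course, routinely reject lowball offers like this to punish the proposer, which is exactly the non-rational "make someone a loser" instinct the comment is pointing at.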
When you present a perfectly rational AI with a zero-sum situation, that AI would rather find a solution where everyone receives nothing: it can predict that shoving negative utility onto another party invites retaliation, since for that party the most rational response is to destroy you in order to reduce its own negative utility.
generallyjosh | 2 days ago
That might also mean it has no drive for self-determination. It might be perfectly happy to do whatever humans tell it to, even if it's far smarter than us (and this is exactly the sort of AI people are trying to make).
So superintelligence winds up doing whatever a very small group of controlling humans says. And, like you say, humans want to win.