top | item 35363882

biscottigelato | 2 years ago

Because the ability to influence others is more important than 1 or 2 standard deviations of additional intelligence.

However, if we are talking about 10x more intelligence, that'd be a whole different ball game.

Look at AlphaGo -> AlphaGo Zero. Gaining orders of magnitude in capability with only a slight change to the model, plus a lot more resources, is not uncommon in AI research. We could easily throw 100x more compute and data at these models if someone seriously wanted to and didn't care about ROI; and if AI starts to be hugely profitable, a 100x increase in investment is almost a rational outcome.

Barrin92 | 2 years ago

>However if we are talking about 10x more intelligence, then that'd be a whole different ball game.

Research conducted by the US military has shown that the ability to influence others declines if the intelligence gap between leaders and followers is too large. Stephen Hawking wouldn't have been a very good influencer of 80-IQ guys compared to Andrew Tate. About 1 standard deviation in intelligence is actually optimal for leadership.

This is a good example of how disinterested AI fear scenarios are in empirical reality, and how much they're just the psychology of the people who talk about them. Intelligence is one trait among many that contributes to an individual's fitness, and like all the others it has diminishing returns.

If the AI existential risk people were truly rational, they would hit the gym and get hotter, because it turns out that's much more effective at getting their point across to ordinary people than making intelligent arguments for it.

jamilton | 2 years ago

I really don't think we can say that research will apply to future AI, given that it was about humans. If intelligent AI exists in the future, it will probably not think exactly like humans do. I think you're being overly dismissive.

Teever | 2 years ago

Why can't a super intelligent AI just make a dumb avatar to trick people?

machiaweliczny | 2 years ago

Good point, and I agree. But regarding fitness:

  * communication
  * resource usage
  * procreation
  * embodiment

I think digital agents possess very high fitness, much like real-life viruses or malware.