biscottigelato | 2 years ago
However, if we are talking about 10x more intelligence, that'd be a whole different ball game.
Check AlphaGo -> AlphaGo Zero. Gaining orders of magnitude in capability with only a slight change to the model, along with a lot more resources (we could easily throw 100x more compute and data at these models if someone seriously wanted to and didn't care about ROI; or, if AI starts to be hugely profitable, 100x more investment is almost the rational outcome), is not uncommon in AI research.
Barrin92 | 2 years ago
Research conducted by the US military has shown that the ability to influence others declines if the intelligence gap between leaders and subjects is too large. Stephen Hawking wouldn't have been a very good influencer of 80-IQ guys compared to Andrew Tate. A gap of about 1 standard deviation in intelligence (roughly 15 IQ points) is actually just about optimal for leadership.
This is a good example of how disinterested AI fear scenarios are in empirical reality, and how much they reflect the psychology of the people who talk about them. Intelligence is one trait among many that contributes to an individual's fitness and, like all the others, has diminishing returns.
If the AI existential risk people were truly rational they would hit the gym and be hotter because it turns out that's much more effective at getting their point across to ordinary people than trying to make intelligent arguments for it.