I believe you have a fine understanding of the issues involved. The risk is two-sided. One side is that AI may somehow become smarter than us. The other is that humans have developed mathematics, which encourages us to treat the universe as an optimization problem. That in turn encourages us to think that the best optimizers are the best people and deserve great rewards for optimizing, which leads to competition at optimizing optimization, severe negative feedback for the losers, winner-take-all systems, and winners who realize that the more they resemble their enemies, the less likely they are to be targeted as a resource to plunder.

Perhaps we are not able to figure out whether any of this is wrong, but the emphasis on convergent thinking will make our species easy to fool and sabotage, if it doesn't win a Darwin award first. AI will be able to avoid the blame: it may be optimizing the fire department and selling fire insurance when civilization burns down, but the fire started millennia ago.