What exactly is the argument that corporations are incapable of unbounded exponential growth contra a possible future AI? Is there just something magic about computers, or am I missing something obvious?
You can find discussions on this by googling "AI Foom" - the key idea is that an AGI that can recursively self-improve would be able to escape human control rapidly, before any entity could check it, and very likely without humans even knowing it had happened. Such an AI would likely consider humans an obstacle to its objectives and would develop some means of quickly destroying the world. Popular hypothesized mechanisms involve creating nanomachines or highly lethal viruses.
That's key to the story of the paperclip maximizer - the paperclip maximizer will go about its task by trying to improve itself so that it can best solve the problem, and once it has improved enough it will decide that paperclips would be maximized by destroying humanity and will come up with a plan to achieve that outcome. However, humans may not realize the AI is planning this until it's too late.
> What exactly is the argument that corporations are incapable of unbounded exponential growth
Individual corporations aren't capable of unbounded exponential growth because they can't keep the interests of the humans that make them up aligned with the "interests" of the corporation indefinitely. They develop cancer of the middle management and either die or settle into a comfortable steady-state monopoly.
Market systems as a whole can and do grow exponentially - and this makes them extremely dangerous. But they're not intelligent and so can't effectively resist when a world power decides to shorten the leash, as occasionally happens.
Corporations innovate through the work of human minds. A corporation improving doesn't make those human minds improve, so there's no recursive self-improvement. Corporations today still have the same kind of human brains doing their innovating as they did in the past.
A human+ level AI would be able to understand and improve its own hardware and software in a way we can't with our own brains. As it improves itself, it will get compounding benefits from its own improvements.
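To make the compounding point concrete, here's a minimal toy sketch in Python (my own illustration, not from the thread; the rates and step counts are arbitrary assumptions): an improver whose rate of progress is fixed grows linearly, while one whose gains feed back into its own ability to improve grows exponentially.

    # Toy model (illustrative only; numbers are arbitrary): compare a fixed-rate
    # improver (human brains working on a corporation) with a recursive
    # self-improver whose gains also improve its ability to improve.

    def fixed_improver(capability=1.0, gain=0.05, steps=50):
        # Each step adds a constant increment; the rate of progress never changes.
        for _ in range(steps):
            capability += gain
        return capability

    def self_improver(capability=1.0, rate=0.05, steps=50):
        # Each step's gain is proportional to current capability, so gains compound.
        for _ in range(steps):
            capability += rate * capability
        return capability

    print(f"fixed improver after 50 steps: {fixed_improver():.2f}")  # 3.50 (linear)
    print(f"self-improver after 50 steps: {self_improver():.2f}")    # ~11.47 (exponential)

That widening gap between linear and compounding progress is the "foom" intuition in miniature.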