djokkataja | 2 years ago

> Nature gives us a world in which "he who does not work, does not eat" is a rule of law.

Which is why no child has ever survived until adulthood.

And TBH I find this an appropriate metaphor, because the promise of AGI is that every adult human will be infantile by comparison. But there are also potential technologies that could allow humans to ... "upgrade" themselves and become a mature ... whatever. Sadly, brain-computer interfaces seem like a technology that works best when enabled by AI (to help with interpreting brain signals), so it seems quite unlikely that any biological humans are going to keep up with AI over the medium term.

Nevermark | 2 years ago

> but there are also potential technologies that could allow humans to ... "upgrade" themselves

I hope so.

But transforming ourselves into GAI to keep up (while maintaining continuity of memories, etc.) is going to be a much more expensive proposition than simply making more GAI hardware from scratch.

So economically, where are humans going to earn the additional value needed to account for that extra cost, when the premise of upgrading ourselves is that our native biology isn't keeping up and so isn't actually needed?

djokkataja | 2 years ago

Given that there's no well-defined "unit of intelligence", I see no reason to believe that all AGI (or GAI, to use your acronym) entities will be identical. I also don't think it's plausible to assume that any GAI entity will be literally omniscient; uncertainty remains a reasonable, deep, persistent factor for all entities that don't severely violate physics as we presently understand it.

The additional value that humans present would thus be a diversity factor. Much as it might be economically beneficial to cut down the entire Amazon, there are also significant economic benefits in keeping it intact and finding out what kinds of crazy biological stuff comes out of there. And over the long run, we get way more economic benefit from finding crazy things in there than from chopping it all down. There are all kinds of ways to explore uncertain dimensions, but looking at whatever the universe has already managed remains a consistent source of value.

> But transforming ourselves into GAI to keep up (while maintaining continuity of memories, etc.) is going to be a much more expensive proposition than simply making more GAI hardware from scratch.

It's basically a fixed cost because of how slowly the human population grows (if it even continues to grow).

Synthetic GAI entities will be able to use vastly more resources than humans presently do, because they'll be able to create more of themselves as quickly as resources become available, and they won't be stuck operating at a fixed clock speed. It's also probably going to be much easier for them to make use of off-world resources. So the total population of GAI entities could plausibly explode compared with the total human population.

It comes down to timing. If there's technology available for humans to become less ... biological, and it seems very desirable to lots of people, but it's really expensive and there aren't that many GAI entities yet, that might be less pleasant. But if that kind of tech depends sufficiently on AI developments that there's a significant lag between a GAI population explosion and human-upgrading tech showing up, the cost of making it available to whatever humans want it might be a drop in the bucket from a total economic perspective.

To be clear, I don't see biological humans "keeping up" over the long term. If we take AGI and BCIs and various Ship of Theseus questions seriously, there could be some significant blurring of lines between "human" and "AI." And if we combine that with GAIs originating from humans, the concept of "keeping up" seems to become less meaningful. Who's keeping up with whom?

Lastly, I'm suspicious that GAI entities won't be purely economically motivated, because I don't see any reason that they'll be "purely" anything at all. There is no magical "essence d'intelligence"; instead there are staggering layers of complexity. And every plausible AI safety approach I've seen so far involves training AIs to be inclined towards beneficial acts and away from harmful acts, because there's no way to program conceptually pure "motivations" into them.