top | item 28509393

throwaway316943 | 4 years ago

Here’s one from reality: bacteria. If we create self-replicating machines, then even if we do not intentionally introduce variation, they will vary from generation to generation. They will presumably not be indestructible, so some form of selection will be possible. If their reproduction cycle is fast enough, they could potentially deviate from their intended purpose in a very short time.

drran | 4 years ago

Remote inspection and checksumming of the device firmware can solve this problem. What you should worry about is intentional hacking of the firmware, i.e. a self-replicating botnet.

nicoburns | 4 years ago

This assumes that the firmware contains no bugs. As soon as it does, there is always the possibility of unexpected behaviour and an unintended evolutionary trajectory.

kragen | 4 years ago

I used to be concerned about this issue, due to the obvious analogy with biological evolution, but later I realized that it's an easy risk to guard against.

It's a straightforward engineering problem to reduce the possibility of accidental program mutation to any arbitrarily low level, for example using SHA-2 (with which the chance of an undetected error is 1e-77). Of course, you can have "somatic mutations" where one part of a machine or another malfunctions, for example due to damage or errors during construction; but those don't get propagated to the next generation, so they don't produce the kind of progressive deviation you're describing.
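To make the idea concrete, here is a minimal sketch of that kind of guard, assuming a parent machine verifies a SHA-256 digest of the firmware image before installing it in its offspring. The firmware bytes and the `safe_to_install` helper are hypothetical, purely for illustration:

```python
import hashlib

# Hypothetical "checksum before replication" guard: the parent compares
# the SHA-256 digest of the candidate firmware image against a known-good
# digest and refuses to replicate on any mismatch. The firmware content
# here is a stand-in, not a real image.

GOLDEN_FIRMWARE = b"move dirt; make copy; repeat"
GOLDEN_DIGEST = hashlib.sha256(GOLDEN_FIRMWARE).hexdigest()

def safe_to_install(candidate: bytes) -> bool:
    """Return True only if the candidate image matches the golden digest."""
    return hashlib.sha256(candidate).hexdigest() == GOLDEN_DIGEST

# A single flipped bit in an otherwise perfect copy is caught.
corrupted = bytearray(GOLDEN_FIRMWARE)
corrupted[0] ^= 0x01
```

Any copy that fails the check is simply discarded, so copying errors never propagate to the next generation; only an undetected hash collision (probability on the order of 1e-77 for SHA-256) slips through.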

This doesn't happen in nature for a variety of reasons, one of which is that as such corrective mechanisms become progressively more perfect, the evolution of the species using them becomes progressively slower, and therefore the perfection of the error-correction mechanisms never quite arrives. Moreover, any species whose evolution becomes very slow is at a major disadvantage when the environment changes sufficiently; it will probably die out and leave no descendants after the next climate change, even something minor like an Ice Age, much less a meteor strike.

Another thing to keep in mind is that the number of generations is quite limited in practice. If, to take an unrealistically risky example, you have an alcohol-dependent nanobot replicator weighing 2 picograms, and you set it to reshaping a tonne of alcohol-soaked soil, it can't make more than 5e17 copies of itself, for which it only needs 59 generations. We aren't talking about thousands or millions of generations: even if time allows for them, space doesn't.
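The generation count above is easy to verify: a population doubling each generation reaches its resource ceiling in logarithmically few steps. Taking the same assumed figures (2-picogram bots, one tonne of substrate):

```python
import math

# Back-of-the-envelope check of the numbers above: a 2-picogram replicator
# consuming a tonne (1e6 g) of substrate can make at most
# 1e6 / 2e-12 = 5e17 copies, and a population that doubles each
# generation hits that ceiling in ceil(log2(5e17)) generations.

bot_mass_g = 2e-12                 # 2 picograms per bot
substrate_g = 1e6                  # one tonne, in grams
max_copies = substrate_g / bot_mass_g
generations = math.ceil(math.log2(max_copies))

print(f"{max_copies:.0e} copies in {generations} generations")
```

The exponential blow-up cuts both ways: enormous populations, but only a few dozen rounds of copying in which mutations could accumulate.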

1e-77 (.000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 01) is a number that can be difficult to get a handle on. If we, to take another unrealistically risky example, converted Ceres into 2-picogram nanobot replicators, there would only be 4.7e35 of them, so the chance that one of them would have a mutation in its program that SHA-2 couldn't detect would be 4e-42, assuming that there were just as many erroneous replications as correct ones. (If only one out of every 1000 nanobot firmware installations had a copying error, the chance that one of them went undetected would instead be 4e-45.) The universe is only 4.3e26 nanoseconds old.
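The Ceres arithmetic can be reproduced in a few lines, assuming the standard figure of roughly 9.4e20 kg for Ceres' mass and the same 2-picogram bots and 1e-77 undetected-error probability as above:

```python
# Checking the Ceres thought experiment: the expected number of undetected
# mutant bots is (number of bots) x (fraction of erroneous replications)
# x (probability SHA-2 misses the error).

ceres_mass_g = 9.4e20 * 1e3        # Ceres ~= 9.4e20 kg, converted to grams
bot_mass_g = 2e-12                 # 2 picograms per bot
n_bots = ceres_mass_g / bot_mass_g # ~4.7e35 replicators

p_undetected = 1e-77               # chance SHA-2 misses a copying error

# Worst case: every single replication contains a copying error.
expected_worst = n_bots * p_undetected
# More plausible: one copying error per 1000 firmware installations.
expected_realistic = n_bots * 1e-3 * p_undetected
```

Even in the worst case the expected number of undetected mutants is around 5e-42, i.e. effectively zero, which is the point of the comment: accidental mutation can be engineered away, so only malice or extreme carelessness remains.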

So, human malice or extreme carelessness would be necessary.

Kim_Bruning | 4 years ago

In fact, all living things form a category of self-replicating machines.

Depending on the niche, your [highly optimized] self-replicator might have some stiff competition.

danielheath | 4 years ago

The idea that a designed machine could be competitive with living things at self-replication (the sole target of a multi-millennia optimisation process) seems… pretty unlikely to me.