top | item 28498312

n3k5 | 4 years ago

I'm going to criticise one particular aspect of this comment, but overall I find it very interesting because it generates so many fascinating questions, some of which follow:

> There is no equivalent for anything that can be run on a turing machine.

So you're claiming that organisms such as ourselves cannot be run on a Turing machine? Any particular reason/evidence?

> you could write out on paper the calculations that a chatbot is performing to select what it says

In the case of ML, only on the lowest level; there's no human-readable algorithm for putting together utterances. But let's say, for the sake of argument, that someone can actually understand and explain how exactly a particular response was constructed. Could the same not be done, in principle, with the response from a human? You could write out on paper the QFT equations that a brain follows to select what it says.

Well, OK, no one can actually do that, because we don't understand the subject matter sufficiently. But that's my point: That we understand the simpler system better doesn't mean that it's a more abstract concept, or that it can't be mistreated. Of course there is a tendency to have more compassion for more complex organisms — think about how you'd treat a monkey, a rat, a spider, a packet of yeast. We also tend to distinguish between organisms and inanimate things.

But discriminating based on what we understand exactly seems like a bad argument. Also, what if — as seems likely — artificial life will be made like AI currently is? That is: the necessary complexity is too much to plan in detail, so we make a primitive framework in which systems with desirable properties are then bred and evolved. Just like breeding plants and animals still is a thing — we can now sequence entire genomes and research what individual genes do, but it's too complicated to be a viable approach when you want a tastier nectarine.

Can someone ‘write out on paper’ how a Waymo tells a pedestrian apart from a mailbox? If no, does that mean the AI is more prone to being mistreated?

> physical compulsions that it needs to fulfill […] be mistreated by denying these needs

What's the difference between a physical compulsion and a utility function governing a physical system? If I deny a paperclip maximiser the raw materials it needs to make more paperclips, it's definitely not going to be happy …
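As a minimal sketch of what ‘a utility function governing a physical system’ could look like (the agent, state, and actions below are all hypothetical, written in Python for concreteness):

```python
# Hypothetical sketch: an agent governed by a scalar utility function.
# Formally, nothing distinguishes it from a "physical compulsion":
# the system just moves toward states that score higher on one number.

def utility(state):
    return state["paperclips"]  # the maximiser cares about one thing

def step(state, actions):
    """Greedily pick the action whose successor state scores highest."""
    return max((a(state) for a in actions), key=utility)

# Two available actions: consume raw material to make a clip, or idle.
def make_clip(s):
    if s["wire"] > 0:
        return {"wire": s["wire"] - 1, "paperclips": s["paperclips"] + 1}
    return s  # denied raw materials: utility can no longer increase

def idle(s):
    return dict(s)

state = {"wire": 2, "paperclips": 0}
for _ in range(4):
    state = step(state, [make_clip, idle])
print(state)  # wire exhausted after two steps; utility plateaus
```

Denying the agent wire is, structurally, the same move as denying an organism something its compulsions drive it toward: the dynamics stall below the maximum.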


version_five | 4 years ago

> So you're claiming that organisms such as ourselves cannot be run on a Turing machine? Any particular reason/evidence?

This is just my thinking: a Turing machine, or equivalently a digital circuit, doesn't want anything. It's an artificial construct, effectively a simulation, that can exist as an abstract mathematical idea.

If there is a real world where real stuff happens, real things have their own goals - generally increasing entropy or minimizing energy. It seems like we should be able to capture this mathematically, but in doing so we remove the actual desire... this maybe sounds a bit nutty, but what else is the difference between reality and simulation? An abstract calculation - anything equivalent to a Turing machine - has no skin in the game. Something happening in the universe does. This could include consciousness as a property of matter, which could, for example, effectively be the sum of the compulsions of the composing matter to minimize their energy (you can look this up; there are theories of consciousness that posit it's a property of matter).
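For concreteness, a Turing machine is nothing more than a transition table over states and symbols; a minimal Python sketch (the bit-flipping machine below is a made-up example):

```python
# Minimal Turing machine sketch: the whole "machine" is a table mapping
# (state, symbol) -> (new symbol, head move, new state). Nothing in it
# wants anything; it only shuffles symbols until it reaches the halt state.

def run(transitions, tape, state="A", halt="HALT", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).rstrip(blank)

# Made-up example: a machine that inverts every bit, then halts on blank.
flipper = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", "_"): ("_", "R", "HALT"),
}
print(run(flipper, "101"))  # -> "010"
```

Whether a table like this, scaled up enormously, can have skin in the game is exactly the question under discussion.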

If there is no difference between reality and a simulation, I'm wrong: we can all be represented on a universal computer, and thus as abstract math, and in some sense existence is meaningless - it just follows from math. My experience doesn't support this, but presumably I would have evolved a blind spot if my existence actually were meaningless.

TL;DR: I think there is some undiscovered difference between reality and simulation, which I would guess relates to desire/consciousness, and which means we can't simulate consciousness or real intelligence.

I'm a scientist; I know the above doesn't withstand any scrutiny. I'm just sharing my speculation because you asked.

shkkmo | 4 years ago

Any simulation or program we run exists in the same reality that your mind does. In what way does the type of hardware a program runs on (flesh vs. silicon) have an impact on the reality of the desires running on that hardware?

It basically sounds like you are saying that you think souls exist, are necessary for consciousness, and can't inhabit non-meat-based entities.

Personally, I believe the exact opposite. I believe math exists independently of the human mind (except possibly infinity), isn't something we invented, and can fully describe reality. As such, every (finite) mathematical system has an existence, and thus so does every simulation described by such a system (hence the existence of our reality).

I think that it is very hard to justify why one such simulation (our reality) exists and others do not. I think it is similarly hard to justify why a system running on meat could have a soul while a system running on silicon can't. A system running on silicon is composed of the same base constituents as a human brain (electrons, protons, neutrons, etc.), and thus any propensity for consciousness that exists in the human mind but can't exist in silicon implies that consciousness arises from something besides these building blocks.

I don't actually think that we can simulate a human mind on anything remotely like our current computers, for architectural reasons (namely latency and parallelism). I think that any minds that can run on silicon will be different from ours in fundamental ways, but won't be any less real than ours.

n3k5 | 4 years ago

> If there is no difference between reality and a simulation

That's the crux, I guess. There's this ‘Matrix’ idea that the universe might be a simulation, but here we have a much weaker, more plausible property; I would describe it as: could real-world processes be simulated exactly? That's what I meant by ‘can be run on a Turing machine’: is the organism equivalent to a simulated version?

Two quick addenda:

(i) This doesn't require the hypothetical Turing machine to exist; as you said, it's an abstract idea. In the strict sense, where it can have potentially unlimited memory, a Turing machine can't exist in the real world. Even so, we can ask whether an entity that has wants/needs/desires can exist in a Turing machine in theory. If yes, it's easier to show that it can exist in practice, in reality.

(ii) Maybe physics is non-deterministic and the equivalent mathematical model requires a non-deterministic Turing machine or something else still. But whatever physics does can be done by a physical computer; I believe there's nothing a natural person can think that can't in principle also be thought by an artificial machine.

There has got to be a difference between the pain a real ER patient feels and the simulation a training dummy for medical students is running. It's just so hard to come up with a clear definition of that difference. Let's try something simpler.

“When squeezed, Elmo shakes, vibrates, and recites his trademark giggle, ‘Uh-ha-ha-ha-hee-hee!’” — https://en.wikipedia.org/wiki/Tickle_Me_Elmo

Elmo isn't really giggling. But I think it's possible for a digital circuit to ‘get’ a joke. Or, indeed, to be ticklish.

As shkkmo mentioned, one could just postulate the existence of a ‘soul’ and be done with it. I'd prefer something more rigorous; something falsifiable.