happythomist's comments

happythomist | 5 years ago | on: I bought 200 Raspberry Pi Model B’s and I’m going to fix them

I am using mine as a DHCP, DNS, and VPN server.

Something I've noticed is that the SD card corrupts easily, though that may be simply because I'm using a phone charger as the power supply.

I discovered that although the Model B cannot natively boot from USB, you can still put an updated bootcode.bin [1] on the SD card, which will enable this functionality. Hopefully my flash drive will not corrupt as easily.

[1] https://github.com/raspberrypi/documentation/blob/master/har...
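Roughly, the steps look like this (a sketch only, not a tested script; `/dev/sdX1` is a placeholder for your SD card's partition). The Model B's boot ROM can only load bootcode.bin from the SD card, but an updated bootcode.bin can then hand off to a USB mass-storage device holding the actual OS:

```shell
# 1. Format the SD card with a single FAT partition
#    (placeholder device name -- double-check before running).
sudo mkfs.vfat /dev/sdX1

# 2. Put only the updated bootcode.bin on it.
sudo mount /dev/sdX1 /mnt
sudo cp bootcode.bin /mnt/
sudo umount /mnt

# 3. Flash the normal OS image to the USB drive,
#    insert both the SD card and the drive, and boot.
```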

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

> If I create a robot with an optical camera that detects if there is a large object near itself and uses an arm to open a door if so, the system works (or doesn't work) regardless of any meaning that is ascribed to its computations by an observer.

Whether the system "works" or "doesn't work" is dependent on what the machine was designed to do, which is not an objective physical fact about the machine. Perhaps the machine was not meant to open the door when an object is detected, but to close it instead, or to do something else entirely; only the designer would be able to tell you one way or the other.

The same is true for all computation, and that is Searle's point.

> A computer (the theoretical model) is, by definition, something that can perform coherent reasoning without any special internal state.

Computers don't actually engage in reasoning, though, for the same reason. A machine is just a physical process, and physical processes do not have determinate semantic content.

Ross and Feser then argue that because thoughts do have determinate semantic content, they are necessarily immaterial, and I think they are correct.

(This argument is unrelated to qualia; I don't think qualia are fundamental to reason itself.)

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

> The subjects experience sensations that are inseparable from their neurons firing.

What does "inseparable" mean? That the sensation occurs at the same time that the neurons fire? That may be true, but it doesn't make them equivalent.

> It's like saying a flashlight that is on is different than photons traveling away from a light bulb with a battery and a current.

They're not the same, for what it's worth. The term "flashlight" conveys a certain intent and structure that "photons traveling away from a light bulb with a battery and a current" does not.

> The sensation of red is caused by and is only possible by neurons firing. The neurons firing causes and only results in the sensation of red. The observer does not know the difference.

The fact that two different phenomena are closely coupled via a cause-and-effect relationship does not make them the same phenomenon.

If you push two magnets together, the fact that the same force causes them to attract or repel does not mean that the motion of the first is literally equivalent to the motion of the second, or that the force itself is literally equivalent to either motion. They are closely correlated, but ultimately distinct.

You just can't avoid the fact that qualitative phenomena do exist in their own right. They can't be explained away using a physical model that assumes from the get-go that they don't exist.

Erwin Schrödinger said:

> Scientific theories serve to facilitate the survey of our observations and experimental findings. Every scientist knows how difficult it is to remember a moderately extended group of facts, before at least some primitive theoretical picture about them has been shaped. It is therefore small wonder, and by no means to be blamed on the authors of original papers or of text-books, that after a reasonably coherent theory has been formed, they do not describe the bare facts they have found or wish to convey to the reader, but clothe them in the terminology of that theory or theories. This procedure, while very useful for our remembering the facts in a well-ordered pattern, tends to obliterate the distinction between the actual observations and the theory arisen from them. And since the former always are of some sensual quality, theories are easily thought to account for sensual qualities; which, of course, they never do.

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

> If we could understand human thought at a similar level, we MIGHT find out that "the feeling of red" is not fundamentally different than "the understanding that 1 + 1 = 2", and we could come up with quantifications of it in different ways, from the physical representation in the brain to a certain "bit pattern" in the abstract model of the human brain computer.

I guess the idea is that an abstract concept like "the understanding that 1 + 1 = 2" would be easier to "quantify" in the relevant sense than "the feeling of red", but I don't think that's true.

The very concept of a representation presumes an intellect in which that representation is mapped to the underlying concept. No particular physical state objectively signifies some abstract concept any more than the word "dog" objectively signifies that particular type of animal. But our mental states must be able to do so, because denying this would be denying our ability to engage in coherent reasoning and therefore self-defeating. So those mental states can't be "implemented" solely using physical states.

This argument was actually proposed by the late philosopher James Ross and developed in greater detail by Edward Feser. [1] A similar argument -- though he didn't take it as far -- was made by John Searle (of Chinese Room fame). [2]

But in any event, I would reject the notion that any representation of "the feeling of red" is equivalent to the sensation itself.

> Note that the argument for qualia is not one that proves the existence of qualia - it is essentially only a definition. We have no reason to believe that the thing which the term qualia describes actually exists in the world, beyond our own personal experience, which is circular in a way.

Well, I think it is self-evident that qualia exist for me, and that those same qualia demonstrate that there are physical correlates of qualia. I also think there is good reason to think that qualia exist in others because we share the same physical correlates.

Can I completely prove or disprove that others have qualia? No -- not you, not a rock, not an AGI. But I still have the physical correlates, which gives me some basis to draw conclusions.

[1] http://edwardfeser.blogspot.com/2017/01/revisiting-ross-on-i...

[2] https://philosophy.as.uky.edu/sites/default/files/Is%20the%2...

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

> Perhaps if we were able to understand the brain's inner workings, we could see that 'the experience of red' is precisely 'these 3 neurons firing every 0.0112 seconds at an intensity of X while receiving 0.001 micrograms of serotonin' (completely made up, obviously).

Even if we knew that a person saw red when such and such neurons fired, the neurons firing would still just be a material correlate. It would be in no way equivalent to or explain anything about the sensation itself.

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

> If we perfectly understood the human brain, the sensation of red would be defined as a sequence of neurons that need to be turned on and off at the right time.

A sequence of neurons firing is not equivalent to the sensation of red. It doesn't even tell you anything about the nature of the sensation of colour more broadly, or why the sensation of red looks the way it does and not like, say, the sensation of blue or yellow instead.

All you have is a material correlate -- a merely descriptive physical "law".

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

How do you go about quantifying the sensory experience of red, then? You can observe that red light has a wavelength of 620 to 750 nm, or that we've assigned it the RGB colour code of #FF0000, but neither fact actually captures or explains the sensory experience. Even trying is a fool's errand, because sensory experiences are inherently qualitative, not quantitative.

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

I think you're missing the central point, which is that computation is observer relative. Anything can be interpreted as a computational process.

Searle: "Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements which is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar then if it is a big enough wall it is implementing any program, including any program implemented in the brain."

That's why Searle asks "who is the user?" At some point things have to stop being observer relative and have an intrinsic meaning or essence of their own.

> Got one: the brain!

That's circular reasoning. The point is that qualia are not something which, in principle, can be the subject of computation. There is no way to represent the fullness of sensation itself, like the redness of red or the softness of silk, as information. So how can our brains be "computing" it?

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

I don't think you can say that life is built on computational processes unless you use a definition of "computation" that is so vague and all-encompassing that it becomes effectively meaningless.

The Wikipedia definition of "computation" is "any type of calculation that includes both arithmetical and non-arithmetical steps and which follows a well-defined model". But this only makes sense in the context of a designer or observer external to the computation who can identify what that model is and thereby make sense of the output. So you can't say that brain processes are computational, much less life itself, without committing some variation of the homunculus fallacy.

John Searle (famously known for his Chinese Room thought experiment) made this argument in a paper called "Is the Brain a Digital Computer?" [1] He points out that "if we are to suppose that the brain is a digital computer, we are still faced with the question 'And who is the user?'"

A related problem is qualia. There is no computational process that will produce the sensations of colour or sound or touch. At best you will have some representation that requires having actually experienced those sensations to understand it. So a computational process cannot be the basis of or an explanation for those sensations, and therefore consciousness generally.

[1] https://philosophy.as.uky.edu/sites/default/files/Is%20the%2...

happythomist | 5 years ago | on: Superintelligence cannot be contained: Lessons from Computability Theory

> It's not even clear exactly how our brains work so it's hard to imagine that they couldn't be implemented with a sufficiently powerful computer...

Not commenting on what OP said, but I don't think this is correct. Even in principle, how can any computational process produce conscious experiences, which are by nature subjective and unquantifiable?

happythomist | 5 years ago | on: Parler drops offline after Amazon pulls support

> Not only is this scenario 1000x worse for freedom than the very very worst that Amazon could do, it's also far more likely at the moment.

Politically motivated refusal of service by Amazon or other Silicon Valley firms is at least an order of magnitude more likely than any coup by the US military, much less a Trumpist one.

happythomist | 5 years ago | on: Rust 1.47

I believe it's similar to "constexpr" in C++ -- it means those functions can now be evaluated at compile time.
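A minimal illustration of what marking a function `const` allows (the function names here are my own examples, not from the release notes):

```rust
// A const fn can be evaluated at compile time, much like
// constexpr in C++. The same function remains callable at runtime.
const fn square(n: u32) -> u32 {
    n * n
}

// Because square is const, it can initialize a const item,
// which the compiler evaluates during compilation.
const NINE: u32 = square(3);

fn main() {
    assert_eq!(NINE, 9);
    // Ordinary runtime call to the same function.
    assert_eq!(square(4), 16);
}
```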