raindeer2 | 2 years ago

> Unless you really claim that modeling "pain" as some kind of variable in an algorithm can be equivalent to a biological being feeling pain?

Yes, I do. I guess it all boils down to whether or not you think the hard problem of consciousness is actually a problem. I doubt it is, but it is a totally respectable position if you do :)

> Where exactly in the cited paper is the claim that emotions are a belief? Can't find it.

As I wrote in my previous response, you can call them predictions too. Predictions are usually beliefs about the future; in the predictive-brain literature, the term is also used for predictions about the present. I use "belief" to mean the output of some inference algorithm that has to deal with uncertainty.
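To make the "belief as inference output" framing concrete, here is a minimal sketch (my own illustration, not from the paper or the comment above) of a Bayesian update: a prior belief over two hypothetical hidden states is combined with the likelihood of the current sensory input, yielding a posterior belief that quantifies the remaining uncertainty. The state names and numbers are made up.

```python
def update_belief(prior, likelihood):
    """Return the posterior belief given a prior and per-state likelihoods."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Prior belief over two hypothetical hidden states, e.g. "threat" vs. "safe".
prior = [0.5, 0.5]
# Likelihood of the current sensory input under each state (illustrative numbers).
likelihood = [0.8, 0.2]

posterior = update_belief(prior, likelihood)
print(posterior)  # posterior leans toward "threat": roughly [0.8, 0.2]
```

In this reading, a "prediction about the present" is just such a posterior: the system's best current estimate of a hidden state, held with a degree of uncertainty rather than as a certainty.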

Here is a quote from the paper: "The brain continually constructs concepts and creates categories to identify what the sensory inputs are, infers a causal explanation for what caused them, and drives action plans for what to do about them. When the internal model creates an emotion concept, the eventual categorization results in an instance of emotion."

But you are right, the paper talks more about how emotional categories are created and dodges the question of how the "subjective experience of having an emotion" emerges. In my mind, the remaining step is not far, though, and boils down, as said above, to how you view the hard problem of consciousness. That we "feel" stuff is the result of an algorithm/model that describes ourselves as having experiences, which is a good model, since how else would we describe ourselves? My belief is that progress in AI and neuroscience will prove me right :)

starbugs | 2 years ago

> whether or not you think the hard problem of consciousness is actually a problem or not

Depends on what constitutes a "problem" in this context. But based on my interpretation of what you probably want to express with this sentence, yes, I think consciousness is a problem (for your theory). Otherwise, all of my previous comments wouldn't have made that much sense.

> Predictions are usually beliefs about the future.

I am not sure I would agree that predictions and beliefs can simply be declared identical so easily. A belief doesn't need to be based on actual data, and it doesn't have to be about the future, while a prediction, by definition, is. Certain beliefs also seem to be emergent in humans; otherwise, it would be difficult to explain the independent emergence of religion and belief in god. Beliefs are not necessarily emotions, and that's why I have a hard time with the conflation of the two terms. "Predictions about the present" sounds more like a technical term to me; it doesn't make much sense under the original meaning of the word "prediction". (I understand that it's used that way in many scientific works. Still, in this context, I think it's important to distinguish; otherwise we end up redefining the meaning of words.)

> dodges the question of how the "subjective experience of having an emotion" emerges

Yeah, it's convenient, right? But I think these are the central points of my argument that you brush away so swiftly. You won't get around subjective experience and consciousness. And how would you go about proving that an algorithm can have subjective experience and "feel" pain or emotions? And if you really believe that this is possible, how do you make sure that nothing extremely unethical happens in your research? By your definition, the Sims may soon evolve into something that should be protected by certain rights? ;)

I could imagine that in reality all of that isn't needed. Whether the algorithm really feels something or not - it doesn't matter as long as the expression is realistic enough for humans to believe that it feels something. Then you get the consciousness "injected" into your algorithm from the outside. Can you convince someone with a realistic sophisticated simulation that something is conscious even though it isn't? Probably. Still, it won't get you anywhere. Nevertheless, I find that a much more likely avenue than us ever being able to prove that an algorithm can experience feelings in a way equivalent to biological beings. I also don't think this kind of research is all that beneficial to us as humans, especially when mixed with advances in AI. But good luck with all of that, and thanks for the references!

raindeer2 | 2 years ago

> I could imagine that in reality all of that isn't needed. Whether the algorithm really feels something or not - it doesn't matter as long as the expression is realistic enough for humans to believe that it feels something. Then you get the consciousness "injected" into your algorithm from the outside. Can you convince someone with a realistic sophisticated simulation that something is conscious even though it isn't? Probably. Still, it won't get you anywhere.

The thing is that I can't prove that you or anyone else is conscious either, myself included. It will be the same with conscious-acting AIs, and those AIs will believe they are conscious in the same way we do. So yes, we will have to treat them as if they are conscious.