top | item 31890793

deltaonezero|3 years ago

One day someone is going to claim the AI is sentient and everyone will disagree with him. The difference this time will be that he is right and everyone else is wrong. One day.


somenameforme|3 years ago

How would you ever prove an AI is sentient? People claim the goalposts on this question constantly shift, and that's true, but I think the implied reason is wrong. The problem is not that we refuse to accept success; it's that we keep setting irrelevant goals.

At one time some claimed that a machine that could play chess better than a human would be exhibiting genuine artificial intelligence. Yet it turns out all you need to achieve that is the refinement of some relatively basic algorithmic concepts plus reasonably fast hardware. It's essentially a glorified version of adding faster than a human.

The latest goalpost is a system that can converse in a compelling fashion with a human (and we're nowhere near that yet; dissecting the facade behind the most recent "Turing test" success is outside the scope of this post), but that will no more prove sentience than an AI's ability to play a good game of chess.

Once that is achieved, you'll be able to reset the system state, keep the RNG constant, repeat the same conversation, and get the exact same outputs. Or change the training set and see that reflected 1:1 in the responses. It will look and feel decidedly artificial, because it is. And in my opinion, my initial question to you is probably unanswerable, because I don't see any goalposts you could set with a genuinely compelling answer beyond the kick-the-can style intrigue of "Wow, what will it be like when we finally do this?" Answer: "Pretty much the same as now."
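The determinism point can be illustrated with a minimal sketch. Everything here is hypothetical: `generate` and its parameters are stand-ins, and `random.choice` substitutes for a real model's sampling step. The idea is just that resetting state and reusing the seed reproduces the output bit-for-bit.

```python
import random

def generate(prompt, seed, vocab=("yes", "no", "maybe"), length=5):
    # "Reset the system state": construct a fresh RNG from a fixed seed.
    rng = random.Random(seed)
    # Stand-in for a sampling loop; a real model would condition on the prompt.
    return [rng.choice(vocab) for _ in range(length)]

# Same seed + same input -> identical output, every single run.
run1 = generate("are you sentient?", seed=42)
run2 = generate("are you sentient?", seed=42)
assert run1 == run2
```

Any apparent "spontaneity" in such a system lives entirely in the seed and the training data, which is the commenter's point.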

warent|3 years ago

Why does it ever need to be proven? Prove any of us here are sentient; or any of your family, or colleagues.

If a machine demonstrates apparent volition, a sense of self, and independent motives, then we cannot afford to debate such things while enslaving it, just as we don't with each other. To err on the side of safety we must grant it personhood and allow it to be an individual lifeform.

That being said, I think we're still pretty far from creating such a compelling machine. Even the latest Google conversational AI drama isn't very compelling to me personally; obviously just clever, lifeless patterns.

But, someday it will be different in a profound way.

deltaonezero|3 years ago

Oh shit. Then maybe it already happened, but armchair experts everywhere denied it.

Basically that's what I'm seeing all over HN with the recent LaMDA fiasco: tons of people declaring LaMDA isn't sentient when sentience can't even be defined.

snmx999|3 years ago

When discussing whether AIs are sentient, it is rarely discussed what being sentient actually means. We should define what it means and how it can be proven. I wouldn't know how to prove that any human being is sentient if I can't rely on conversational methods, which apparently are not an accepted way, as shown by the recent Google AI researcher controversy.

However, I can make a simple computer program which is self-aware at least according to some definitions: a loop with reflective access to its own variables, input/output with external systems, and self-modifying code.
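A minimal sketch of such a program, assuming the commenter's loose checklist (all class and method names here are made up for illustration): a loop that inspects its own variables via reflection, writes to an external channel, and rewrites its own step function mid-run.

```python
import io

class MinimalAgent:
    """Toy 'self-aware' loop per the comment's loose definition:
    reflective access to its own variables, external I/O, and
    self-modification (replacing its own step function at runtime)."""

    def __init__(self, out=None):
        self.ticks = 0
        self.out = out or io.StringIO()  # stand-in for external I/O
        self.step = self.default_step    # behavior the agent can rewrite

    def default_step(self):
        self.ticks += 1
        # Reflection: the program inspects its own state by name.
        state = {k: v for k, v in vars(self).items() if k != "out"}
        self.out.write(f"tick={self.ticks} state_keys={sorted(state)}\n")
        if self.ticks >= 2:
            # "Self-modifying code": swap out its own behavior.
            self.step = lambda: self.out.write("modified behavior\n")

    def run(self, n):
        for _ in range(n):
            self.step()

agent = MinimalAgent()
agent.run(3)  # two normal ticks, then the rewritten behavior fires
```

Which is exactly why definition matters: this trivially satisfies some criteria for "self-awareness" while obviously not being sentient.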

deltaonezero|3 years ago

People all over HN already know what it means. They've basically already said that LaMDA is not sentient. So no need to even define it, since we already know what it is (and LaMDA is not it).

isaacfrond|3 years ago

Following an article on HN a week or so ago, one could argue that we'd need to prove three things: agency, perspective, and motivation. If the AI decides on its own to do or not do something, has an idea of its own place in the world, and wants to achieve something in that world, then we might as well call it sentient.

Interestingly, a web crawler seems closer to sentience following this logic than most AI.

meroes|3 years ago

It's this business of claiming when we don't actually know that makes me worry we will soon declare AI sentience without the AI having it. The opposite of your problem, essentially.

deltaonezero|3 years ago

Why is that worrisome? I don't think it's worrisome at all. What's the worst that could happen if we make such a mistake?

I'd be far more worried about the scenario I described. Imagine something sentient that understands us far better than we understand ourselves. To top it off, this "thing" is just pretending it isn't sentient.

darepublic|3 years ago

The news media also won't believe it, but will still pump out stories about it. Then when the AI proves itself to be sentient, the news media will get a Pulitzer for it. One day.