top | item 40513099


js98 | 1 year ago

This reads like a single argument and counter-argument about the sentience (or lack thereof) of today's AI. I feel the article lacks broader views on the nature of sentience and is quite narrow in its approach.

Although, in fairness, I probably wouldn't make a much better case for either side.


jerf | 1 year ago

I observe that current AIs are not embedded in time, and while we may not be able to agree exactly what "sentience" and "experience" are, "change over time" seems a basic requirement: https://news.ycombinator.com/item?id=31727428

(Contrarians, or meta-contrarians, may jump up to claim otherwise, but I would say that while the question of what a non-temporal consciousness could hypothetically be may be fun to debate, it is also so far out of our experience that it is clearly not what we generally mean by the term and is therefore a completely different conversation.)

LLMs do not strike me as amenable to fixing this. But that only applies to LLMs, not to any future architectures.

pulvinar | 1 year ago

I've been thinking about this and haven't found any fundamental difference.

Sure, LLMs don't have our fine temporal resolution, but GPT-4 (at least) knows the date, can get the current time using Python when asked, and can tell the order of text events within a session. Our resolution has a limit too: somewhere under 1/25 of a second while awake, with much larger gaps when we sleep.
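For what it's worth, the "current time using Python" capability described above amounts to a trivial tool call; a minimal sketch of what a code-interpreter session would run (the model itself is not assumed, just stdlib Python):

```python
from datetime import datetime, timezone

# What "what time is it right now?" boils down to as a tool call:
now = datetime.now(timezone.utc)
print(now.isoformat())  # e.g. an ISO 8601 timestamp with a +00:00 offset
```

The point being that the model's access to wall-clock time is mediated and on-demand, not continuous.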

So it's a matter of degree, or more to the point, of how we might gerrymander definitions to suit ourselves.

Filligree | 1 year ago

It also seems to prove too much. This argument works equally well to discount phantom limb pain in humans, which I hope we don't want to do.

llamaimperative | 1 year ago

> One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences... Sentience is a crucial step on the road to general intelligence.

This is also... not substantiated.

JohnFen | 1 year ago

The problem, as usual, is that we don't have a solid understanding (or even definition) of what sentience actually is. The only thing that we can say with certainty is that we experience it.

This gives wide latitude for moving the goalposts by asserting a specific definition that is favorable to whatever it is you've built.

It gives an equally wide latitude for dismissing what's been built as not being actually sentient by defining "sentience" in a way that makes the dismissal true.