iconosynclast | 3 years ago
The section on emergence makes a very convincing point that such systems might, at least in theory, be doing absolutely anything internally, including "real" cognition, and then goes right ahead and dismisses this entirely on the basis of the system lacking conversational intent. Who cares if it has conversational intent? If it were shown to be doing "the real thing" (however you might want to define that) internally, that would still be a big deal whether or not the part you interact with gives you direct access to it.
Then it goes on to argue that these systems can't possibly believe anything because they can't update beliefs. Frankly, I'm not convinced that the general use of the word "believe" matches the narrow definition they seem to be using here, nor that even their narrow definition couldn't in principle still be satisfied internally, for the reasons laid out in the emergence section.
I agree people should probably be mindful of overly anthropomorphic language, but at the same time we really shouldn't be so sure that a thing is definitely not doing something we can't even define beyond "I know it when I see it", especially when it sure looks like it's doing it.
Beyond that, I'm not even sure there is good philosophical grounding for insisting that "what's really going on inside" matters, like, at all. The core of the Turing test isn't the silly and outdated test protocol but the notion that if something is indistinguishable by observation from a conscious system, there is simply no meaningful basis to claim it isn't one.
All that said, the current state of the art probably doesn't warrant a lot of anthropomorphizing, but that might well change in the future without any change to the kinds of systems used that would be relevant to the arguments made in the paper.