top | item 46250454

fabianhjr | 2 months ago

> “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ~ Edsger W. Dijkstra

The point of the Turing Test is that if there is no extrinsic difference between a human and a machine, the intrinsic difference is moot for practical purposes. It is not an argument about whether a machine (using linear algebra, machine learning, large language models, or any other method) can think, or about what constitutes thinking or consciousness.

The Chinese Room thought experiment is a compliment on the intrinsic side of the comparison: https://en.wikipedia.org/wiki/Chinese_room

tim333|2 months ago

I kind of agree, but I think the point is that what people mean by words is vague, so he said:

>Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

which is: can you tell the AI's answers from the human ones in a test? It then becomes an experimental result rather than a matter of what you mean by 'think', or maybe by 'extrinsic difference'.

rcxdude|2 months ago

The Chinese Room is a pretty useless thought exercise, I think. It's an example which, if you believe machines can't think, seems like an utterly obvious result, and if you believe machines can think, is just obviously wrong.

tim333|2 months ago

People used to take it surprisingly seriously. Now it's hard to argue that machines can't understand, say, Chinese, when you can give a Chinese document to a machine, ask it questions about it, and get pretty good answers.

cwmoore|2 months ago

s/compliment/complement/

Good luck.

cwmoore|2 months ago

Look it up. Words have meanings.