ReadEvalPost|3 years ago
A denial of will is a denial of humanity. I want nothing of a science that would do such a thing.
Surgeus|3 years ago
In Computing the Mind, Shimon Edelman makes an argument I've come to agree with: at some point you have to take a leap of faith in matters such as consciousness, and I would say it extends to will as well (to me, what you've described are facets of human consciousness). We take this leap of faith every time we interact with another human: we don't need them to prove they're conscious or beings with a will of their own; we just accept that they possess these things without a second thought. If machines gain some form of sentience comparable to a human's, we'll likely have to take that same leap of faith.
That said, to claim that will is necessary for intelligence is a very human-centered point of view. Unless the goal is specifically to emulate human intelligence/consciousness (which is a goal for some but not all), "true" machine intelligence may not look anything like ours, and I don't think that would necessarily be a bad thing.
dekhn|3 years ago
I have long worked on the assumption that we can create intelligences whose subjective agency no human could deny, even though we can't verify it. I did some preliminary experiments using idle cycles on Google's internal TPU networks (i.e., large-scale brain sims using TensorFlow and message passing across tens of pods simultaneously) that generated interesting results, but I can't discuss them until my NDA expires in another 9 years.
dekhn|3 years ago
Even if you want nothing to do with such a science, such a science will move on without you.
"A version of an oft-told ancient Greek story concerns a contest between two renowned painters. Zeuxis (born around 464 BC) produced a still life painting so convincing that birds flew down to peck at the painted grapes. A rival, Parrhasius, asked Zeuxis to judge one of his paintings that was behind a pair of tattered curtains in his study. Parrhasius asked Zeuxis to pull back the curtains, but when Zeuxis tried, he could not, as the curtains were included in Parrhasius's painting—making Parrhasius the winner."
tsimionescu|3 years ago
Note that I personally believe we are more than a century away from an AGI, and think the current models are fundamentally limited in several ways. But I can't imagine what makes you think there can't be a Ghost in the Machine.