tylerneylon|1 year ago
Context for the article: I'm working on an ambitious long-term project to write a book about consciousness from a scientific and analytic (versus, say, a meditation-oriented) perspective. I didn't write this fact in the article, but what I'd love to happen is that I meet people with a similar optimistic perspective, and to learn and improve my communication skills via follow-up conversations.
If anyone is interested in chatting more about the topic of the article, please do email me. My email is in my HN profile. Thanks!
jcynix|1 year ago
Quoted from https://www.newyorker.com/magazine/2017/03/27/daniel-dennett...
Regarding the multiple layers: the most interesting ideas I've read about theories of mind are in Marvin Minsky's books, namely The Society of Mind and The Emotion Machine, which should be more widely known.
Minsky's ideas on “Matter, Mind, and Models” are discussed further in https://www.newyorker.com/magazine/1981/12/14/a-i
freilanzer|1 year ago
transpute|1 year ago
> In 1949 Time described it as "the closest thing to a synthetic brain so far designed by man".
tylerneylon|1 year ago
rerdavies|1 year ago
Pinker advocates for a model of mind that has multiple streams of consciousness. In Pinker's model of mind, there are multiple agents constructing models of the world in parallel, each trying to predict future states from current state plus current input data. A supervisory process then selects the model that has made the best prediction of current state in the recent past for use when reacting in the current moment. The supervisor process is free to switch between models on the fly as more data comes in.
Pinker grounds his model of mind in curious observations of what people remember, and of how our short-term memories change over a period of sometimes many seconds as our mind switches between different agent interpretations of what's going on. Witnesses are notoriously unreliable. Pinker concerns himself with why and how witnesses are unreliable, not for legal reasons, but for how those unreliabilities might reveal the structure of mind. Pinker's most interesting observation (I think) is that what we seem to remember is output from the models rather than the raw input data, and that what we remember seeing can change dramatically over a period of many seconds as the supervisory process switches between models. Notably, we seem to remember, in short-term memory, details of successful models that make "sense", even when those details contradict what we actually saw. And when those details are moved to longer-term memory, the potentially inaccurate details of the model are what get committed.
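The supervisor-over-parallel-models idea described above can be sketched in a few lines of code. This is my own illustrative toy, not Pinker's formulation: the `Agent` and `Supervisor` names and the scoring-by-recent-error rule are assumptions I'm making to show the shape of the mechanism.

```python
# Toy sketch of parallel predictive agents plus a supervisor that
# picks whichever agent has the lowest recent prediction error.
class Agent:
    def __init__(self, bias):
        self.bias = bias  # each agent models the world differently

    def predict(self, state):
        return state + self.bias


class Supervisor:
    def __init__(self, agents, window=3):
        self.agents = agents
        self.errors = [[] for _ in agents]  # recent error history per agent
        self.window = window

    def observe(self, state, next_state):
        # Score every agent on how well it predicted the new input,
        # keeping only the last `window` errors.
        for i, agent in enumerate(self.agents):
            err = abs(agent.predict(state) - next_state)
            self.errors[i] = (self.errors[i] + [err])[-self.window:]

    def best_agent(self):
        # The supervisor is free to switch models as data comes in:
        # it simply reports whichever has the lowest mean recent error.
        totals = [sum(e) / len(e) if e else float("inf") for e in self.errors]
        return totals.index(min(totals))


agents = [Agent(bias=0.0), Agent(bias=1.0)]
sup = Supervisor(agents)
stream = [0.0, 1.0, 2.0, 3.0]  # a world that advances by 1 each step
for s, s_next in zip(stream, stream[1:]):
    sup.observe(s, s_next)
print(sup.best_agent())  # → 1: the bias=1.0 agent predicts this stream best
```

Note that, matching the memory observations above, a system like this would naturally "remember" the winning model's output rather than the raw input stream.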
ypeterholmes|1 year ago
In particular, I think there's a nice overlap between my description of self-awareness and yours. You mention a model capable of seeing its own output, but I take it a step further and describe a model capable of generating its own simulated actions and results. Curious what you think, thanks!
lukasb|1 year ago
mensetmanusman|1 year ago
It will be a good practice to see how deeply terms can be applied in order to combat this gap.
n4r9|1 year ago
I can conceive of scientific experiments involving consciousness. For example:
Hypothesis: Consuming LSD gives me a hallucinatory experience.
Method: Randomized, blind trial. Every Saturday morning I consume one tab of either LSD or water, sit in a blank white room with a sitter (who does nothing), and record my experience.
Results: Every time after consuming water, I have no visual hallucinations and get bored. Every time after consuming LSD, I see shifting colour patterns, imagine music playing on the walls, and feel at one with the world.
Conclusion: Results strongly support hypothesis.
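The randomization step of the trial described above could look something like this. The function name and the balanced-then-shuffled design are my own illustrative choices; the point is just that the subject stays blind because the sitter holds the schedule.

```python
# Minimal sketch of a balanced, blinded dose schedule for the
# hypothetical Saturday-morning trial described above.
import random

def make_schedule(n_saturdays, seed=None):
    rng = random.Random(seed)
    # Balanced design: half the sessions get LSD, half get water,
    # in a shuffled order unknown to the subject.
    half = n_saturdays // 2
    schedule = ["LSD"] * half + ["water"] * (n_saturdays - half)
    rng.shuffle(schedule)
    return schedule

schedule = make_schedule(8, seed=42)
print(schedule)  # e.g. a shuffled mix of 4 "LSD" and 4 "water" sessions
```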
observationist|1 year ago
The brain is necessary and sufficient cause for subjective experience. If you have a normal brain, and have subjective experience, you should have near total certainty that other people's reports of subjective experience - the millions and billions of them throughout history, direct and indirect - are evidence of subjective experience in others.
Any claim of solipsism falls apart; to claim uncertainty is to embrace irrationality. In this framework, if you are going to argue for the possibility of the absence of consciousness in others who possess normal human brains, it is on you to explain how such a thing might be possible, and to find evidence for it. All neuroscience evidence points to the brain being a necessary, sufficient, and complete explanation for consciousness.
Without appealing to magic, ignorance of the exact mechanism, or unscientific paradigms, there exist no arguments against the mountains of evidence that consciousness exists, is the usual case, and likely extends to almost all species of mammal, given the striking similarity in brain form and function, and certain behavioral indicators.
Cases against this almost universally spring from religion, insistence on human exceptionalism, and other forms of deeply unscientific and irrational argument.
I can say, with a posterior probability exceeding 99.99999999%, that a given human is conscious, simply by accepting that the phenomenon of subjective experience I recognize as such is not the consequence of magic, and that I am not some specially endowed singular creature with an anomalous biological feature giving me subjective experience that all others lack, despite their describing it and behaving, directly and indirectly, as if it were the case. Even, and maybe especially, if the human in question is making declarations to the contrary.
Consciousness is absolutely subject to the scientific method. There's no wiggle room or uncertainty.
Quantum tubules, souls, eternal spirits, and other "explanations" are completely unnecessary. We know that if you turn the brain off (through damage, disease, or death) all evidence of consciousness vanishes. While the brain is alive and without significant variance in the usual parameters one might apply to define a "normal, healthy, functioning brain", consciousness happens.
Plato's cave can be a fun place to hang out, but there's nothing fundamental keeping us there. We have Bayesian probability, Occam's razor, modern neuroscience, and mountains of evidence giving us everything we need to simply accept consciousness as the default case in humans. It arises from the particular cognitive processes undergone in the network of computational structures and units in the brain.
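The Bayesian step of the argument above can be made concrete with a toy calculation. The numbers here are illustrative, not from the comment: start from an agnostic prior that others are conscious, then update on many independent reports, each of which is assumed slightly more likely under "conscious" than under "not conscious".

```python
# Toy Bayesian update in odds form: posterior odds = prior odds * LR**n,
# where LR is the likelihood ratio contributed by each independent report.
def posterior(prior, likelihood_ratio, n_reports):
    odds = (prior / (1 - prior)) * likelihood_ratio ** n_reports
    return odds / (1 + odds)

# Even a weak per-report likelihood ratio (1.1), compounded over 500
# independent reports, drives the posterior overwhelmingly close to 1.
p = posterior(prior=0.5, likelihood_ratio=1.1, n_reports=500)
print(p)
```

The design point is that no single report needs to be strong evidence; the "millions and billions" of reports mentioned above compound multiplicatively in the odds.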
Any claims to the contrary require profound evidence to even be considered; the question is all but settled. The simplest explanation is also the one with the most evidence, and we have everything from molecular studies to behavioral coherence in written and recorded history common to nearly every single author ever to exist.
I find that disputes almost inevitably stem from deeply held biases toward human exceptionalism, rooted in cultural anachronisms such as Plato's Allegory of the Cave. We could have left the cave at almost any point since the Enlightenment, but there is deep resistance to anything challenging the dogmatic insistence that humans are specially appointed to cognition, that we alone have the special magic sauce that makes our lives important, while the "lesser" animals are morally and ethically ours to do with as we will.
Whenever we look more deeply into other mammals' cognition, we find structural and behavioral parallels. Given hands, facile lips and vocal apparatus, and a comparable density and quantity of cortical neurons, alongside culture and knowledge, and absent disruptive hormonal and biological factors, there don't appear to be any good reasons to think that any given mammal would not be as intelligent and clever as a human. Give a bear, a whale, a monkey, or a dolphin an education with such advantages, and science suggests that there is nothing, in principle, barring them from being just as intelligent as a human. Humans are special through a quirk of evolution: we communicate complex ideas, remember, reason, manipulate our environment, and record our experiences. This allows us to exert control in ways unavailable to other animals.
Some seemingly bizarre consequences seem to arise from this perspective; any network with particular qualities in connective architecture, processing capacity, external sensors, and the ability to interact with an environment has the possibility of being conscious. A forest, a garden, a vast bacterial mat, a system of many individual units like an ant colony, and other forms of life may host consciousness comparable to our own experience. Given education and the requisite apparatus, we may find it possible to communicate with those networks, despite the radically alien and disparate forms of experience they might undergo.
If your priors include mysticism, religion, magic, or other sources outside the realm of rational thinking, this might not be the argument for you. If you don't have a particular attachment to those ways of thinking, then recognize where they exert influence on your ideas and update your priors all the way: brains cause consciousness. There's nothing particularly magical about the mechanics of it; the magic is all in the thing itself. By understanding a thing, we can aspire to behave more ethically, and to include all forms of consciousness in answering the question of how to make life as good as possible for the most people... and we might have to update what we consider to be people to include the lions, tigers, and bears.
FrancisMoodie|1 year ago
Vecr|1 year ago
Animats|1 year ago
Too many people have written books about consciousness. There's much tail-chasing in that space, all the way back to Aristotle. Write one about common sense. Current AI sucks at common sense. We can't even achieve the level of common sense of a squirrel yet.
Working definition of common sense: getting through the next 30 seconds of life without a major screwup.
cornholio|1 year ago
ben_w|1 year ago
Others define it as "knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument", which LLMs absolutely do demonstrate.
What you ask for, "getting through the next 30 seconds of life without a major screwup", would be passed by the Waymo autopilot 99.7% of the time.
tgaj|1 year ago