top | item 39903038


meat_machine | 1 year ago

from wikipedia:

>Occam's razor is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements

It's a general principle, or _recommendation_, not some law of the universe. Some people might argue that "God wills it" is the simplest explanation for many things, but that doesn't make it true. The simplicity of an explanation doesn't necessarily have anything to do with its validity.

In addition, the introduction of panpsychism, just like the introduction of God into any argument, brings up a whole other set of questions that need to be answered -- additional complexity, which is the opposite of Occam's Razor.

Emergence out of complex systems is arguably the simpler explanation, because it's something that's already been observed, measured, and studied, like storms emerging from simpler principles of weather systems.
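To make "emergence from simple rules" concrete, here's a toy sketch (my own illustration, not from the original comment) using Conway's Game of Life: each cell obeys one purely local rule, yet a "glider" pattern coheres and travels across the grid — motion that nobody programmed in.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells.
    Rule: a cell is alive next tick iff it has exactly 3 live
    neighbors, or is currently alive with exactly 2."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations the identical shape reappears,
# shifted one cell diagonally. Nothing in the rule mentions
# "gliders" or "movement" -- that behavior emerges.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Same idea as storms from weather physics: the interesting behavior lives in the interactions, not in any single part.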

Or take your computer or smartphone -- do you truly understand all the mechanisms that take us from "shocking rocks" to produce series of on/off signals, all the way to things like communicating on the internet or watching videos? Is computing or mathematics some inherent property of silicon? Nearly every part of a computer, on some fundamental level, is a relatively simple mechanism with an almost useless function on its own. Even for engineers who understand every level of abstraction, it must still feel near-miraculous that any of this works, even though these emergent properties are deliberately crafted, well-documented, and understood.
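As a toy illustration of that point (mine, not from the comment): a NAND gate is about the most "useless on its own" mechanism imaginable, yet composing nothing but NANDs gets you arithmetic. Every helper below bottoms out in the single `nand` function.

```python
# One trivial mechanism: NAND. Everything else is built from it.
def nand(a, b):
    return 0 if (a and b) else 1

# Almost-useless parts...
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# ...composed into a 1-bit full adder...
def full_adder(a, b, carry_in):
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

# ...chained into something that "does math" on whole numbers.
def add(x, y, bits=8):
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(57, 42))  # 99: arithmetic out of nothing but NAND
```

Addition is not a property of NAND gates, or of silicon; it's a property of how they're wired together.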

Here's a video about GPT transformers: https://www.youtube.com/watch?v=wjZofJX0v4M

The presenter's ability to not only understand but also effectively communicate these concepts makes him, I'd say, one of the smarter people out there. And yet he remarks, "I don't know about you, but it really doesn't feel like this should actually work." There are still things people don't understand about why AI works the way it does, despite the fact that we built and trained these systems -- feel free to hit up Claude or your favorite resource for examples of emergent properties. LLMs can be passably apt at things they weren't trained for, and exhibit behaviors weirdly similar to people's (like confabulation), despite the fact that their exposure to the world is literally only text.

I'm already imagining ways people could twist this into proof of panpsychism. But the point I'm getting to is that the human body is an absurdly, stupidly complex system of 37 trillion cells. The Milky Way is estimated to have 400 billion stars, at most. As with LLMs, we understand some things about our brains... but the complex interaction of so many parts is far harder to understand. The purpose and value of feeling and awareness as a function of survival isn't a "tall order" -- it's just difficult for the human brain to grasp so many moving parts simultaneously. For some people, the complexity of the eyeball alone is proof that there must be a god -- the sheer magnitude of billions of years of brute-force trial and error is difficult to comprehend.

Human intuition: a potentially powerful, but often error-prone, faculty of the human brain.

>Panpsychism requires that the universe updates its state by conscious choice, which we already know happens

[citation overdue]

I think there are at least two levels of logical fallacy here, not to mention some undefined terms and fuzzy circular logic, but I've already spent too much time on this. I'd say try pasting that into Claude or another "big AI" and see what its critique is.



CuriouslyC | 1 year ago

It isn't just about the simplicity of panpsychism, friend. It's about the big things we'd have to explain if we rule it out -- things we are so far from being able to explain that it makes us look foolish despite all our supposed knowledge. We don't even have a clue how to explain them, despite building atom bombs and space flight and getting close to artificial intelligence. That, to me, speaks volumes.

Please explain to me how emergence could create new "dimensions" that didn't exist before. Every emergent system we've ever observed creates unexpected complexity __WITHIN THE CONFINES AND STRUCTURE OF THE SYSTEM__. What you're describing is like claiming that a flock of seagulls, if it moved in enough unity, could teleport to the other side of the earth -- it makes zero sense within the framework, and only by ejecting from the framework can you salvage the notion.

I don't perfectly understand all the steps from zero to smartphone, but I have had enough education to have a decent overview, and I can gain that knowledge if I seek it. What will you study to understand consciousness?

The "emergence" you're describing in LLM behavior is a jump in capabilities that occurs with scale and complexity, but the LLM is just getting better at what it already does. It isn't magically developing the ability to levitate researchers, which is what "dumb" matter becoming conscious would be like.

The whole "god in the universe" angle is overblown; the root of panpsychism is really this: we, and the rest of the stuff in the universe, can perceive, feel, exercise free will, and make decisions.

I understand it's hard to let go of your human-centric fallacies. The history of science has been brave men having to fight the powers that be to point out the ways in which humans aren't unique or the center of the universe. Particularly if you're Christian, the idea that everything the Bible said about man being God's chosen is bullshit must be a bitter pill to swallow.