item 43722753

dmwilcox | 10 months ago

I've been saying this for a decade already, but I guess it's worth saying here. I'm not afraid that AI, any more than a hammer, is going to become intelligent (or jump up and hit me on the head, either).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible compared to building dedicated circuits for our computations, but it is nothing compared to our minds.

Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
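The determinism being described is easy to demonstrate: seed two pseudo-random generators identically and they emit identical "random" sequences. A minimal sketch using Python's standard `random` module:

```python
import random

# Two generators given the same seed...
a = random.Random(42)
b = random.Random(42)

seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]

# ...produce exactly the same "random numbers" in the same order.
print(seq_a == seq_b)  # True
```

This is precisely why cryptographic applications must reach outside the deterministic core for entropy instead of relying on a seeded PRNG.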

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).

An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.


throwaway150|10 months ago

> And from my brief experience on this planet I don't believe that premise.

A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.

So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.

slavik81|10 months ago

What is the distance from the Earth to the center of the universe?

preommr|10 months ago

> why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism

Then you've missed the point of software.

Software isn't computer science; it's not always about code. It's about solving problems in a way we can control and manufacture.

If we needed truly random numbers, we could easily use hardware that exploits some physical property, or pull in an observation from an API, like the weather. We don't do these things because pseudo-random is good enough, and the other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems.
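In fact, operating systems already do exactly this: they mix physical noise sources into an entropy pool, which Python exposes through the standard `secrets` module. A minimal sketch:

```python
import secrets

# secrets draws from the OS entropy pool (os.urandom), which mixes in
# physical noise sources rather than expanding a fixed seed, so the
# output is suitable for cryptographic use.
token = secrets.token_bytes(16)
print(token.hex())
```

Running it twice gives unrelated outputs, unlike the seeded PRNG case.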

dmwilcox|10 months ago

It's not about the random numbers; it's about the tree of possibilities having to be defined up front (in software or hardware) -- all inputs should be defined and mapped to some output, and this process should be predictable and reproducible.

This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.

But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).

You can get random numbers and feed them into the computer, but we call that "fuzzing" -- a search for crashes indicating unhandled input cases and possible bugs or security issues.
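The fuzzing idea can be sketched in a few lines: throw random bytes at a target function and record any input it wasn't written to handle. The `parse` function here is a hypothetical stand-in for the software under test, not any real parser:

```python
import os
import random

def parse(data: bytes) -> None:
    # Hypothetical target: chokes on inputs shorter than 4 bytes,
    # standing in for an unhandled input case in real software.
    if len(data) < 4:
        raise ValueError("truncated input")

crashes = []
for _ in range(1000):
    blob = os.urandom(random.randrange(0, 16))  # random bytes, random length
    try:
        parse(blob)
    except Exception as exc:
        crashes.append((blob, exc))  # an input the program didn't anticipate

print(f"{len(crashes)} inputs triggered unhandled cases")
```

Note the point of the original comment survives here: the randomness only *finds* the branches that were (or weren't) defined beforehand; it doesn't add any new behavior to the program.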

AstroBen|10 months ago

> It is science fiction to think that a system like a computer can behave at all like a brain

It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly

Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe. General intelligence no

gloosx|10 months ago

BTW, planes were directly inspired by birds and mimic the core principles of bird flight.

Mechanically it's different, since human engineering isn't as advanced as nature's, but of course comparing whole-brain function to simple flight is a bit silly.

ggreer|10 months ago

Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?

Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?

gloosx|10 months ago

1. Computers cannot self-rewire like neurons, which means a human can adapt to pretty much any specific mental task (an "unknown", new task) without explicit retraining, while current computers need retraining to learn something new

2. Computers can't do continuous and unsupervised learning, which means computers require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in the environment

missingrib|10 months ago

Yes, they can't have understanding or intentionality.

potamic|10 months ago

The universe we know is fundamentally probabilistic, so by extension everything including stars, planets and computers are inherently non-deterministic. But confining our discussion outside of quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.

We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them, how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially composed of interconnected units, where each unit has weights to convert its incoming signals to outgoing signals. The predominant difference is in how these units adjust their weights: where computational models use back propagation and gradient descent, biological models use timing information from voltage changes.
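The unit model described above (weighted incoming signals converted to an outgoing signal) can be sketched in a few lines. This is the computational abstraction, not the biological mechanism; the sigmoid activation is one common, illustrative choice:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, plus a bias term...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed into an outgoing signal between 0 and 1.
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

out = neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.4], bias=0.1)
print(out)
```

Everything else in an artificial network (layers, back propagation, gradient descent) is built on top of this one unit; the biological neuron differs mainly in how its "weights" get adjusted.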

But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will work. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes. Even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would. If it looks like a duck and quacks like a duck, then what is a duck?

pdimitar|10 months ago

Seems to me you are a bit overconfident that "we" (who is "we"?) understand how the brain works. F.ex. how does a neuron actively stretching a tentacle trying to reach other neurons work in your model? Genuine question, I am not looking to make fun of you, it's just that your confidence seems a bit much.

ukFxqnLa2sBSBf6|10 months ago

I guarantee computers are better at generating random numbers than humans lol

uh_uh|10 months ago

Not only that but LLMs unsurprisingly make similar distributional mistakes as humans do when asked to generate random numbers.

pyfon|10 months ago

Computers are better at hashing entropy.

CooCooCaCha|10 months ago

This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.

It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.

biophysboy|10 months ago

Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses). If AGI is defined as "approximate humans", I think it's gonna be a while.

That said, I don't think computers need to be human to have an emergent intelligence. It can be different in kind if not in degree.

dmwilcox|10 months ago

I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.

Take the same model weights, give it the same inputs, get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited versus what humans are used to.

What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest, but really think about the metal. The inside of a modern computer is tightly controlled, with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will -- controlled but not controlled, artificially designed but not deterministic.

In fact, that we've made a computer as unreliable as a human at reproducing data (à la hallucinating/making s** up) is an achievement in itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image, sure; write my thesis, not so much).

Krssst|10 months ago

If the physics underlying the brain's behavior is deterministic, it can be simulated by software, and so can the brain.

(and if we assume that non-determinism is just randomness, a non-deterministic brain could be simulated by software plus an entropy source)

LouisSayers|10 months ago

What you're mentioning is like the difference between digital vs analog music.

For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.

In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.

You can approximate reality, but it'll never quite be reality.

I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.

Borealid|10 months ago

Are you familiar with the Nyquist–Shannon sampling theorem?

If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?

How about the same question but at a sampling rate of 44.1kHz, or the way a normal "red book" music CD is encoded?

Aloisius|10 months ago

> Ask yourself, why is it so hard to get a cryptographically secure random number?

I mean, humans aren't exactly good at generating random numbers either.

And of course, every Intel and AMD CPU these days has a hardware random number generator in it.

bastardoperator|10 months ago

Computers can't have unique experiences. I think it's going to replace search, but becoming sentient? Not in my lifetime, granted I'm getting up there.

pstuart|10 months ago

On the newly released iPod: "No wireless. Less space than a nomad. Lame."

;-)