top | item 39723573

iteygib | 1 year ago

I think that is my overall point though - we created a system (AI) based on how we see one aspect of a particular organ or system (the brain, cortex, etc.), labeled intelligence as 'predictive behavior', and then developed systems after that model. But for starters, only mammals and a few other branches of life have cortices, and cortices weren't always around.

Evolutionary theory isn't hinged on prediction in itself; prediction is just one possible aspect of it. But organisms that rely on prediction, or that primarily see themselves as predictive machines, will claim otherwise, because we cannot do anything but model off what we think we know.

It is also further diluted in the sense that we are always limited in what we can model, because the digital nature of our medium is attempting to model analog systems. It is like saying that the words I am typing right now are just like having a real human conversation. No, not really. It is a diluted form of conversation that focuses on one specific, bare part of the communicative process.

HarHarVeryFunny | 1 year ago

I don't think people are, yet, deliberately creating predictive machines because they see that as the path to intelligence. Things like ChatGPT are LLMs, born out of that (language model) line of research, where the goal has been to learn the rules of language. The fact that a language model, when made large enough, appears somewhat intelligent came as a surprise.

Different species have evolved to have different capabilities. Humans have evolved to be generalists, able to survive in a huge variety of environments, which requires a high degree of adaptability. The key to adaptability is prediction - the ability to very rapidly (in the space of minutes/hours/days - not evolutionary timescales) learn how things work in a new environment or under new conditions.

Not all animals need this degree of adaptability, since they have been able to survive and thrive in long-lasting stable environments. Examples might be crocodiles or sharks - very low intelligence, but great at what they do. Evolution is not generally about prediction or intelligence - it's about optimizing each species for their own environment(s).

We already know how to build machines that are more like crocodiles - great at doing one thing over and over - but now we have the capability and desire to also build machines that are generalists like ourselves, and that requires us to figure out how to implement intelligence. Given how hard a problem this has been (and continues to be) to solve, it makes sense to look to our brains for inspiration: where does our own intelligence come from? It's highly notable that the part of our brain that most differentiates humans from other animals - our large neocortex - appears to be a prediction machine ... In studying humans, no one is saying that other animals are the same - it's just that humans are the animal whose capabilities we are trying to reproduce.

As I said, LLMs being intelligent was an accidental discovery - they were expected just to be language models - but it's certainly notable that the only thing they are trained to do is predict the next word. They do only one thing, predict, and yet they exhibit unexpected intelligence, hmmm ...
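To make "the only thing they are trained to do is predict the next word" concrete, here is a minimal toy sketch of next-token prediction - a bigram model built from made-up data. It shares only the bare objective (guess what comes next from what came before) with a real LLM; nothing here resembles actual LLM internals.

```python
# Toy next-token predictor: count which token follows which, then
# predict the most frequent follower. Corpus is invented for the example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count how often each token follows each preceding token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed next token, or None if unseen."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

An LLM replaces the counting table with a learned neural network over long contexts, but the training signal is the same shape: given the preceding tokens, predict the next one.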

At this point people are NOT yet all saying "prediction is the key to intelligence, so let's build predictive machines and assume they will be intelligent", but when you look at our cortex and at LLMs, that does appear to be the obvious direction.

iteygib | 1 year ago

In this case I would say AI is the crocodile, the same as all life. It is specializing (or becoming specialized) in something - prediction - in the same way a human (or any life that shows the same markers of intelligence as us, like a crow solving a puzzle) can succeed in a new or novel situation. But life does not need this kind of intelligence to survive, which is the basis of evolutionary theory. The trait of adaptability/prediction/intelligence is not always useful in a given niche and can get weeded out, which is why most life does not have it, yet is still around. In organisms that do possess it, it can also be a detriment in specific situations (over-analyzing, getting stuck in anxiety, taking excessive risks to adapt, etc.).

In other words, when we say an LLM is becoming intelligent, it's not that it is in the general sense. It's that we recognize the traits within it because those traits make sense to us and mimic how we define ourselves in terms of specializing - because, quite obviously, we made it and provide its data input. But the key difference is that AI has none of the original impetus or evolutionary pressures that led to our own ability to generalize/specialize. Its output is derived from human input fed into it through digitized means, so there is always some kind of 'loss', since it is a specialized aspect of us.

It is why I made the reference to typing. We are communicating right now, but at the same time it is a specialized form of communication. It is not the full, original human experience of talking to one another - but it does not have to be in this case, because it works well enough and has some advantages given the niche. If we were using FaceTime it would be much closer, but still not quite the same as being in the same room, face to face.

In my opinion, we are not so much prediction machines as mimickers who can also create mimics of themselves via what we make. You do not need to predict all that well if you can just mindlessly copy something that has already succeeded.