top | item 45516954

estomagordo | 4 months ago

What's a sign it's going to happen ever?

an0malous|4 months ago

I used to believe in AGI but the more AI has advanced the more I’ve come to realize that there’s no magic level of intelligence that can cure cancer and figure out warp drives. You need data, which requires experimentation, which requires labor and resources of which there is a finite supply. If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources. Isn’t that what the greatest minds in cancer research would say as well? Why do we think that just being more rational or being able to compute better than humans would be sufficient to solve the problem?

It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.

lossolo|4 months ago

Generally, I agree, but it also depends on perspective. Intelligence exists on many levels and manifests differently across species. From a monkey's standpoint, if they were capable of such reflection, they might perceive themselves as the most capable creatures in their environment. Yet humans possess cognitive abilities that go far beyond that: abstract reasoning, cumulative culture, large-scale cooperation, etc.

A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.

As humans, we can easily visualize and reason about 2D and 3D spaces; that's natural, because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.
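A toy sketch of how badly intuition fails up there (my own illustration, using random points; not anything from the comment above): in high dimensions, pairwise distances concentrate, so "near" and "far" nearly stop being distinct notions.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(dim, n_points=200):
    """Ratio of (max - min) pairwise distance to the mean distance
    for random points in the unit cube of the given dimension."""
    pts = rng.random((n_points, dim))
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, computed for all pairs at once
    sq = (pts ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * pts @ pts.T
    d = np.sqrt(np.clip(d2, 0.0, None))
    d = d[np.triu_indices(n_points, k=1)]  # distinct pairs only
    return (d.max() - d.min()) / d.mean()

for dim in (2, 10, 100, 10_000):
    # The printed spread shrinks as dim grows: in very high dimensions,
    # every point is roughly the same distance from every other point.
    print(dim, round(float(distance_spread(dim)), 3))
```

In 2D the nearest and farthest pairs differ enormously; in 10,000 dimensions the relative spread collapses toward zero, which is exactly the regime our 3D-trained intuition has no handle on.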

Now imagine a form of intelligence that can directly perceive and reason about such high-dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all; it could simply simulate outcomes internally.

Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.

SideburnsOfDoom|4 months ago

Agreed.

And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.

Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.

No debate will be entered into on this topic by me today.

regularfry|4 months ago

AGI isn't a synonym for smarter-than-human.

kragen|4 months ago

Eliezer’s short story “That Alien Message” provides a convincing argument that humans are cognitively limited, not data-limited, through the device of a fictional world where people think faster: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...

> Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message, cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity, with which it has correlation exactly zero, across the probable worlds you imagine.

> But nothing I've depicted this human civilization doing, even begins to approach the theoretical limits set by the formalism of Solomonoff induction.

This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.
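The halving limit in the quote is just the counting argument behind binary search: with N hypotheses, even perfectly informative yes/no answers need at least ceil(log2(N)) of them to single one out. A toy sketch (my own illustration, not from the story):

```python
import math

def bits_needed(n_hypotheses):
    # Information-theoretic floor: each bit at best halves the
    # hypothesis set, so you need at least ceil(log2(N)) bits.
    return math.ceil(math.log2(n_hypotheses))

def identify(secret, n):
    """Locate `secret` among the hypotheses range(n) via binary
    search, counting the yes/no answers ("bits") spent."""
    lo, hi, bits = 0, n - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        bits += 1              # one maximally informative yes/no question
        if secret <= mid:
            hi = mid           # the answer eliminates the upper half...
        else:
            lo = mid + 1       # ...or the lower half
    return lo, bits

print(identify(719, 1024), bits_needed(1024))  # (719, 10) 10
```

Ten questions pin down one hypothesis out of 1024, and no questioning strategy can beat that floor; the story's point is that humans operate nowhere near it.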

——

Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:

> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)

> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.

——

Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time: we already have complete genomes for many species, AlphaFold has solved the protein-folding problem, research oncology studies routinely sequence tumors nowadays, and IHEC says they already have "comprehensive sets of reference epigenomes". So with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time, with enough fidelity to simulate cancer, thus enabling us to test candidate drug molecules against a particular cancer instantly.

Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.

Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.

——

There does seem to be an open question about how general intelligence is. We know that there isn't much difference in intelligence between people; 90+% of the human population can learn to write a computer program, make a pit-fired pot from clay, haggle in a bazaar, paint a realistic portrait, speak Chinese, fix a broken pipe, interrogate a suspect and notice when he contradicts himself, fletch an arrow, make a convincing argument in court, program a VCR, write poetry, solve a Rubik's cube, make a béchamel sauce, weave a cloth, sing a five-minute lullaby, sew a seam, or machine a screw thread on a lathe. (They might not be able to learn all of them, because it depends on what they spend time on.)

And, as far as we know, no other animal species can do any of those things: not chimpanzees, not dolphins, not octopodes, not African grey parrots. And most of them aren't instinctive activities even in humans—many didn't exist 1000 years ago, and some didn't exist even 100 years ago.

So humans clearly have some fairly flexible facility that these other species lack. "Intelligence" is the usual name for that facility.

But it's not perfectly general. For example, it involves some degree of ability to imagine three-dimensional space. Some of the humans can also reason about four- or five-dimensional spaces, but this is a much slower and more difficult process, far out of proportion to the underlying mathematical difficulty of the problem. And it's plausible that this is beyond the cognitive ability of large parts of the population. And maybe there are other problems that some other sort of intelligence would find easy, but which the humans don't even notice because it's incomprehensible to them.

NoMoreNicksLeft|4 months ago

Humans. There are arrangements of atoms that if constructed and activated, act perfectly like human intelligence. Because they are human intelligence.

Human intelligence must be deterministic; any other conclusion is equivalent to the claim that there is some sort of "soul", for lack of a better term. If human intelligence is deterministic, then it can be written in software.

Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.

My guess is that we need less than 5 million years further development time even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.

SideburnsOfDoom|4 months ago

> Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term.

No, not all processes follow deterministic Newtonian mechanics. It could also be random, unpredictable at times. Are there random processes in the human brain? Yes: there are random quantum processes in every atom, and there are atoms in the brain.

Yes, this is no less materialistic: humans are still proof that either souls (or something like them) exist, or that human-level intelligence can be made from material atoms. But it's not deterministic.

But also, LLMs are not anywhere close to becoming human level intelligence.

Paradigma11|4 months ago

> Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term.

Determinism is a metaphysical concept like mathematical platonism or ghosts.

lm28469|4 months ago

> Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen.

~200 years of industrial revolution and we've already fucked up beyond the point of no return. I don't think we'll have the resources to continue on this trajectory for 1m years. We might very well be accelerating towards a brick wall; there is absolutely no guarantee we'll hit AGI before hitting the wall.

diffeomorphism|4 months ago

> if deterministic, then can be done in software.

You just need a few Dyson spheres and someone omniscient to give you all the parameter values. Easy peasy.

Just like cracking any encryption: you just brute force all possible passwords. Perfectly deterministic decryption method.
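(A back-of-the-envelope sketch, with made-up but generous numbers, of why "deterministic" doesn't mean "tractable":)

```python
# Exhaustive password search is perfectly deterministic, but the
# search space grows exponentially with password length.
ALPHABET = 62            # a-z, A-Z, 0-9
GUESSES_PER_SEC = 1e12   # a generous trillion guesses per second

for length in (8, 12, 16):
    space = ALPHABET ** length
    years = space / GUESSES_PER_SEC / (3600 * 24 * 365)
    print(f"{length} chars: {space:.2e} candidates, ~{years:.2e} years")
```

At a trillion guesses per second, 8 characters fall in minutes, but 16 characters take over a billion years. "Deterministic, therefore doable" skips exactly this step.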

</s>

sampton|4 months ago

There need to be breakthrough papers, or hardware that can expand context size exponentially, or a new model architecture that can address long-term learning.