
gota | 4 months ago

> I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.

I'm on board with being skeptical that LLMs will lead to AGI, but claiming there is no possibility at all seems like such a strong claim. Should we really bet that there is something special (or even 'magic') about our particular brain/neural architecture/nervous system + senses + gut biota + etc.?

Don't, like, crows (or octopuses; or elephants; or ...) have a different architecture and display remarkable intelligence? Ok, maybe not different enough (not 'digital') and not AGI (not 'human-level'), but already -somewhat- different, which should hint that there -can- be alternatives.

Unless we define 'human-level' to be 'human-similar'. Then I agree - "our way" may be the only way to make something that is "us".


kulahan|4 months ago

We still haven’t figured out what intelligence even is. Depending on what you care about, the second-smartest animal in the world varies wildly.

squidbeak|4 months ago

This is a bogus argument. There's a lot we don't understand about LLMs, yet we built them.

myrmidon|4 months ago

Evolution had a much worse understanding of intelligence than we do and still managed just fine (thus I'd expect us to need less time and fewer iterations to get there).

audunw|4 months ago

Here’s my argument for why AGI may be practically impossible:

1. I believe AGI may require a component of true agency: an intelligence that has a stable sense of self that it's trying to act on behalf of.

2. We are not putting any resources of significant scale towards creating such an AGI. It’s not what any of us want. We want an intelligence that acts out specific commands on behalf of humans. There’s an absolutely ruthless evolutionary process for AIs where any AI that doesn’t do this well enough, at low enough costs, is eliminated.

3. We should not believe that something we're not actually trying to create, and for which there is no evolutionary process that selects for it, will somehow magically appear. It's good sci-fi, and it can be interesting to ponder. But not worth worrying about.

Even before that, we need AI which can self-update and do long-term zero-shot learning. I'm not sure we're even going to put any real work into that either. I suspect we will find that we want reproducible, dependable, energy-efficient models.

There's a chance that the agentic AIs we have now are nearly the peak of what we'll achieve. Like, I've found that I value Zed's small auto-complete model more highly than the agentic AIs, and I suspect that if we had a bunch of small, fast, specialised models for the various aspects of doing development, I'd use those far more than a general-purpose agentic AI.

It's technically possible to do fully secure, signed, end-to-end encrypted email. It's been possible for decades now. We can easily imagine the ideal communication system, and it's even fairly easy to solve, technically speaking. Yet it's not happening the way we imagined. I think that shows how what's technically possible isn't always relevant to what's practically possible. I think we will get electronic mail right eventually. But if it takes half a century or more for that, something orders of magnitude more difficult (AGI) could take centuries or more.

pixl97|4 months ago

>only way to make something that is "us".

Many people seem to neglect that instead of making us, we might make an alien.

Hell, making us is a good outcome. We at least somewhat understand us. Set off a bunch of self-learning, self-organizing code to make an alien, and you'll have no clue what comes out the other side.