top | item 46229058

obruchez | 2 months ago

It's difficult to know what people really believe, especially after only a few minutes of discussion, but I would say most people I talk to don't believe AGI is even possible. And they probably think their life won't be changed much by LLMs, AI, etc.

dmurvihill|2 months ago

I believe AGI is possible. Also that LLMs are a dead end as far as that goes.

roenxi|2 months ago

I haven't heard a good argument for why AGI isn't already here. It has average humans beat and seems generally to be better-than-novice in any given field that requires intelligence. They play Go, they write music, they've read Shakespeare, they are better at empathy and conversation than most. What more are we asking AI to do? And can a normal human do it?

Peritract|2 months ago

I think you should consider carefully whether AI is actually better at these things (especially whether any one model is better at all of them), or whether your ability to judge quality in these areas is flawed or limited.

plastic-enjoyer|2 months ago

> they are better at empathy and conversation than most

Imagine the conversations this guy must have with people IRL lol

superultra|2 months ago

I’d say an increasingly common strand of thought is that the way LLMs work is so wildly different from how we humans operate that they are effectively an alien intelligence pretending to be human. We have never fully understood, and still don’t, why LLMs work the way they do.

I’m of the opinion that AGI is an anthropomorphizing of digital intelligence.

The irony is that as LLMs improve, they will both become better at “pretending” to be human, and even more alien in the way they work. This will become even more true once we allow LLMs to train themselves.

If that’s the case, then I don’t think human criteria are really applicable here, except as an evaluation of how it relates to us. Perhaps your list is applicable to LLMs relative to humans, but many think we need some new metrics for intelligence.

Ekaros|2 months ago

I would expect a sufficient "General Intelligence" to be able to correct itself in the process. I hear way too often that you need to restart something to get it to work. That doesn't sound sufficient for general intelligence yet. For that, you should be able to leave it running all the time and have it learn and progress at run-time.

We have a bunch of tools for specific tasks. Again, that doesn't sound general.

kkapelon|2 months ago

>What more are we asking AI to do? And can a normal human do it?

1. Learn and improve yourself with each action you take

2. Create better editions/versions of yourself

3. Solve problems in areas you were not trained for, simply by trial and error, where you yourself decide whether what you are doing is right or wrong

oxag3n|2 months ago

> What more are we asking AI to do? And can a normal human do it?

Simple: go through onboarding training, chat with your new colleagues, and start producing value.

lynx97|2 months ago

> they are better at empathy

Are you serious or sarcastic? Do you really consider this empty sort of sycophancy to be empathy?

kjhkjhksdhksdhk|2 months ago

Exist in realtime. They don't; we do.

exasperaited|2 months ago

> they are better at empathy and conversation than most.

Do you know actual people? Even literal sociopaths are a bit better at empathy than ChatGPT (I know because I have met a couple).

And as for conversation? Are you serious? ChatGPT does not converse in a meaningful sense at all.