top | item 39197034

two_in_one|2 years ago

> 1. For "the singularity" to happen, we probably need something more to happen than just chatGPT to ingest more data or use more processing power.

It's not actually clear what "the singularity" is. Is it something running out of control, or is it still controllable? The line is blurry. People are afraid because they imagine it as a sort of uncontrollable explosion.

The second question is about AGI. What is it? Is it something 'alive', or just a generic AI calculator with no 'creature' features, such as self-preservation?

I think our view of both will change soon as we get a close-up picture, much like the Turing test doesn't look impressive anymore now that even dumb chatbots can pass it.

com2kid|2 years ago

I personally define AGI as a technology capable of improving itself exponentially.

But I realize my definition is in the minority. :/

Of course if we ever manage to make a 1:1 cybernetic brain that works exactly like a human's brain, and is also a complete black box, we'll have achieved AGI. I'm not sure how useful that will be, but I'll have to admit it is AGI.

So maybe I should say, "interesting AGI" is technology that can improve itself exponentially. :-D

greysphere|2 years ago

Yeah, if you could input data set X with quality Q(X) and output data set Y of the same size with Q(Y) > Q(X), you'd really be on to something. But I don't think such a system exists yet, or is even close. The best we have so far takes the internet as input and outputs a sea of garbage with a handful of diamonds that people have to spelunk for. Madlibs is a roughly equivalent activity, and while fun, certainly not anything one would consider AGI. We need a revolutionary improvement in automated spelunking to get anywhere. Maybe we'll get a good spam filter as a side effect!

But even if you had such a system, there's still the resource cost of running the algorithm (it needs to stay bounded, or you've just made a finite jump), and the gains you make need to not decay (or, again, you've just made a finite jump).
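The "gains must not decay" point can be made concrete with a toy model (the function name, the multiplicative-gain setup, and the geometric decay factor are all assumptions for illustration, not anything from the thread): if each round of self-improvement multiplies capability by (1 + g) but g shrinks geometrically, total capability converges to a finite bound — a finite jump — while a non-decaying g takes off without limit.

```python
import math

def total_improvement(initial_gain: float, decay: float, rounds: int) -> float:
    """Toy model: each round multiplies capability by (1 + gain),
    then the per-round gain shrinks by the factor `decay`."""
    capability = 1.0
    gain = initial_gain
    for _ in range(rounds):
        capability *= 1.0 + gain
        gain *= decay
    return capability

# Decaying gains (decay < 1): log(capability) is a sum of
# log(1 + g * r**k) <= g * r**k, a geometric series, so capability
# stays below exp(g / (1 - r)) no matter how many rounds you run.
bounded = total_improvement(0.5, 0.5, 1000)

# Non-decaying gains (decay == 1): plain exponential takeoff.
unbounded = total_improvement(0.5, 1.0, 50)
```

The design point is just that the difference between "singularity" and "one-time boost" lives entirely in whether the per-round gain decays, not in how large the initial gain is.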

And all of this needs to be compared against investing in humans, which pretty clearly have AGI properties (but with some really bad constant factors; a 20-year training time is ridiculous!).

To me it seems things are a long way off, at least a couple of major innovations away. But at least there are some ideas about which problems to tackle, which is a big step up!

ChatGTP|2 years ago

> I personally define AGI as a technology capable of improving itself exponentially.

Have you thought this through, though? You'd first have to know in what way to improve, which gets harder the closer you come to perfection. You'd have to want to become perfect (which, let's be honest, might be boring). And if you kept evolving and improving yourself exponentially, you'd arguably no longer exist, because you'd be morphing into other forms all the time.

In a way, the only thing I can think of that observably does something like this is the universe itself.

two_in_one|2 years ago

> can improve itself exponentially

This is close to the singularity, except with 'does' instead of 'can'. A big difference ;)

We probably need several AGI terms, because a sub-human robot capable of doing many things it wasn't pre-programmed for is sort of it, while still not smart enough to improve itself.

Actually, most humans, the smartest known creatures, cannot improve even current AI. Demanding self-improvement would put an AGI's IQ in the top 0.01% of all known intelligent creatures, which is probably too high a bar for 'just AGI'; we may not recognize it when it's already here. And there is another question: with such an IQ, do we really want to keep it a slave forever?