top | item 40842963

sonink | 1 year ago

> If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.

I believe it is almost certain that we will make something like this and that it will out-compete us. The bigger problem here is that too few people believe this to be a possibility. And by the time this certainty becomes apparent to a larger set of people, it might be too late to tone this down.

AI isn't like the Atom Bomb (AB). The AB didn't have agency. Once the AB was built, we still had time to think about how to deploy it, or not. We had time to work toward a global consensus to limit its use. But once AI manifests as AGI, it might be too late to shut it down.

mylastattempt|1 year ago

I very much agree with this line of thought. It seems the default mode of operation for humans is to think only about what is possible within the foreseeable future, rather than to imagine a reality that includes what seems impossible at the time.

In my opinion, this is easily noticeable when you try to discuss any system, political or economic, that spans multiple countries and interests. People will just revert to whatever is closest to them, rather than foreseeing the larger cascading results of some random event.

Perhaps this is more of a rant than a comment; apologies. I suppose it would be interesting to have an online space to discuss where things are headed on a logical level, without emotion, ideals, or the ridiculous idea that humanity must persevere. Just thinking through what could happen in the next 5, 10, and 99 years.

sonink|1 year ago

> I suppose it would be interesting to have an online space to discuss where things are headed on a logical level, without emotion and ideals and the ridiculous idea that humanity must persevere.

Absolutely. Happy to be part of it if you are able to set it up.

hollerith|1 year ago

>the ridiculous idea that humanity must persevere.

Could you expand on what you mean by this? Specifically, is it OK with you if progress in AI causes the death of all the original-type human people like you and me?

tivert|1 year ago

> I believe it is almost certain that we will make something like this and that they will out-compete us. The bigger problem here is that too few people believe this to be a possibility. And when this becomes certainty becomes apparent to a larger set of people, it might be too late to tone this down.

I think the bigger problem is that too many people are focused on short term things like personal wealth or glory.

The guy who makes the breakthrough that enables the AGI that destroys humanity will probably win the Nobel Prize. That potential Nobel probably looms larger in his mind than any doubt that his achievement is actually a bad thing.

The guy who employs that guy, or who productionizes his idea, will become a mega-billionaire. That potential wealth and power probably looms larger in his mind than any doubts, too.

hollerith|1 year ago

That is why the government should help the researcher and the tycoon do the right thing by shutting down the AI labs and banning research, teaching, and publishing on frontier AI capabilities.

visarga|1 year ago

> Once AB was built we still had time to think how to deploy it, or not.

It's in human hands; we can hardly trust the enemy, or even ourselves. We have already come close to extinction a couple of times.

I presume that when ASI emerges, one of its top priorities will be to stop the crazies with big weapons from killing us all.

mensetmanusman|1 year ago

It can’t out-compete us on the global level due to energy constraints.

It would require a civilization to consciously bond with it for it to gain that capability (in such a way that serving it enhances the survival of the humans doing so). I'm not sure this would be competition in the normal sense.

rerdavies|1 year ago

The problem will not be the AIs; the problem will be who owns the AIs, and how we will control them.