gurkendoktor|3 years ago

Not you specifically, but I honestly don't understand how positive many in this community (or really anyone at all) can be about this news. Tim Urban's article explicitly touches on the risk of human extinction, not to mention all the smaller-scale risks from weaponized AI. Have we made any progress on preventing this? Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?

Even the best-case scenario that some are describing, of uploading ourselves into some kind of post-singularity supercomputer in the hopes of being conscious there, doesn't seem very far from plain extinction.

idiotsecant|3 years ago

I think the best-case scenario is that 'we' become something different than we are right now. The natural tendency of life (on the local scale) is toward greater information density. Chemical reactions beget self-replicating molecules beget simple organisms beget complex organisms beget social groups beget tribes beget city states beget nations beget world communities. Each one of these transitions looks like the death of the previous thing, but in actuality the previous thing is still there, just as part of a new whole. I suspect we will start with natural people and transition to some combination of people whose consciousness exists, at least partially, outside the boundaries of their skulls, people who are mostly information on a computing substrate outside a human body, and 'people' who no longer have much connection with the original term.

And that's OK. We are one step toward the universe understanding itself, but we certainly aren't the final step.

37ef_ced3|3 years ago

Let's be real.

Not long from now, all creative and productive work will be done by machines.

Humans will be consumers. Why learn a skill when it can all be automated?

This will eliminate what little meaning remains in our modern lives.

Then what? I don't know, who cares?

blueblob|3 years ago

I feel exactly the opposite. AI has not yet posed any significant threats to humanity other than issues with the way people choose to use it (tracking citizens, violating privacy, etc.).

So far, we have task-driven AI/ML. It solves a problem you tell it to solve. Then you, as the engineer, need to make sure it solves the problem well enough for your purposes. So if something goes wrong, it really still seems like a human failing.

So I'm wondering why there is so much concern that AI is going to destroy humanity. Is the theoretical AI that's going to do this even going to have the actuators to do so?

Philosophically, I don't have an issue with the debate, but the "AI will destroy the world" side doesn't seem to have any tangible evidence. It seems to me that people take it as a given that AI could eliminate all of humanity, yet they don't support that argument in the least. From my perspective, it appears to be fearmongering because people watched and believed Terminator. It appears uniquely out-of-touch.

JohnPrine|3 years ago

Agreed. People imagine the best-case scenario without seriously considering everything that can go wrong. If we stay on this path, the most likely outcome is human extinction. Full stop.

JoeAltmaier|3 years ago

Says a random internet post. It takes a little more than hyperbole to be convincing: some evidence, or an argument.

alm1|3 years ago

Mechanized factories failed to kill humanity two hundred years ago, and the Luddite movement against them seems comical today. What makes you think extinction is most likely?

stefs|3 years ago

This path will indeed lead to human extinction, but the path is climate change. AI is one of the biggest remaining hopes for reversing it. From my perspective, even if AI does kill us all, it's most likely still a less painful death.

londons_explore|3 years ago

> Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?

If we manage to make a 'better' replacement for ourselves, is it actually a bad thing? Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake. An AI made by us could well make us extinct. Is that a bad thing?

gurkendoktor|3 years ago

Your comment summarizes what I worry might be a more widespread opinion than I expected. If you think that human extinction is a fair price to pay for creating a supercomputer, then our value systems are so incompatible that I really don't know what to say.

I guess I wouldn't have been so angry about any of this before I had children, but now I'm very much in favor of prolonged human existence.

JoeAltmaier|3 years ago

We have Neanderthal and Denisovan DNA (and DNA from two more lineages besides). Our cousins are not exactly extinct; we are a blend of them. Sure, no pure strains exist, but we are not a pure strain either!

goatlover|3 years ago

> If we manage to make a 'better' replacement for ourselves, is it actually a bad thing?

It's bad for all the humans alive at the time. Do you want to be replaced and have your life cut short? For that matter, why should something better replace us rather than coexist with us? We don't think killing off all other animals would be a good thing.

> Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake.

It's just how evolution played out. But if there were another hominid still alive alongside us, advocating for its extinction because we're a bit smarter would be considered genocidal and deeply wrong.

tim333|3 years ago

> happy with deprecating humanity because our replacement has more teraflops?

For me, immortality is a bigger thing than the teraflops. Also, I don't think regular humanity would be got rid of; it would continue in parallel.