top | item 21531080


bronz | 6 years ago

i say this with respect and humility, but i am very surprised at the naivete with which John addressed the subject of AGI. he is so casual about it -- not only the idea of working on it but also the idea of it existing at all. he seems oblivious to the gravity of that discovery. it is not just "very valuable," it will be earth-shattering and will probably wipe out humanity. and it's his side project. and his son will help out.

John is the perfect representation of what is wrong with people's attitude toward AGI. aloof and naive.

discuss


K0SM0S|6 years ago

I'll say this: I'd prefer it if the brightest minds approached the matter from the "AI safety" angle (a subfield concerned with building not just AI but "safe" AI, i.e. AI that we can control or understand in a practical manner).

Because that's really where the line of human history will be drawn if AGI and beyond becomes real. How advanced we are in AI safety will directly map to civilization's progress or endangerment as a result of AI.

Edit: this is already true with regard to "psychological safety" from undue influence or outright manipulation with motive (usually financial) by current "ANI" algorithms (newsfeeds, "recommendations", ads, etc.). It's a real topic that reduces to human psychological freedom, free will. It's a BIG topic.

zelly|6 years ago

It would be easier to take the AI doomsayers seriously if we were remotely close to AGI. For now it's treated the same way as some guy in a cape in Central Park trying to summon Satan: no one cares, because everyone knows it's basically impossible.

asadlionpk|6 years ago

I think the really naive ones are the people who are doing it as a day job.

popup21|6 years ago

Oh those are some spicy peppers!

randomidiot666|6 years ago

He might as well casually work on Faster Than Light travel, or a Grand Unified Theory.

whamlastxmas|6 years ago

What is your source for saying AGI will probably wipe out humanity? How could we even attempt to guess at the motivations of something we can barely comprehend and that doesn't even exist yet?

goatlover|6 years ago

The main concern is not that it's like Skynet and wishes us harm, but that it does harmful things because the means it chooses to accomplish its goals are at odds with human values -- something its human creators never anticipated, since the AGI comes up with its own solutions. And because the AGI doesn't share human values, it doesn't care whether its solutions are harmful.

AnimalMuppet|6 years ago

> and probably wipe out humanity.

Probably? I do not think that word means what you think it means... or else I don't think the balance of probability lies where you think it does.

ageofwant|6 years ago

You are saying "probably wipe out humanity" is a bad thing?

I know of several million species that would strongly disagree. Especially if AGIv1 decides to tune the genetics of say most mammals to append 'sapient' to the end of their species name.

Perhaps more constructively, consider that AGI is simply the next iteration of 'humanity'. Yeah, sure, the old versions are redundant anachronisms and, apart from some living reserve specimens, functionally extinct, but nobody cares, since you can sim one up at almost no cost.

Bizarro|6 years ago

You know of 0 species that would strongly disagree, because those several million species don't have the capacity to agree or disagree on whether "wiping out humanity is a bad thing". And you don't speak for them, no matter how much you think you care about the planet.

bronz|6 years ago

AGI is not the next iteration of humanity, because it will not resemble humanity in any way besides being sentient in some capacity. you will feel quite silly if you get to see it in your lifetime.