stoniejohnson | 1 year ago
But if you assume that we have created something that is agentic and can reason much faster and more effectively than us, then us dying out seems very likely.
It will have goals different from ours, since it isn't us, and the idea that they will all be congruent with our homeostasis needs evidence.
If you simply assume:
1. it will have different goals (because it's not us)
2. it can achieve said goals despite our protests (it's smarter by assumption)
3. some goals will be in conflict with our homeostasis (we would be competing for resources, given our shared location, Earth)
then we all die.
I just think this is silly because of the assumption that we can create some sort of ASI at all, not because of the syllogism that follows from it.
(As an intuition pump, we can hold on the order of ones of things in our working memory. Imagine facing a foe who can hold on the order of thousands of things when deciding in real time, or even millions.)
card_zero | 1 year ago
I'm also unconvinced by the idea that rapid reasoning can reach meaningful results, without a suitably rapid real world environment to play with. Imagine a human, or 8 billion humans if you like, cut off from external physical reality, like brains in jars, but with their lives extended for a really long time so that they can have a really good long think. Let them talk to one another for a thousand years, even, let them simulate things on computers, but don't allow them any contact with anything more physical. Do they emerge from this with a brilliant plan about what to do next? Do they create genius ideas appropriate for the year 3000? Or are they mostly just disoriented and weird?
stoniejohnson | 1 year ago
My reasoning is simple: there is a whole class of problems that require embodiment, and I assume ASI would be able to solve those problems.
Regarding
> Point 1 is a big assumption. I am also not you, and although it's true that I have different goals, I share most of your human moral values and wish you no specific harm.
Yeah, I also agree this is a huge assumption. Why do I make it? Well, to achieve cognition far beyond ours, they would have to be different from us by definition.
Maybe morals/virtues emerge as you become smarter, but I feel like that shouldn't be the null hypothesis here. This is entirely vibes-based; I don't have a syllogism for it.
jfoster | 1 year ago
As Geoffrey Hinton points out, power accumulation is a generally useful subgoal of almost any task. In other words, you can assume that a very intelligent AI will not just be smarter than us, but will also accumulate power for anything you ask it to do, simply in order to do that thing more effectively.
Imagine if everyone had access to a magic genie. Eventually someone is going to wish for something bad.