kogus|10 months ago
Asimov's argument here is that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better than human beings could ever be at absolutely everything. Even now, we are not far from a prompt accepting a request such as "Write another volume of the Foundation series, in the style of Isaac Asimov" and returning a complete novel that does not need editing, does not need review, and is equal to or better in quality than the original novels.
When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
mperham|10 months ago
Read some philosophy. People have been wrestling with this question forever.
https://en.wikipedia.org/wiki/Philosophy
In the end, all we have is each other. Volunteer, help others.
quxbar|10 months ago
Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.
lm28469|10 months ago
We already live lives which are artificial in almost every way. People used to die of physical exhaustion and malnutrition; now they die of lack of exercise and gluttony. Surely we could have stopped somewhere in the middle. At that point it's not a resource or technology problem, it's a societal/political one.
charlie0|10 months ago
Another possibility is to not let ourselves scale. I thought Logan's Run was a very interesting take on this.
jillesvangurp|10 months ago
So far, AIs aren't really part of the evolutionary race for survival. We create them. We allow them to run. And then we shut them down. Maybe there will be some AI-enhanced people who start doing better. And maybe the people bit becomes optional at some point. At that point you might argue we've just morphed/evolved into whatever that is.
dominicrose|10 months ago
"We" don't control ourselves. If humans can't find enough energy sources in 2200, that doesn't mean they couldn't in 1950.
It would be pretty bad to lose access to energy after having it, worse than never having it IMO.
The number of new technologies discovered in the past 100 years (a tiny amount of time) is insane, and we haven't adapted to them, not in a stable way.
empath75|10 months ago
Comparative advantage. Even if that's true, AI can't possibly do _everything_. China is better than most countries on earth at manufacturing pretty much anything, but that doesn't mean China is the only country in the world that does manufacturing.
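A toy sketch of the comparative-advantage arithmetic (the producer names and productivity numbers below are made up for illustration, not from the thread): even a producer who is absolutely worse at both goods still has the lower opportunity cost for one of them, so specialization and trade benefit both sides.

    # Toy comparative-advantage arithmetic (illustrative numbers only).
    # "strong" is absolutely better at producing both goods, yet "weak"
    # still has the lower opportunity cost for gadgets, so both sides
    # gain by specializing and trading.

    # Output per worker-hour: (widgets, gadgets)
    producers = {"strong": (4.0, 2.0), "weak": (1.0, 1.0)}

    for name, (widgets, gadgets) in producers.items():
        # Opportunity cost of one widget = gadgets forgone to make it.
        print(f"{name}: 1 widget costs {gadgets / widgets:.2f} gadgets")

    # strong: 1 widget costs 0.50 gadgets  (cheaper widget maker)
    # weak:   1 widget costs 1.00 gadgets  (so "weak" is the relatively
    # cheaper gadget maker, and exports gadgets despite being worse at
    # both goods in absolute terms)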
Philpax|10 months ago
Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.
belter|10 months ago
The world already hosts millions of organic AI (Actual Intelligence). Many are statistically at genius-level IQ. Does their existence make you obsolete?
Philpax|10 months ago
Depends on your definition of "intelligence." No, they can't reliably navigate the physical world or have long-term memories like cats or dogs do. Yes, they can outperform them on intellectual work in the written domain.
> Does their existence make you obsolete?
Imagine if, for everything you tried to do, there was someone else who could do it better, no matter what domain, no matter where you were, and no matter how hard you tried. You would not be an economically viable member of society. Some could deal with that level of demoralisation, but many couldn't.
foobarian|10 months ago
Folding laundry
giraffe_lady|10 months ago
“I don't like cleaning or dusting or cooking or doing dishes, or any of those things," I explained to her. "And I don't usually do it. I find it boring, you see."
"Everyone has to do those things," she said.
"Rich people don't," I pointed out.
Juniper laughed, as she often did at things I said in those early days, but at once became quite serious.
"They miss a lot of fun," she said. "But quite apart from that--keeping yourself clean, preparing the food you are going to eat, clearing it away afterward--that's what life's about, Wise Child. When people forget that, or lose touch with it, then they lose touch with other important things as well."
"Men don't do those things."
"Exactly. Also, as you clean the house up, it gives you time to tidy yourself up inside--you'll see.”
nthingtohide|10 months ago
Let me paint a purpose for you which could take millions of years. How about building an Atomic Force Microscope equivalent which can probe Calabi-Yau manifolds to send messages to other multiverses?
js8|10 months ago
This complementarity already exists in our brains. We have evolutionarily older parts of the brain that deal with our basic needs through emotions, and an evolutionarily younger neocortex that deals with rational thought. They have a complicated relationship; both can influence our actions through mutual interaction. Morality is managed by both, and neither is necessarily more "humane" than the other.
In my view, AI will be just another layer, an additional neocortex. Our biological neocortex is capable of tracking the un/cooperative behavior of around 100 people in a tribe, and allows us to learn a couple of useful skills for life.
The "personal AI neocortex" will track behavior of 8 billion people on the planet, and will have mastery of all known skills. It is gonna change humans for the better, I have little doubt about it.