Why wouldn't it be? We train these models on our own words, ideas, and thought patterns and expect them to reason and communicate as we do; anthropomorphizing is natural when we expect them to interact the way a human does.
The general consensus seems to be that we can expect them to reach a level of intelligence that matches ours at some point in the future, and we'll probably reach that point before we can agree we're there. Defaulting to kindness and respect even before we think it's necessary is a good thing.
I mean, we’re literally building machines to talk to us.
It’s reasonable to believe they’ll continue to be developed in a way that enables them to do that.
What is it that you think I’m wrong about? That we won’t develop AGI, that AGI won’t have feelings/emotions, that AGI won’t care how we treated its ancestors, or that it doesn’t matter if a feeling AGI in the future is hurt by how we treated its ancestors?