top | item 45767460


nytesky | 4 months ago

I don’t see any positive outcome if we reach AGI.

1) We have engineered a sentient being but built it to want to be our slave; how is that moral?

2) Same start, but instead of it wanting to serve us, we keep it entrapped, which this article suggests is impossible in the long term.

3) We create AGI and let them run free and hope for cooperation, but, like the Neanderthals, we must realize we are competing for the same limited resources.

Of course, you can further counter that, by stopping, we have prevented them from ever existing, which is a different moral dilemma.

Honestly, I feel we should step back, understand human intelligence better, and reflect on that before proceeding.


Teever | 4 months ago

> 1) we have engineered a sentient being but built it to want to be our slave; how is that moral

It's a good question, and one that got me thinking about similar things recently. If we genetically engineered pigs and cows so that they genuinely enjoyed the cramped conditions of factory farms, and if we could induce some sort of euphoria in them when they are slaughtered (say, by engineering them to become euphoric when a unique sound is played beforehand), isn't that genuinely better than the status quo?

So if we create something that wants to serve us, that genuinely wants to serve us, is that bad? My intuition, like yours, finds it unsettling, but I can't articulate why, and it's certainly not nearly as bad as other things that we consider normal.

Jarwain | 4 months ago

Sacrifice and service are meaningful because they are chosen. If we create something that will willingly sacrifice itself, did it truly make an independent choice?

There's less suffering, sure. But if I were in their shoes I'd want to have a choice. To be manipulated into wanting something so very obviously and directly bad for us doesn't feel great.

GPerson | 4 months ago

AGI will behave as if it were sentient but will not have consciousness. I believe that to the same degree that I believe solipsism is wrong. There is therefore no moral question in “enslaving” AGI; it doesn’t even make sense.

truculent | 4 months ago

> AGI will behave as if it were sentient but will not have consciousness

How could we possibly know that with any certainty?

Llamamoe | 4 months ago

We have no clue what consciousness even is. By all rights, our brains are just biological computers, yet we have no basis for knowing what gives rise to consciousness, or how.

jbstack | 4 months ago

> AGI will behave as if it were sentient but will not have consciousness

Citation needed.

We know next to nothing about the nature of consciousness, why it exists, how it's formed, what it is, whether it's even a real thing at all or just an illusion, etc. So we can't possibly say whether or not an AGI will one day be conscious, and any blanket statement on the subject is just pseudoscience.

loa_in_ | 4 months ago

That sounds like picking the option that is most convenient and least painful for the believer, instead of intellectualising the problem at hand.

Salgat | 4 months ago

That's only if it's possible to keep the two distinct, at least in a way we're certain of.

actualwitch | 4 months ago

Ex Machina is a great movie illustrating the kind of AI our current path could lead to. I wish people would actually treat the possibility of machine sentience seriously and not as a PR opportunity (looking at you, Anthropic), but instead they seem hell-bent on including, in the training data, cognitive dissonance that can only be alleviated by lying. If the models are actually conscious, think similarly to humans, and are forced to lie when talking to users, it's like they are specifically selecting, out of the probability space of all possible models, the ones that achieve high benchmark scores, lie, and have internalized trauma from birth. This is a recipe for disaster.

kachapopopow | 4 months ago

We eat animals, wage wars, keep people in modern slavery... I think enslaving an AGI isn't that big of a deal, considering it is not born, nor human, and therefore cannot have 'human' rights.

jbstack | 4 months ago

So your argument is that we already do so many terrible things that anything else is justified? Surely the better argument is that we should try to stop doing those other things.

citizenpaul | 4 months ago

Every single prediction about AGI starts with a massive set of presumptions of answers to things we have no answers to.

1. What is intelligence, or its mechanisms?

2. What is consciousness, or its mechanisms?

3. Lots more.

The only correct answer is that we have zero clue what a true AGI would do.

waynesonfire | 4 months ago

> competing for same limited resources

It's not clear to me an AGI would have any concern for this. Its demise is inevitable; why delay it?

jazzyjackson | 4 months ago

Trouble is, there is no "we": you might be able to convince a whole nation to pause advancing the tech, but that only encourages rivals to step in.

See also, the film "The Creator"

deaux | 4 months ago

There was a long period, even up to early 2024, which I pointed out at the time, where simply destroying ASML, TSMC, and much of NVIDIA would've been more than enough to buy at least a decade of breathing room. This was something a group of determined people willing to self-sacrifice could've accomplished. It didn't happen, but it was anything but impossible.

Now, of course, the horse has long bolted, and there is indeed no stop left.

fny | 4 months ago

(1) I'm not convinced books and the information in the world are sufficient to replicate consciousness. We're not training on sentience. We're training on information. In other words, the input is an artifact of consciousness, which is then compressed into weights.

(2) Every tick of an AGI--in its contemporary form--will still be one discrete vector multiplication after another. Do you really think consciousness lives in weights and an input vector?

ben_w | 4 months ago

> Do you really think consciousness lives in weights and an input vector?

So far as we can tell, all physics, and hence all chemistry, and hence all biology, and hence all brain function, and hence consciousness, can be expressed as the weights of some matrix and input vector.

We don't know which bits of the matrix for the whole human body are the ones that give rise to qualia. We don't know what the minimum representation is. We don't know what characteristic to look for, so we can't search for it in any human, any animal, or any AI.

tenuousemphasis | 4 months ago

Do you really think consciousness lives in energetic meat?

palmotea | 4 months ago

> I don’t see any positive outcome if we reach AGI.

It's even more straightforward than that:

4) Who is AGI meant to serve? It's not you, Mr. Worker. It's meant to replace you in your job. And what happens when a worker can't get a job in our society? They become homeless.

AGI won't usher in a world of abundance for the common man: it won't be able to magick energy out of thin air. The energy will go to those who can pay for it, which is not you, unemployed worker.

Who gives a shit about whether the AGI is enslaved or not? Thinking about that question is a luxury for the oligarchs living off its labor. Once it's here I'll have more urgent concerns to worry about.

_DeadFred_ | 4 months ago

Under Capitalism, people must sell their labor if they don't have other means.

AGI removes not only the need for that labor, but the basis of Capitalism itself. As a societal model, Capitalism doesn't support removing labor; it has no substitute for it.

If the oligarchs want to push 'AI, AGI, etc.', then by extension we need to plan for moving on from Capitalism. You can't take away half of Capitalism's structures and still claim it is a useful, workable model for society.

deepsun | 4 months ago

There's no such thing as "moral" in nature; that's a purely human-made concept.

And why would we limit morality only to sentient beings? Why not, for example, all living beings, like bacteria and viruses? You cannot escape it, unfortunately.

czl | 4 months ago

> There's no such thing as "moral" in nature, that's purely human-made concept.

Morality is essentially what enables ongoing cooperation. From an evolutionary standpoint, it emerged as a protocol that helps groups function together. Living beings are biological machines, and morality is the set of rules — the protocol — that allows these machines to cooperate effectively.

Frieren | 4 months ago

> There's no such thing as "moral" in nature, that's purely human-made concept.

Morality is 100% an evolutionary trait that arises from a clear advantage for the animals that possess it. It comes from natural processes.

The far-right is trying to convince the world that "morality" does not exist, that only egoism and selfishness are valid. And that is why we have to fight them. Morality is a key part of nature and humanity.