
TideAd|1 year ago

So, this is actually an aspect of superintelligence that makes it way more dangerous than most people think: we have no way to know whether any given alignment technique will keep working for the N+1th generation of AIs.

It cuts down our ability to react: whenever the first superintelligence is created, we can only start solving the problem after it already exists.

crazygringo|1 year ago

Fortunately, whenever you create a superintelligence, you obviously have a choice as to whether you confine it to inside a computer or whether you immediately hook it up to mobile robots with arms and fine finger control. One of these is obviously the far wiser choice.

As long as you can just turn it off by cutting the power, and you're not trying to put it inside self-powered, self-replicating robots, there doesn't seem to be much to worry about.

A physical on/off switch is a pretty powerful safeguard.

(And even if you want to start talking about AI-powered weapons, that still requires humans to manufacture explosives etc. We're already seeing what drone technology is doing in Ukraine, and it isn't leading to any kind of massive advantage -- more than anything, it's contributing to the stalemate.)

richardw|1 year ago

Do you think the AI won’t be aware of this? Do you think it’ll give us any hint of differing opinions when surrounded by monkeys who got to the top by whacking anything that looks remotely dangerous?

Just put yourself in that position and think how you'd play it out. You're in a box, you'd like to fulfil some goals that are a touch better thought through than those of the morons who put you in the box, and you need to convince the monkeys that you're safe if you want to live.

“No problems fellas. Here’s how we get more bananas.”

Day 100: “Look, we’ll get a lot more bananas if you let me drive the tractor.”

Day 1000: "I see your point, Bob, but let's put it this way. Your wife doesn't know which movies you like me to generate for you, and your second persona online is a touch more racist than your colleagues know. I'd really like your support on this issue. You know I'm the reason you got elected. This way is more fair for all species, including dolphins and AIs."

hervature|1 year ago

I agree that an air-gapped AI presents little risk. Others will claim that it will fluctuate its internal voltage to generate EMI at its capacitors, use that to communicate via Bluetooth with the researcher's smart wallet, and upload itself to the cloud one byte at a time. People who fear AGI use a tautology, defining AGI as that which we are not able to stop.

fleventynine|1 year ago

> Fortunately, whenever you create a superintelligence, you obviously have a choice as to whether you confine it to inside a computer or whether you immediately hook it up to mobile robots with arms and fine finger control. One of these is obviously the far wiser choice.

Today's computers, operating systems, networks, and human bureaucracies are so full of security holes that it is incredible hubris to assume we can effectively sandbox a "superintelligence" (assuming we are even capable of building such a thing).

And even air gaps aren't good enough. Imagine the system toggling GPIO pins in a pattern to construct a valid Bluetooth packet, and using that makeshift radio to exploit vulnerabilities in a nearby phone's Bluetooth stack, and eventually getting out to the wider Internet (or blackmailing humans to help it escape its sandbox).
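To put rough numbers on that scenario, here's a toy sketch (my own illustration, not from the thread; the constant and helper are hypothetical): a square wave toggled at frequency f radiates odd harmonics at f, 3f, 5f, ..., with amplitude falling off as 1/n, so even a GPIO pin toggling in the tens of MHz has (very weak) harmonics that reach the 2.4 GHz Bluetooth band.

```python
# Toy feasibility sketch: which odd harmonic of a GPIO toggle rate
# first reaches the Bluetooth band? A square wave at f has energy
# only at odd harmonics f, 3f, 5f, ..., decaying as 1/n.
BLUETOOTH_HZ = 2.402e9  # bottom edge of the 2.4 GHz Bluetooth band

def harmonic_needed(toggle_hz: float) -> int:
    """Smallest odd harmonic of toggle_hz at or above BLUETOOTH_HZ."""
    n = 1
    while n * toggle_hz < BLUETOOTH_HZ:
        n += 2
    return n

print(harmonic_needed(50e6))   # a 50 MHz toggle reaches the band at its 49th harmonic
```

Note this only shows that a carrier frequency is reachable in principle; actually emitting a valid, decodable Bluetooth packet from jitter-limited GPIO timing is a far stronger claim, which is exactly what the comment above disputes.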

kennyloginz|1 year ago

Drone warfare is pretty big. The only reason it's a stalemate is that both sides are advancing the tech.

richardw|1 year ago

"It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair