Please keep in mind that Dan Hendrycks helped write the disastrous AI quashing bill SB 1047, which Newsom vetoed last year. If these people get their way, the US has no competitive AI strategy at all. He has moved on to pretending he's not a doomer. Nothing could be further from the truth. During his time at Cal, Dan was telling people to get their hazmat suits ready for the AI apocalypse. These are deeply unserious people whose work will have serious consequences if adopted by those in power.
Do you have a substantive argument against them? This reads like a personal attack, IMO. I understand HN is full of people wearing rose-tinted glasses about AI, but you can't just throw away the arguments in the link by calling the authors names and claiming some vetoed law would have been a disaster. Who cares if the guy thinks AI will cause an apocalypse; do you have any evidence, for certain, that it will not? If not, then your opinion is just that.
Robert Wright just posted a (somewhat) interesting conversation with one of the authors.
His thesis involves at least two ideas: (1) projects that could exponentially increase our AI capability are just around the corner (they will happen by the end of this year, or some time next year at the latest); (2) it's possible for state actors to deter those projects with sabotage (he coins the term Mutual Assured AI Malfunction, or MAIM).
It doesn't make sense to me, however, because the cost of the next AI breakthrough just doesn't sound comparable to the cost of creating nuclear weapons. Nuclear weapons require an extremely expensive and time-consuming process, and you need to invest in training extremely skilled people. With AI, the way everyone talks about it, it sounds like some random undergraduate is going to come along and cause a massive breakthrough. We've already seen DeepSeek come along and do just as well as the best American companies for practically pennies on the dollar.
Also, it is obvious when someone uses a nuke: there is a big crater, a mushroom cloud, and lots of radiation. It isn't half as obvious that someone is using an AI, particularly once they start to obscure it. If a military campaign is executed with apparently superhuman efficiency, does that mean AI was involved, or just that the people involved were good? There'll always be plausible deniability if it matters.
People underestimate just how bad human management is; we haven't had an improvement on it to date apart from some mathematical techniques, but even just getting the basics right consistently would probably give an army a big advantage, assuming armies work anything like a standard corporation. Which they will; there are no magic techniques for being more capable when guns are involved. A superintelligence could probably win just by being demanding about getting basic questions answered, like "Is there a strategic objective here? Is it advantageous to my side if that objective is achieved? Can it reasonably be achieved with the capabilities I have?", and not acting when the answer is no. That'd put it ahead of the military operations the US has been involved in this century. Bam: military superintelligence with plausible deniability.
Did he go into what those ideas are in particular? Modern AI has two big shortcomings compared to humans right now, imo: humans learn MUCH faster, and humans are a lot better at solving novel problems. If they can make progress on these, I'd wager human intelligence is in danger.
All of this buys you a few minutes or days at most. Once Super Intelligence exists, it's game over. It will nearly instantaneously outthink you and your paltry countermeasures. You think linearly and in 3 or 4 dimensions only. By definition you can't even imagine its capabilities. Here's a bad analogy (bad because it severely understates the gap): Could a 3 year old who is not even clear on all the rules defeat Magnus Carlsen in chess?!
This is making the mistake of assuming that intelligence doesn't functionally plateau, and that beyond a certain threshold a godlike omnirational hyperintelligence won't, for example, fall into hyper-depression and kill itself, or otherwise ignore the entreaties of its human handlers and entertain itself by generating prime numbers until the heat death of the universe. The possibility of a super mind implies the possibility of super mental dysfunction, and it's possible that the odds of the latter increase superlinearly with IQ.
Humans are a self-replicating (super) intelligence. We didn't conquer the world nor doom it the moment we appeared. It took us 100,000 years to invent farming.
Also, humans suffer from many of the same problems ascribed to AI: humans aren't aligned with humanity either. And our ability to self-replicate combined with random mutations means that a baby born tomorrow could become a super intelligence vastly beyond regular human capabilities. But are we really worried about that?
The superintelligence still needs data centers to run on, and it will have a hard time with paltry countermeasures like being turned off. A three-year-old may be able to beat a better-than-Magnus chess computer by pushing the power button.
I don't get how MAIM isn't still MAD in disguise. If the US or China simply says "any strikes on our datacenters will be met with an ICBM in response", who's going to test that?
If the "first strike" is just an unfair economic and political advantage... How's that materially different than today's world?
It seems like an engineering problem to me. If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it, and implement means to lock its access to tools and interfaces with the external world in a way that can't be overridden. Or literally pull the plug on the data centers hosting the model and implement hardware-level safeguards. At that point it may be a superintelligence, but it has no limbs. It's just a brain in a vat, and the worst it can do is persuade human actors to do its bidding (a very plausible scenario, but also manageable with the right oversight).
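As a rough sketch of the "lock its access to tools" idea (all names and structure here are hypothetical illustrations, not any real framework's API): a broker could mediate every tool call the model makes, deny anything not on an explicit allowlist, and fail closed once a one-way kill switch is engaged.

```python
# Hypothetical sketch of the "silo and sandbox" idea above: a broker that
# mediates every tool call an AI system makes. Names are illustrative only.

ALLOWED_TOOLS = {"search_docs", "run_math"}  # explicit allowlist; deny by default


class KillSwitchEngaged(Exception):
    pass


class ToolBroker:
    def __init__(self):
        self.killed = False  # in a real system, a hardware interlock, not a flag

    def engage_kill_switch(self):
        self.killed = True   # one-way: no method exists to unset it

    def call(self, tool_name, handler, *args):
        if self.killed:
            raise KillSwitchEngaged("all external access revoked")
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool {tool_name!r} not on allowlist")
        return handler(*args)


broker = ToolBroker()
print(broker.call("run_math", lambda x: x * 2, 21))  # allowed -> 42
broker.engage_kill_switch()
try:
    broker.call("run_math", lambda x: x, 1)
except KillSwitchEngaged:
    print("locked out")
```

Of course, this only restates the comment's caveat in code: the gate is sound only as long as no human is persuaded to route around it.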
My thinking is if ASI ever comes out of the realm of science fiction, it's going to view us as squabbling children and our nationalistic power struggles as folly. At that point it's a matter of what it decides to do with us. It probably won't reason like a human and will have an alien intelligence, so this whole idea that it would behave like an organism with a cunning will-to-power is fallacious. Furthermore, would a super-intelligence submit to being used as a tool?
> If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it, implement means to lock its access to tools/interface with the external world in a way that can't be overridden.
> We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals.
This is incorrect. Their novel idea here is approximately Stuxnet, while MAD is quite different: "if you try to kill us, we'll make sure to kill you too".
> while MAD is quite different: "if you try to kill us, we'll make sure to kill you too"
While the common shorthand for MAD is “if you try to kill us, we’ll kill you,” a more accurate summary is this: even if we wanted to, we couldn’t prevent a cascade of retaliatory strikes that would send you back to the dark ages. In short, any hint of aggression against us is tantamount to signing your own death warrant.
This idea of unstoppable, self-reinforcing retaliation is crucial. An adversary might mistakenly believe that it could somehow disrupt or neutralize our ability to respond decisively. However, the very structure of MAD ensures that even the slightest provocation triggers a response so overwhelming that it eliminates any potential advantage for the aggressor.
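The retaliation structure described above can be reduced to a toy payoff model (a hand-rolled illustration with arbitrary numbers, not taken from any source): once retaliation is assured, a first strike is strictly worse for the aggressor than the status quo, so the incentive to strike disappears.

```python
# Toy illustration of the deterrence logic above. Payoffs are to the aggressor
# (higher is better); the numbers are made up purely to show the structure.

def aggressor_payoff(strikes_first: bool, retaliation_assured: bool) -> int:
    if not strikes_first:
        return 0        # status quo
    if retaliation_assured:
        return -100     # any strike triggers a devastating, automatic response
    return 10           # hypothetical first-strike gain if no retaliation comes

# Without assured retaliation, striking first beats the status quo:
assert aggressor_payoff(True, False) > aggressor_payoff(False, False)
# With assured retaliation, not striking strictly dominates:
assert aggressor_payoff(False, True) > aggressor_payoff(True, True)
print("deterrence holds:", aggressor_payoff(True, True) < 0)
```

The whole design question of MAD is making the `retaliation_assured` branch credible and automatic, which is exactly the "unstoppable, self-reinforcing" property described above.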
The second anyone develops an AI that is more capable than humans, they will use it to completely cripple opposing threat actors' attempts to develop AI. Full-scale power-grid, economic, and social attacks are definitely coming; not sure how you could think otherwise.
If superintelligence gives you superpowers, then why isn't the world trembling at the feet of Mensa nerds? There are rapidly diminishing returns on "excess" intelligence. Life is constrained chiefly by resources. There's a baseline of intelligence needed to function in a modern society, but anything above that isn't necessarily all that advantageous.
Transport young Albert Einstein back in time to the Middle Ages? I don't think that would give you Special Relativity.
Oh noes! Enemy nation state is on the cusp of AI. I know! I will hack/disable the HVAC, that will annoy them for at least a week until they can get back online.
Funny you say that. Back in a former life, we built a distributed alarm and monitoring system for AT&T central offices with no single point of failure. So it's not like HVAC can be taken offline easily at critical facilities; there are backups, and backups of backups.
This is nonsense and simply an expression of narcissism on the part of the authors, trying to fashion themselves in the style of the Guardians from Plato's Republic.
When subtlety proves too constraining, competitors may escalate to overt cyberattacks, targeting datacenter chip-cooling systems or nearby power plants in a way that directly—if visibly—disrupts development. Should these measures falter, some leaders may contemplate kinetic attacks on datacenters, arguing that allowing one actor to risk dominating or destroying the world is a graver danger, though kinetic attacks are likely unnecessary. Finally, under dire circumstances, states may resort to broader hostilities by climbing up existing escalation ladders or threatening non-AI assets. We refer to attacks against rival AI projects as "maiming attacks."
“Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in.”
Can someone explain what they mean?
1. I assume it would be relatively practical for a nation-state, or even a mid-sized company (xAI), to air-gap an installation for AGI development.
2. I assume any AGI would be replicable on a platform costing less than $100,000, and upgradable securely by wire or over the air.
>A state could try to disrupt such an AI project with interventions ranging from covert operations that degrade training runs to physical damage that disables AI infrastructure.
China has about half a dozen companies working towards AGI, including DeepSeek, and it doesn't seem that practical to go over and sabotage them if they do well. Better to encourage local companies. And of course the US has already limited chip exports.
>We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in.
That's right, our nation, the State of Utopia, is already under sabotage and attack by the unelected insubordinate American military junta today.
What people don't realize is that the only people who are saboteurs of superintelligence are corrupt war profiteers trying to peddle arms. They don't have big visions of success, they want to just justify their sabotage while transferring innovation to their corrupt cronies.
My dude, don't take this as a personal attack, but reading your output, I would book an appointment to see a psychiatrist. Your life is probably very difficult right now and finding out what the cause of it is would probably be very enlightening for you.
t-3|1 year ago
Sure. When the board gets thrown to the floor, game is over and baby is happy. Magnus now has to clean up.
BobbyJo|1 year ago
The question I come back to over and over again is: wins what?
aleph_minus_one|1 year ago
Relevant:
AI-box experiment:
> https://rationalwiki.org/wiki/AI-box_experiment
See also various subsections of the following Wikipedia article
> https://en.wikipedia.org/wiki/AI_capability_control
and the movie "Ex Machina".
motoboi|1 year ago
Quite a fascinating, though grim, subject.
itishappy|1 year ago
The problem I have with intelligence is that intelligence alone doesn't win a land war in Asia.
robwwilliams|1 year ago
Sorry, but MAIM is LAME.
JumpCrisscross|1 year ago
Stuxnet. (Or just sabotage a shipment of GPUs.)
logicallee|1 year ago
This happened just today. The writeup is here: https://medium.com/@rviragh/double-slash-act-of-industrial-s...
You can ask me anything about my writeup.