[+] [-] naringas|6 years ago|reply
>“We are going to be shocked by the speed, the chaos, the bloodiness, and the friction of a future fight in which this will be playing out, maybe in microseconds at times. How do we envision that fight happening? It has to be algorithm against algorithm,” Shanahan said during a conversation with former Google CEO Eric Schmidt and Google VP of global affairs Kent Walker.
I think this article's focus is way off.
It's more plausible that algorithmic warfare will make us unsure of what is even happening. We will be lost, drowned in conflicting yet plausible information, to the point that we won't even know what is going on or whom to believe.
[+] [-] bane|6 years ago|reply
I agree with you. Since WW2, the trend in military weaponry and intelligence has been toward precision.
- Instead of blowing up a submarine, why not disable the one critical crew member needed for it to launch?
- Instead of leveling a city to stop arms production, why not just infect the control systems of the factories so that only plows can be manufactured, and not swords?
- Instead of destroying a piece of artillery, why not cause a single screw to fall out so that it can no longer fire on target?
- Better yet, why not cause a fire in the iron mine so that the gun can't even be manufactured?
To be effective, the affected parties wouldn't even know they were being targeted. They wouldn't even know they were at war. The lowest-friction path your enemy civilization can take is the one you want them on, and over time that's the one they'll inevitably settle into, frustrated by their inability to make meaningful war on you.
In a perfect scenario, you might not even know you're attacking them either.
The series "The Golden Oecumene" explores this concept. In the story, the entire world's military might has been reduced to a single sophisticated computer A.I. and a single man, who is usually kept cryogenically frozen and brought out only under extreme circumstances for specific operations -- operations that might include severing a single synaptic connection in a target, using an x-ray laser stationed in orbit, in order to change the target's mind.
[+] [-] vsareto|6 years ago|reply
What is the target of algorithmic warfare supposed to be? This concept makes no sense to me.
Submarine warfare makes sense: you kill subs and ships and control waters. What is the algorithm seeking to control?
It doesn't even seem like a new kind of warfare, just a decision maker for existing types of warfare. So it basically has rules for land, air, sea, and internet warfare and ties everything together?
Is the US President basically going to enter "control Russia" at a command line, and the AI plans and executes a cold/hot war? This seems laughable.
[+] [-] mlb_hn|6 years ago|reply
I think the disinformation you're thinking about matters [0], but being able to project power to the right place at the right time requires a lot of logistical planning and materiel even if the information can be trusted. There are a lot of military areas where AI can inform traditional bureaucratic processes, from planning to command and control [1].
DOD maintains a lot of troops/bases/relationships around the world, and my guess is they want to be optimizing all of them as much as possible and be able to adapt policy to changes. Like any large organization, it's hard for the higher-ups to have a good grasp of what's going on at lower echelons. They've had some pretty bad policy decisions handed to them from not being able to explain what was going on (e.g. the disbanding of the Baath party in Iraq). Being able to algorithmically integrate the information they've got and present it to policy makers has been an unsolved problem in its own right, even without the disinformation stuff.
[0] Whaley, B. (1982). "Toward a General Theory of Deception". https://www.tandfonline.com/doi/abs/10.1080/0140239820843710...
[1] MCDP 1 (1997). "Warfighting". https://www.marines.mil/Portals/1/Publications/MCDP%201%20Wa...
[+] [-] Damorian|6 years ago|reply
Who's to say that we're not in the middle of it now? Lots of little surgical strikes that look like Russian hackers, heart attacks, ethical lapses, and folks caught in unlucky situations.
[+] [-] m463|6 years ago|reply
Meanwhile, in the late-twentieth-century phase of the arms race, the role of unpredictable chance increased. When hours (or days) and miles (or hundreds of miles) separate defeat from victory, and therefore an error of command can be remedied by throwing in reserves, or retreating, or counterattacking, then there is room to reduce the element of chance. But when micromillimeters and nanoseconds determine the outcome, then chance enters like a god of war, deciding victory or defeat; it is magnified and lifted out of the microscopic scale of atomic physics. The fastest, best weapons system comes up against the Heisenberg uncertainty principle, which nothing can overcome, because that principle is a basic property of matter in the Universe. It need not be a computer breakdown in satellite reconnaissance or in missiles whose warheads parry defenses with laser beams; if a series of electronic defensive impulses is even a billionth of a second slow in meeting a similar series of offensive impulses, that is enough for a toss of the dice to decide the outcome of the Final Encounter.
Unaware of this state of affairs, the major antagonists of the planet devised two opposite strategies. One can call them the "scalpel" and the "hammer." The constant escalation of pay-load megatonnage was the hammer; the improvement of detection and swift destruction in flight was the scalpel. They also reckoned on the deterrent of the "dead man's revenge": the enemy would realize that even in winning he would perish, since a totally obliterated country would still respond -- automatically and posthumously -- with a strike that would make defeat universal. Such was the direction the arms race was taking, and such was its destination, which no one wanted but no one knew how to avoid.
How does the engineer minimize error in a very large, very complex system? He does trial runs to test it; he looks for weak spots, weak links. But there was no way of testing a system designed to wage global nuclear war, a system made up of surface, submarine, air-launched, and satellite missiles, antimissiles, and multiple centers of command and communications, ready to loose gigantic destructive forces in wave on wave of reciprocal atomic strikes. No maneuvers, no computer simulation, could re-create the actual conditions of such a battle.
Increasing speed of operation marked each new weapons system, particularly the decision-making function (to strike or not to strike, where, how, with what force held in reserve, at what risk, etc.), and this increasing speed also brought the incalculable factor of chance into play. Lightning-fast systems made lightning-fast mistakes. When a fraction of a second determined the safety or destruction of a region, a great metropolis, an industrial complex, or a large fleet, it was impossible to achieve military certainty. One could even say that victory had ceased to be distinguishable from defeat. In a word, the arms race was heading toward a Pyrrhic situation.
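The passage's claim that nanosecond margins turn victory into a coin toss can be illustrated with a toy Monte Carlo sketch (all numbers hypothetical): once a system's average speed advantage falls below its timing jitter, the winner of each exchange is effectively random.

```python
import random

def duel(mean_edge_ns: float, jitter_ns: float, trials: int = 100_000) -> float:
    """Fraction of exchanges won by the 'faster' system, whose response
    arrives mean_edge_ns sooner on average, with Gaussian timing jitter."""
    wins = 0
    for _ in range(trials):
        attacker = random.gauss(0.0, jitter_ns)
        defender = random.gauss(-mean_edge_ns, jitter_ns)  # faster on average
        if defender < attacker:  # defender's impulse arrives first
            wins += 1
    return wins / trials

random.seed(0)
# Speed edge large relative to jitter: outcome is nearly deterministic.
print(duel(mean_edge_ns=50.0, jitter_ns=5.0))  # ~1.0
# Edge far below jitter: victory degenerates to a coin toss.
print(duel(mean_edge_ns=0.1, jitter_ns=5.0))   # ~0.5
```

The numbers are invented for illustration; the point is only the qualitative transition from certainty to chance as the margin shrinks below the noise floor.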
[+] [-] joyjoyjoy|6 years ago|reply
War, as in pitched battles on a field, has been going away for a long time. Perpetual war, unfortunately, is back.
[+] [-] gumby|6 years ago|reply
If you believe Clausewitz's dictum that "war is diplomacy by other means", then disinformation and the continual, slow, background degradation of perceived enemies' infrastructure will become the standard mode of large nation states, with actual shooting left as the guerrilla/terrorist activity of those who can't afford the electronic ops. That, too, has been the nature of the world since forever (David/Goliath, the US revolution, WWII partisans, Al Qaeda... and so many in between).
And BTW, once you have this capability, wouldn't you use it internally to "ensure domestic tranquility"?
Once again, The Sheep Look Up and Stand on Zanzibar are highly predictive.
[+] [-] RcouF1uZ4gsC|6 years ago|reply
One of the big areas where AI will have a huge impact on warfare is logistics. When fighting in hostile territory (which pretty much all US wars have involved since the Civil War), one of the most vulnerable parts is the supply train. Having to devote humans to driving and guarding the supplies is expensive and risk-prone.
Instead, I imagine that supplies will be brought to the front lines via self-driving vehicles watched over by armed drones that are programmed to destroy anything that tries to interfere with the supply convoy.
Just doing that one thing will have huge implications for the way wars are fought and the costs associated with them.
[+] [-] KineticLensman|6 years ago|reply
> Instead, I imagine that supplies will be brought to the front lines via self-driving vehicles watched over by armed drones that are programmed to destroy anything that tries to interfere with the supply convoy.
You only need to wreck the lead vehicle in a convoy to stop those behind making any progress. A high-tech adversary could do this by a deep precision strike, and a low-tech enemy could simply emplace some mines or IEDs (remotely triggered).
[+] [-] bransonf|6 years ago|reply
We discovered and innovated heavily on amazing pieces of technology (Computers, Web, Phones, Efficient Algorithms). And seemingly, the two foremost use cases have become captivating people’s attention to sell them ads, and threatening the lives of people we disagree with ideologically (war, mass surveillance). Just look at what China has done to the Uighurs.
I often worry about the direction we’re heading. While it is undeniable that fewer people are dying due to war, maybe we’re just moving the agony elsewhere.
For every startup that begins with the ambition to make money, there is a shortage of startups that actually aim to address a problem. Yet somehow we manage to convince ourselves that food delivery, ride sharing, or scooters are going to fix our most primal issues.
[+] [-] moltensodium|6 years ago|reply
I have never met a startup founder who wanted to address a problem or improve the world. Instead they all want to make sure the Series A investors are happy. Nothing else really matters besides doing whatever the investors think will make the number go up.
One time I almost got involved with a person who truly wanted to improve the world, but I didn't get the job.
Outside of that one person I almost met 15 years ago, literally every founder in tech I've interacted with is just trying to run the Ponzi scheme and get to that exit event -- consequences be damned, employees be damned, profit be damned.
Maybe there are some good people in tech, but I still haven't met them after a whole career in this industry. Just a lot of people who make the number go up like they're told to.
*edit: I think this comment will be very poorly received and downvoted to -100000 karma, but honestly typing it out has made me realize I just fucking hate this industry with every fiber of my being. Every new platform and channel and tool gets completely subverted by ever more intrusive and personalized advertising, and the total data-surveillance ecosystem of the big tech giants ensures that they will be able to kill or consume any truly good business idea before it gets off the ground. I think I'm done. Fuck tech. Fuck computers. What a waste of a career.
[+] [-] caseysoftware|6 years ago|reply
It's been a while now, but while I (briefly) worked on my Master's, we analyzed technology development and what drove it.
Surprisingly (at the time), war and sex turned out to drive the vast majority of it. GPS, microwaves, the internet, and many medical treatments had roots in national defense. VHS winning over Betamax and high-speed internet adoption came from porn. We identified hundreds of big and little things across many industries.
[+] [-] dsfyu404ed|6 years ago|reply
>And seemingly, the two foremost use cases have become to captivate people’s attention to sell them ads, and to threaten the lives of people we disagree with ideologically.
In my highly unscientific observation those use cases are minuscule in comparison to entertainment (Netflix/Hulu/YouTube shows, funny cat videos, arguing with idiots on Reddit, reading the news, reading blogs, pornography, reading HN, etc, etc).
[+] [-] Braggadocious|6 years ago|reply
It's society deciding who breeds and who doesn't. It's a belief that has existed since Francis Galton: that we can quantify the traits of humans and, just as Gregor Mendel controlled the color of flowers, control those traits too.
[+] [-] RcouF1uZ4gsC|6 years ago|reply
> We discovered and innovated heavily on amazing pieces of technology (Computers, Web, Phones, Efficient Algorithms).
A lot of the research efforts that discovered these things were funded by governments for the purpose of conducting war more effectively and efficiently. The internet was created to maintain command and control in the event of a nuclear war. A lot of operations research was about improving factory production of war materiel. The desire to simulate nuclear explosions drove a lot of supercomputing research (look at all the Department of Energy contracts for supercomputers).
Throughout human history, the desire to efficiently kill other humans without being killed yourself has driven a lot of technology development, and this is unlikely to change in the future.
[+] [-] bumblebee4|6 years ago|reply
How can you escape survival of the fittest? With 8 billion people, it's a numbers game. Even if one player or another is driven by more interesting motivations, is that a strategic advantage in the competition for the fittest business processes?
[+] [-] freeflight|6 years ago|reply
It's kind of weird how the US DoD complains about a lack of computation power and has to ask the private sector for help, while the NSA [0] apparently has no problem running its own AI program.
Imho this is way more interesting and relevant than those somewhat far-flung future concepts about fully autonomous warfare.
[0] https://arstechnica.com/information-technology/2016/02/the-n...
[+] [-] ganzuul|6 years ago|reply
>“We are going to be shocked by the speed, the chaos, the bloodiness, and the friction of a future fight in which this will be playing out, maybe in microseconds at times. How do we envision that fight happening? It has to be algorithm against algorithm,” Shanahan said during a conversation with former Google CEO Eric Schmidt and Google VP of global affairs Kent Walker.
This will also push the front line of war back in time, because democracy is vulnerable to psy-ops.
For example, there is a lot of hatred brewing against the Chinese government caused by unverifiable claims of concentration camps. There doesn't need to be a state actor behind it. There doesn't even need to be a conspiracy. Individual people are willing to cause plausibly deniable harm for even imaginary profit, and they can exploit PageRank-type algorithms based on intuition alone. Those algorithms weren't designed for war, but it is difficult to claim they played no part in the current round of unrest in the Middle East.
The exploitation of psychology for mass marketing has been going on since shortly after WWII. Market forces, which are also algorithmic in nature, are currently pushing to make mobile devices ever more addictive, and it would be surprising if using AI to accelerate this process were a new idea. With 2 billion users and no ethics, Facebook has become a kind of shadow government, with the power to have non-users like me frozen out by omission by my own family. It is crazy not to be terrified of what they have done.
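For reference, the "pagerank-type algorithms" mentioned above reduce to an iterative link-vote computation. A minimal power-iteration sketch (toy graph and standard 0.85 damping factor, both chosen for illustration) shows why manufactured inbound links can move a page up the ranking:

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Toy PageRank via power iteration over an outbound-link dict."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outs in links.items():
            if not outs:  # dangling page: spread its rank everywhere
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                for out in outs:
                    new[out] += damping * rank[page] / len(outs)
        rank = new
    return rank

# A page that attracts many inbound links dominates the ranking --
# which is exactly the signal coordinated actors try to manufacture.
toy = {"a": ["c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy)
print(max(ranks, key=ranks.get))  # 'c'
```

Real search and feed rankers are far more elaborate, but the core vulnerability is the same: the algorithm treats links (or shares, or engagement) as honest votes.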
[+] [-] arcticfox|6 years ago|reply
> For example, there is a lot of hatred brewing against the Chinese government caused by unverifiable claims of concentration camps.
"Unverifiable claims"? What? Even the Chinese government admits they exist/existed [0].
Overall I think your point that it's overwhelmingly easy to confuse the truth these days is exactly right, but in your example you are the subject, not the people you're pointing the finger at.
[0] https://www.nytimes.com/2019/07/30/world/asia/china-xinjiang...
[+] [-] emilfihlman|6 years ago|reply
But calling perpetual competition "war" is questionable. We have always been influencing each other; that's not war.
[+] [-] est|6 years ago|reply
These technologies are funded by defence industries in the first place.
https://steveblank.com/secret-history/
[+] [-] s_y_n_t_a_x|6 years ago|reply
We see the largest strides in innovation when we're pitted against each other and the stakes are high and clear.
[+] [-] remarkEon|6 years ago|reply
So, A Taste of Armageddon[1]?
[1]https://en.wikipedia.org/wiki/A_Taste_of_Armageddon
[+] [-] carapace|6 years ago|reply
I mean, technology is a general amplifier; we should choose what to amplify.
[+] [-] unknown|6 years ago|reply
[deleted]
[+] [-] scarmig|6 years ago|reply