Artificial intelligence and war

Bruce Newsome reviews the recently published book: “Strategy, Evolution, and War: From Apes to Artificial Intelligence,” authored by Kenneth Payne and published by Georgetown University Press.

Artificial intelligence (AI) has been explicit in defence practices and policies since at least the 1970s, at least in high-capacity countries, thanks to the exponential growth in the power of electronic computing per unit cost. It was already specified in training and forecasting simulations, decision-making aids, targeting aids, robotics, adaptive navigation systems (as in the Tomahawk cruise missile), and ballistic missile defence. Any child with a video game could experience AI.

AI raced up Western governmental priorities in the 2000s through its application to counter-terrorism. In 2009, the US escalated its cyber capabilities and authorities, partly on the promise of AI; in 2014, the Russians seemed to know first what the defenders of Ukraine were doing, partly because of their integration of AI; and in 2016, Western governments reached a consensus in blaming Russia for unprecedented interference in American and other elections, partly aided by AI.

Last week, a British defence minister justified a restructuring of the British Army due to “rapid developments in artificial intelligence, quantum computing, and robotics”. Just this week, a former US government appointee and current Deputy of the Organization for Economic Cooperation and Development called for international legal control of military use of AI, analogous to controls on nuclear proliferation and the weaponization of space.

Investment in military AI is partly justified by dual-use opportunities outside of defence, such as Britain’s public healthcare, spending on which has topped 7% of GDP per year for the last decade, while defence spending has barely topped 2%. In May of this year, the Prime Minister pledged to revolutionize healthcare with semi-autonomous analysis of patient data, promising to prevent more than 20,000 cancer-related deaths each year by 2033.

Thus, we could use a good book on the prospects for AI. The Americans Peter Singer and Paul Springer were early to provide accessible and practical guides to military automation.

By contrast, Kenneth Payne promises to ground AI in our biological evolution, followed by “how the arrival of culture has modified this evolutionary process.”  The book is short, which is a virtue except when it leaves too much unexplained and unexplored.

The writing is not easy to read. Sections appear and disappear every couple of pages without warning. Too many sentences require re-reading due to inconsistent syntax and terms. The style frequently turns unhelpful, such as describing an area of research as merely “worth keeping in mind.” His introduction to AI starts with his personal journey through science fiction.

The casualness of style overlaps with the looseness of substance. In the tradition of war studies, the approach is idiosyncratically philosophical and historical, while sneering that theorists and empiricists in the social sciences just don’t understand war.

The author’s engagement with the relevant disciplines seems collateral. The first anthropological citation is of Napoleon Chagnon’s claims about indigenous Amazonian warfare, without any acknowledgement of the controversies. The “theory of mind” is introduced without any citations or acknowledgement of the difference between the philosophy and the science. The theories of evolution are mostly linguistic.

Even the citations of military thought smack of a first reading list for an undergraduate in War Studies. The book treats Basil Liddell Hart as the spokesperson for a strategic tradition back to the ancients, but Hart was a military journalist who twisted history to fit his own agendas and ego. The review of the concept of strategy is surprisingly short, relies on the most abstract historians and philosophers of strategy, and ignores the practical work from the cognitive and management sciences, which are highly engaged with AI. At one point (page 28), the author excuses himself from these other disciplines by asserting that his focus is on conflict, but his focus is on AI too.

The book largely ignores the relevant applied sciences and spends most of its space on the evolutionary and cultural roots of strategy. The book keeps promising to explain and to forecast, but avoids theories. For instance, the theories of evolution get a short, selective review that concludes: “the lesson is to treat such ‘just so’ theories sensitively – as theories, with some supporting evidence but with a degree of uncertainty.”

The book offers no explicit theory of its own, but frequently promises both an “approach” and an “argument,” something both “speculative” and “convincing,” something both “thematic” and generalizable. What theory there is comes scattered, wordy, repetitive, and yet far from profound – I got tired of being told that war-making is “inherently” or “intensely psychological,” that a more “sophisticated culture” or “society” will make war more complex. Such propositions are effectively circular anyway.

The second part (on culture) has a chapter on each of Thucydides, Karl von Clausewitz, and nuclear weapons. The book’s justifications for such selectivity are perfunctory: both Thucydides and Clausewitz are described as “timeless”; the justification for nuclear weapons is essentially that AI is analogous in its revolutionary impact on warfare, but the chapter on nuclear weapons begins by asserting that “nuclear weapons are not as revolutionary for strategy as is sometimes thought.” I could find no justification for why other military technologies should not be considered more analogous to AI. Why not electronic computers, which were at the centre of what was termed the “revolution in military affairs” in the 1970s and are directly relevant to AI?

The third and final part promises to focus on AI, but only about 40 of the book’s 270 pages fall in this part.

I was expecting a detailed review of current technologies or capabilities, followed by a review of expert expectations for the future, but the observations are unremarkable or misleading, such as the claim that AI will allow aircraft to be flown unmanned and thence stealthier. In fact, we saw that development without AI, decades ago.

I turned to the next page (166) to find this deflating sentence: “There is no space here to develop a detailed account of the field of artificial intelligence research.” Then I really don’t see the point of the book.

There are some uses of relevant terms such as “deep learning” and “pattern recognition,” but no definitions and no description of their current application in military systems. I found a reference to the autonomous vehicle competition sponsored by the US Defence Department, but only as an example of AI (in the same sentence appears an autonomous chess-playing computer). The same US defence agency runs a competition for autonomous robots that can perform manual and pedestrian tasks like a human, but the book immediately dismisses this because the robots stumbled “with comic effect.” You’ll find no description of what DARPA intends for this competition or has learnt from it, or of what the competitors have tried or learnt.

The book is more engaged with past abstractions than with current practices and policies, such that Clausewitz is referenced in the same section (for analogizing war to a game of cards – a tedious analogy repeated throughout the book).

The book’s forecasts for war are disappointing – too many seem obvious or circular: AI will speed up decision-making; this will “challenge” human control; systems without AI will become obsolete.

Nevertheless, the book keeps spinning these repetitions as profound. Take this quote: “The lessons here for strategy are profound: machines, at least as we presently understand them, will navigate between goals according to principles that are inherently inhuman.” The back cover promised that the argument would be “provocative.”

Where the book’s forecasts cease to be platitudinous they are unconvincing, such as the expectation that AI will favour the offensive. This expectation is not based on theory so much as selective insights from early modern military philosophy. It sounds too much like Liddell Hart’s airy confidence that the technologies of the 1920s would favour the offensive (before he switched in the 1930s to assert the defensive as dominant).

In practice, AI serves the defensive at least as well as the offensive. Take cyber security, which the book inexplicably ignores: AI is already much superior to human intelligence in spotting cyber intrusions and adapting to them. Offence dominance in cyberspace is not due to AI but to the exposure that most people accept as the price of accessibility: they sacrifice security in order to be socially accessible. Intelligence (whether natural or artificial) affects that choice little.

The book admits few defensive uses of AI, and those it lists turn out to be inaccurate. For instance, it claims that AI will enable decoys, but AI is not necessary for decoys; in any case, AI is just as useful in massing decoys in time and space to overwhelm defences as it is defensively in intercepting attackers.

Book reviewed: “Strategy, Evolution, and War: From Apes to Artificial Intelligence,” authored by Kenneth Payne, published by Georgetown University Press, hardcover, 270 pp. £75.00, ISBN 9781626165793

    Bruce Newsome
    Bruce Newsome, Ph.D. is a lecturer in International Relations at the University of California Berkeley