Cold War culture produced some enduring witticisms, but few are wiser than this gem from Dr. Strangelove: “The whole point of a Doomsday Machine is lost if you keep it a secret.” In a recent
_War on the Rocks_ article, Joshua Rovner portrays cyber capabilities as the anti-Doomsday machine: nations use them in public but see little blowback. Rovner bases his conclusions on empirical work he has done with Tyler Moore, published in a new journal article. I don’t buy the argument (at least as it was written up for _War on the Rocks_). Before I get to why, though,
one important caveat: I applaud Rovner and Moore for trying to bring more data-driven research methods to the intersection of cyber-security and international affairs. Too often, the
research in this area lacks empirical observation. But data, especially in a field that’s mostly hidden from public view and which requires technical knowledge to understand, has its limits.
Incomplete numbers lie.

Rovner’s most important argument is straightforward: “There may be good reasons to eschew offensive cyber operations … but officials need not be deterred by the fear
of extensive and lasting collateral damage.” To get there, he argues that Stuxnet and the operations revealed by Edward Snowden did not cause much blowback, even though each went public in
a way that the United States did not intend. I think there are five reasons to be skeptical of this claim.

First, the officials who authorized Stuxnet were greatly concerned about collateral damage. Indeed, as Rovner acknowledges in his article, Stuxnet had extensive target-verification checks and a highly customized payload — so much so that Richard Clarke noted
that the code looked like “it had been written by a team of Washington lawyers.” It’s a good thing it did, too, because these failsafe measures almost certainly limited the damage done by
the worm when it got out. Stuxnet may be the most public cyber-attack ever — making it ripe for scholarly case studies — but it’s also probably the exception when it comes to preventing
collateral damage. It’s also a worm, meaning it spreads in a much different way than most nation-state malicious code. We shouldn’t generalize from it.

Second, unintended consequences happen
all the time in cyber operations. Not only does collateral damage occur in cyber-attacks, but it even occurs in supposedly benign cyber-espionage — operations not meant to be attacks at all. In 2012, the Internet went out in all of Syria, in what’s reported to be an NSA espionage operation gone wrong. In 2015, an industrial control system in a German steel mill malfunctioned. Some believe a cyber-espionage effort unintentionally triggered the shutdown, though the operation’s motivation is unclear. Malicious
computer code can have direct and unpredictable consequences, a fact that’s hard to appreciate from a scholarly distance.

Third, strategic blowback is real. Rovner suggests that there wasn’t
much of it in response to the NSA activities revealed by Edward Snowden. He mentions that the Chinese stopped their cooperation on cyber issues with the United States, but says it might
have been unrelated. Yet the Chinese specifically cited the NSA activities as the reason for their withdrawal from working groups. Moreover, many nations have imposed data-localization requirements in an attempt to keep data out of the hands of the NSA. Countries like Brazil — which Rovner cites as trying to increase cooperation with the United States on
cyber-security — have started laying their own fiber optic cables in response to the publicized American wiretapping efforts. Even basic cybersecurity cooperation talks have collapsed on the
international diplomatic scene. Imitating the United States, a wide range of nations announced their own cyber-warfare units and commands. One can debate how effective these steps are, but
the notion that foreign governments haven’t reacted strongly to the NSA’s published activities doesn’t hold up.

Private companies have also responded, in ways that likely work to the detriment of effective cyber espionage. There are reports that American companies now voluntarily cooperate much less with the NSA than they used to, fearful of getting caught in a government action
that appears to overreach. Firms now more widely build encryption into software by default, to the great frustration of American law enforcement and intelligence officials. The subtext is
also often explicit, as corporate executives regularly take aim at NSA activities in public. Rovner says that these technical responses make the Internet more “resilient” and, if I
understand him correctly, should therefore mean that offensive cyber operations are less likely to do collateral damage. But encrypting communications is usually a safeguard against
interception and espionage efforts, not destructive cyber-attacks. This is because encryption doesn’t really stop intruders from getting on endpoints (computers). Instead, it just stops them
from reading data in the middle. If you’re on an endpoint, you usually have the same access as a legitimate user. So, encrypted communications make the work of intelligence collection
harder, but shouldn’t be seen as some bulwark against collateral damage.

Fourth, and more generally, it’s hard to determine what is a response to what in the international arena, but even
more so when much of the activity is hidden. Was the group — often linked to Russia — that burned NSA exploits, including the one later used in WannaCry and NotPetya, responding to
something? Maybe the disclosure of the agency’s hacking tools was an answer to some secret U.S. intelligence activity, which itself was a response to the ongoing Russian hacking efforts
against American political institutions. Maybe it was linked to something in Ukraine. Or maybe it was a counterpunch against the Panama Papers published a few months before, which exposed
global financial corruption, and which, in the fever dreams of the Kremlin, might be seen as another American attempt to dislodge Putin from power. Cause and effect are hard to determine.
It’s often the unenviable job of political scientists to provide explanations where explanations are difficult. Nevertheless, this difficulty means that we should be skeptical of claims that
operations do or don’t have blowback. There are empirical things that we can measure, but these often aren’t the right ones. Rovner says that, despite attacks like Stuxnet, “the vast
majority [of people] continued to communicate, share photos and videos, buy Christmas gifts, and so on.” Yet I’d argue that the use of Gmail, Snapchat, and Amazon tells us comparatively
little about international affairs, what governments fear, and what they do to assuage that fear.

Fifth, Rovner — and many others — favors a sharp distinction between offense and defense in
cyber-operations. This is useful for doctrine-writing and for those conceptualizing cyber operations at a general level, but it doesn’t hold up when one looks more deeply at national hacking
efforts. There’s no doubt tools like antivirus software are defensive, but top-tier national cyber-defense doesn’t just rely on Norton. Instead, nations in part defend themselves by hacking
others, gathering intelligence on how their adversaries operate and how to thwart those operations. In my book on this subject, _The Cybersecurity Dilemma_ (which I also discussed on the
War on the Rocks podcast), I relate the story of how the NSA improved its defenses against People’s Liberation Army hackers by penetrating their networks, understanding the tools, and
informing American defenses in advance. To American policymakers, these operations aren’t offensive. They’re certainly not conducted under offensive legal authorities. But the Chinese, had
they discovered the intrusions, would be unlikely to share this view (just as the United States would hardly be sanguine if it found foreign governments poking around classified NSA
networks). For a variety of reasons, including the extensive reconnaissance required to enable high-end cyber-attacks, it’s hard to tell the intruders’ intent. The risk of misinterpretation,
blowback, and escalation is not exclusive to offensive cyber-operations, but exists for some defensive-minded hacking too.

What should we take away from all of this? Simply put, studying
cyber operations is hard. Empirics are important, and I commend Moore and Rovner for their work in being more rigorous than most of those writing on the topic, but detailed technical
knowledge matters too. Forensic analyses of intrusions, for example, are widely overlooked in scholarly literature because they can be specialized and technical, yet they provide great
insight into what nations and other actors actually do with their malicious code. This discussion is important for everyone. As Rovner rightfully suggests, significant national security
choices depend on our answers to these cyber policy questions. Maybe the only thing worse than a Doomsday Machine that’s secret is one that’s misunderstood.

_Ben Buchanan is a Postdoctoral Fellow at Harvard University’s Cybersecurity Project, where he conducts research on the intersection of cybersecurity and statecraft. His first book,_ The Cybersecurity Dilemma, _was published by Oxford University Press in 2017. He has also written on attributing cyber attacks, deterrence in cyber operations, cryptography, election cybersecurity, and machine learning. He received his PhD in War Studies from King’s College London, where he was a Marshall Scholar, and earned master’s and undergraduate degrees from Georgetown University._

Image: Pexels, CC