Cyberwarfare, Part II: Secret Influence Campaigns
Last week, in Part I of our discussion of cyberwarfare, we talked about some of the technical cyberattacks that states can use to weaken or destroy their adversaries by penetrating or damaging networks, computer systems, and data assets. In Part II, we’ll consider how states can use clandestine social media campaigns to achieve their geopolitical ends at others’ expense.
As we discussed previously, cyberattacks offer the attacker many advantages. First, they can potentially achieve sizable benefits at low cost, with minimal military risk. More important, because the source of an attack is hard to attribute with confidence and its effects are hard to measure, the debate over who was responsible and what actually happened can itself divide the targeted nation. That’s exactly what the attacker wants.
We’ll discuss the Russian efforts to disrupt the 2016 U.S. election as one example, not because it’s the only example of a state using social media to intervene in another’s affairs: it certainly isn’t. A report to the European Parliament recently noted “growing evidence that governments and political actors across the political spectrum… are embracing these techniques to manipulate political conversations.” Nor does our choice of this example mean we’re taking a position on its effectiveness (we aren’t arguing here that the Russians elected Donald Trump).
Rather, we’ll use this example because it’s especially well documented. Materials like the “Mueller Report,” the U.S. indictments of the Russian Internet Research Agency (IRA), Oxford University’s The IRA, Social Media and Political Polarization in the United States, 2012-2018, and New Knowledge’s The Tactics & Tropes of the Internet Research Agency show how any determined state can use social media to undermine an adversary by manipulating and inflaming its internal divisions.
Start with research
If that’s your aim, you start with research. According to U.S. prosecutors, members of Russia’s IRA began systematically studying political groups on U.S. social media sites in 2014 to understand their performance and user engagement, much as a marketer would. Posing as Americans, they reached out to grassroots activists to learn more about their motivations and strategies. Recognizing that sloppy work could give them away, the IRA offered extensive feedback to its employees to help them improve their authenticity and effectiveness.
Using hundreds of fraudulent accounts, the IRA created fictitious U.S. personas and began trying to develop them into “public opinion leaders.” In some cases, these fake influencers gained hundreds of thousands of Facebook followers; overall, between 2015 and 2017, more than 30 million users shared their posts. On platforms such as Twitter, their reach was boosted by the extensive use of automated bots.
Fake activism
U.S. prosecutors say the IRA used this influence to “create political intensity” by supporting radical groups on both sides, often in direct opposition to each other. In one case, fake “activists” organized opposing demonstrations across the street from each other: one from a Russian-organized anti-Muslim Facebook group called “Heart of Texas,” and another from a Russian-organized “United Muslims of America.”
Early in 2016, according to the Mueller Report, the Russian campaign “evolved from a generalized program [to] undermine the U.S. electoral system, to a targeted operation that… favored candidate Trump and disparaged candidate Clinton.” The IRA put its “organic” influencers to work, while also purchasing targeted social media advertising in the names of U.S. individuals and groups. Concurrently, says Mueller, Russian intelligence began actively hacking the Democratic Party and, in coordination with others, timed damaging releases of hacked materials through the summer and fall.
Oxford University’s Computational Propaganda Research Project found that the IRA’s social media efforts prior to the 2016 election touched many audiences but focused especially on two groups: conservative voters and African Americans. The Russians’ key goals included:
- “campaigning for African American voters to boycott elections or follow the wrong voting procedures in 2016, and more recently for Mexican American and Hispanic voters to distrust US institutions;
- “encouraging extreme right-wing voters to be more confrontational; and
- “spreading sensationalist, conspiratorial, and other forms of junk political news and misinformation to voters across the political spectrum.”
Clearly, no matter who you are, if you want to weaken a geopolitical adversary, it helps to convince its citizens to hate each other and their institutions. (That’s little different from the propaganda Nazi broadcasters spread to Allied troops in the 1940s, except that social media enables unprecedented reach, personalization, and sophistication.) And, of course, as with any propaganda, it also helps if you can start with a grain of truth, even if your goal is to make it impossible for your audiences to believe any source of truth.
What’s next
Since the 2016 election, leading social media platforms have become smarter about recognizing manipulation. But there’s no reason foreign influencers can’t get smarter, too.
For example, in India’s recent election, domestic WhatsApp groups organized around caste or religion were apparently used to spread divisive content, shielded from scrutiny by WhatsApp’s end-to-end encryption. It’s easy to envision foreigners using the same techniques elsewhere. Likewise, “deepfake” media clips continue to improve, thanks to advances in artificial intelligence. There’s debate about how widely deepfakes will be used to tell political lies, given that people can be fooled so easily by lower-tech means. But it seems likely that someone will soon try.
Citizens have been warned. If they value their democracies, they’ll have to get a lot smarter about what they believe, and a lot more unified against unfair, hidden interference from outside, whatever the source.