We are exposed. In September 2020, FBI Director Christopher Wray testified to Congress: “If you are an American adult, it is more likely than not that China has stolen your personal data.”1 The scale of the theft—and number of affected Americans—is staggering: 22.1 million from the 2015 U.S. Office of Personnel Management (OPM) hack; 78 million from the 2014 hack of the insurance provider Anthem; and 147 million from the 2017 hack of Equifax, which alone included “nearly half of the American population and most American adults.”2
However, these thefts are only part of the concern. Americans consciously share extensive personal information online and unwittingly generate volumes more, collected from their digital wake. In aggregate, this data makes service members increasingly vulnerable. Adversaries will weaponize legally available technology originally developed for the online advertising industry and employ it against the United States, with national security implications.
User Data
While traditional print, radio, and television focused on managed one-way mass communication, internet-enabled communications (e.g., social media, blogs, and websites) let users generate content quickly and at scale, creating a “many-to-many” democratized distribution model. Users are not only consumers but also producers of content shared through posts, tweets, and uploaded photos and video, much of which is personal. User account registrations may contain name, age, gender, marital status, political affiliation, sexual preference, and even education.
Users now play an important role in mediating content as they interact with it. This includes the ability to “like,” “share,” or “retweet,” feeding metrics about that content’s interest value. A prime example is the popular website Reddit. Reddit users post news and self-generated content into topical communities; other users can comment and vote it up or down, which increases or decreases its ranking and visibility.3 This interaction previously helped propagate Russian disinformation among voters during U.S. elections, and such manipulation will likely increase in the wake of the real-world effects Reddit users achieved by manipulating financial markets in the GameStop saga.4
All this generation, accessing, and interaction leaves a digital trail, with data created and captured (for the most part legally) unbeknownst to the person navigating the web. This includes unique identifying information (e.g., IP and MAC addresses), peripheral information (e.g., location, device model, operating system, and browser), and raw behavioral data (web history, time spent on each site, and even mouse movements that reveal where a cursor hovers and for how long).5 While collectively this yields a massive amount of data, the data alone is of limited value without tools to process, make sense of, and employ it. Guided by a business model based on maximizing advertising revenue through user engagement (screentime), technology companies have created powerful tools to monetize user data.
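To make the collection concrete, the sketch below shows the kind of event record a tracking script might assemble on a single page view. It is a minimal illustration in Python; the field names are hypothetical, but the categories mirror the identifying, peripheral, and behavioral data described above.

```python
from dataclasses import dataclass, field

@dataclass
class PageViewEvent:
    """One visitor's interaction with one page, as a tracker might record it.
    Field names are illustrative, not drawn from any real analytics product."""
    ip_address: str                       # unique identifying information
    device_model: str                     # peripheral information
    operating_system: str
    browser: str
    url: str                              # raw behavioral data
    seconds_on_page: float
    cursor_hovers: list[tuple[str, float]] = field(default_factory=list)  # (element, seconds)

# A single page view already combines identity, device, and behavior:
event = PageViewEvent(
    ip_address="203.0.113.7",             # documentation-range address, not a real host
    device_model="Pixel 7",
    operating_system="Android 14",
    browser="Chrome 120",
    url="https://example.com/fitness-tips",
    seconds_on_page=94.2,
    cursor_hovers=[("ad-running-shoes", 6.5), ("article-share", 1.2)],
)
```

Multiply one such record by every page view across months of browsing, and the scale of the resulting data becomes clear.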
Microtargeting
The continuously generated user data yields specific user profiles.6 Web tracking software and cookies enable the collection of extensive information, which in aggregate sheds light on the behavior, routines, interests, preferences, and beliefs of the user.7 This information then enters a vast “digital advertising ecosystem,” where it is sold, merged with other personal information, and continually aggregated and resold by data brokers.8 Additional tracking algorithms make connections to infer unique users even as they transition across devices and platforms.
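The cross-device inference step can be illustrated with a toy record-linkage routine: whenever two broker records share any strong identifier (a hashed email address, a device ID), they are merged into one inferred user. This is a deliberately simplified sketch; real broker systems rely on far more sophisticated probabilistic matching, and every identifier below is invented.

```python
from collections import defaultdict

def link_records(records: list[dict]) -> list[set[int]]:
    """Group broker records that share any strong identifier into
    inferred unique users (toy deterministic record linkage)."""
    parent = list(range(len(records)))      # union-find over record indices

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    seen = defaultdict(list)                # (field, value) -> record indices
    for idx, rec in enumerate(records):
        for key in ("email_hash", "device_id", "ad_id"):
            if rec.get(key):
                seen[(key, rec[key])].append(idx)
    for indices in seen.values():           # merge records sharing an identifier
        for other in indices[1:]:
            parent[find(indices[0])] = find(other)

    groups = defaultdict(set)
    for idx in range(len(records)):
        groups[find(idx)].add(idx)
    return list(groups.values())

# Three records from two "brokers," covering one person's phone and laptop:
records = [
    {"email_hash": "a1b2", "device_id": "phone-123", "interest": "fitness"},
    {"device_id": "phone-123", "ad_id": "adid-9"},
    {"email_hash": "a1b2", "device_id": "laptop-456", "interest": "politics"},
]
print(link_records(records))  # one group of all three: a single inferred user
```

Each merged group becomes a richer profile than any single broker held alone.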
This aggregation process has enabled “microtargeting,” a highly refined “audience segmentation” tool that lets advertisers precisely filter data-derived user profiles to deliver tailored content to specific audiences. Microtargeting underlies the eerie personal specificity and relevance of digital advertising, and it gained wider prominence during the 2016 U.S. election. After modifying an existing Facebook app that asked users to answer personality-related questions, Cambridge Analytica deceptively collected identifying information and “likes” from the app’s users and their friends. This data fed an algorithm that assigned personality scores to the users and their friends, matched those scores to voter records, and ultimately profiled voters for targeted political content delivered as advertisements.9 While the concept of using “psychographic profiles” to tailor messages for individual personalities and interests is not necessarily new, this episode brought it to public attention.10
The technology associated with microtargeting has advanced to “sniper-targeting,” with content tailored for extremely small predetermined groups.11 For example, Facebook, with more than 1.5 billion users, allows advertisers to create custom audiences as small as 20 people.12 Regardless of content, the recipients do not know the messages are tailored specifically for them based on their predicted likelihood to believe and react to them.
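Conceptually, an audience builder is just a filter over aggregated profiles, with a minimum audience size as the only structural brake. The hypothetical sketch below illustrates that logic; the 20-profile floor echoes the custom-audience minimum cited above, but the code does not represent any platform’s actual implementation.

```python
MIN_AUDIENCE_SIZE = 20  # the custom-audience floor cited above

def build_audience(profiles: list[dict], **criteria) -> list[dict]:
    """Return every profile matching all criteria, e.g.
    build_audience(profiles, region="Tartu", interest="fitness").
    Hypothetical illustration of audience segmentation, not a real ad API."""
    matches = [
        profile for profile in profiles
        if all(profile.get(key) == value for key, value in criteria.items())
    ]
    # A minimum size keeps advertisers from singling out one person,
    # but a 20-person floor still allows extremely narrow "sniper" targeting.
    return matches if len(matches) >= MIN_AUDIENCE_SIZE else []
```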
Americans are particularly vulnerable to this threat. By contrast, China and Russia take extensive measures to strictly control internet access and content within their borders. Even the European Union has the General Data Protection Regulation, which limits companies’ ability to collect and use personal data.13 The United States has less regulation (with the exception of California), and user data is largely collected, used, and sold with little user awareness.14 As such, adversaries do not need to exert much effort to develop collection capabilities or deploy teams to engage in close surveillance; they can acquire this information legally and cheaply from afar.
Operations Security
In 2018, researchers from the NATO Strategic Communications Center of Excellence conducted an experiment to determine what could be revealed about a large field exercise and its participants using only open source data, and whether that data could be used to “influence the participants’ behaviors against their given orders.”15 Before the exercise, the researchers established multiple social media accounts, message themes, and lines of persuasion.16
Once the exercise was underway, the researchers used general internet searches to reveal background information about it from official sources, news reports, and social media posts. They then used microtargeting approaches (including extensive employment of Facebook advertising) to identify attributes and profiles of likely exercise participants and invite them to join “honeypot” pages and user groups related to the exercise.17 On those forums, the researchers posed as soldiers and directly engaged participants via group discussions to elicit information.18
The team was able to identify individual members taking part in the exercise, map out entire units from the identification of a single member using Facebook’s “Suggested Friends” feature, determine the exact locations of several battalions, and gain knowledge of troop movements and key operational exercise dates.19 The researchers then applied social engineering techniques to elicit more personal information, including phone numbers and email addresses. Eventually, all participants targeted for social engineering provided the requested pictures of equipment.20 The researchers were able to “instill undesirable behavior” in some soldiers, including not fulfilling duties, with two soldiers lured from their posts to meet a fictitious woman from Tinder.21
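The unit-mapping result is, at bottom, a graph traversal: start from one identified soldier and expand outward through the platform’s recommendations. A minimal sketch follows; suggested_friends is a hypothetical stand-in for the “Suggested Friends” feature, whose actual algorithm is proprietary.

```python
from collections import deque

def map_unit(seed: str, suggested_friends, max_depth: int = 2) -> set[str]:
    """Breadth-first expansion from one identified member through friend
    suggestions. `suggested_friends` is a hypothetical callable returning
    recommended profiles; a small depth keeps the search near the seed,
    where fellow unit members are likely to cluster."""
    mapped = {seed}
    queue = deque([(seed, 0)])
    while queue:
        profile, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for friend in suggested_friends(profile):
            if friend not in mapped:
                mapped.add(friend)
                queue.append((friend, depth + 1))
    return mapped
```

With a single confirmed member as the seed, a couple of hops of recommendations can surface much of a unit’s roster, consistent with the researchers’ account.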
An important point is that the researchers did not individually seek out these soldiers. With very limited resources (about $60), they used publicly available data and microtargeting tools (e.g., Facebook advertising) to induce the soldiers to come to them as self-nominating targets for exploitation.
Counterintelligence
Intelligence collectors often target people for recruitment based on money, ideology, compromise, or ego (MICE).22 Following the MICE paradigm, adversaries could combine the growing volumes of personal data with improving aggregation, analytic, and microtargeting tools to find service members with financial difficulties, with affiliations or loyalties to a particular cause, who are having affairs, or who are disgruntled. While the NATO experiment did not segment for these traits, it did encounter compromising material. One of the researchers remarked, “We managed to find quite a lot of data on individual people, which would include sensitive information . . . like a serviceman having a wife and also being on dating apps,” hinting that this area is ripe for exploitation.23
When one factors in data lost through major hacking episodes, the picture becomes more concerning. For example, Office of Personnel Management data includes security background investigation files used to assess a person’s reliability and vulnerability to compromise; this is exactly what adversary intelligence is trying to assess, too.
Adversaries could also use personal information available online to coerce service members and threaten their families. In 2017, Russia reportedly targeted Estonian soldiers by hacking their phones.24 A U.S. service member attending a sporting event in Latvia and another on a train in Poland were approached by strangers who revealed personal details about their families.25 In eastern Ukraine, Russian forces sent frontline soldiers threatening texts, then texted their families, “Your son has been killed in action,” prompting a flurry of texts and phone calls between soldiers and home, and then shelled the geolocated positions of the troops’ cell phones.26 In 2015, the Islamic State published a hit list containing profiles of U.S. service members; while the group claimed to have hacked military servers, it likely assembled the list from publicly available content.27 However, the most concerning prospect is the weaponization of personal information for military deception.
Military Deception
In simplified terms, military deception aims to “hide the real” and “show the fake” so an adversary decision maker will “see” certain activity, “think” the situation is unfolding in a particular way, and “do” a specific action in response.28 “Hiding the real” may involve reducing observable signatures associated with particular activities to conceal a force’s size, composition, location, or intentions from adversary intelligence. “Showing the fake” traditionally relies on staging actions that one wants to be observed so adversaries draw incorrect conclusions.
A person’s background and cognitive processes shape how they perceive and make sense of information.29 To anticipate a decision maker’s reactions, deception planners study their target extensively.30 They examine the target’s background, experiences, personality, interests, beliefs, and decision-making style to create a profile. This insight lets the planner infer the decision maker’s cognitive biases (systematic errors in information processing and analysis) and schemas (mental models, built from experience, for identifying and matching patterns).31
Collectively, these processes allow a person’s brain to take shortcuts in perceiving information, reaching conclusions, anticipating subsequent actions, and selecting a course of action, freeing brainpower for cognitive tasks that require deeper deliberation. The adversary deception planner presents exquisitely tailored information that the decision maker will easily assimilate; because it matches patterns the target recognizes, it bypasses the tendency for deep thought.32 While painstaking, this effort is the basis for adversary doctrine.
Adversary Doctrine
The Russian concept of “reflexive control” is a scientific approach that involves setting specific background conditions and then forcing a decision point for an adversary. The background conditions appear to narrow the decision space, herding an adversary decision maker toward adopting an action that one desires them to take.33
Reflexive control is so deeply ingrained in Russian doctrine that its actions are integrated and synchronized with traditional combat task planning and execution in a concept called “double-track control.”34 Reflexive control begins before combat operations and directly affects force allocation decisions, and the supported/supporting relationships constantly alternate between units executing combat and those managing reflexive control.35 Information packages are tailored in content and delivery method and are sent to the adversary through means and formats one knows the adversary can receive, based on its sensors, equipment, and data processing capabilities.36 Knowledge of the adversary decision maker is key to effectiveness.
The Chinese concept of “stratagem” is based on the idea that perceptions are the output of subjective processes and therefore are susceptible to manipulation. Chinese doctrine has merged stratagem with new information technology to create information warfare stratagems to “seize and maintain information supremacy.”37 This involves offensive and defensive operations, such as deceiving adversary intelligence, surveillance, and reconnaissance (ISR) and communications; corrupting information processing; and interfering with command-and-control systems.38 Mechanisms include “thought directing,” “intimidation through momentum-building” (psychologically targeting the adversary by presenting the appearance that China is on the verge of victory), and “contaminating” information flows to hide reality, achieve surprise, and drive adversary commanders toward bad decisions.39 Like reflexive control, understanding the individual decision maker’s cognition is key to successful execution.40
Going Forward
U.S. leaders must admit this is a national security problem. While repeated data breaches have left service members exposed, the breaches have been treated as financial and law enforcement issues. Treating them as national security matters would provide justification for increased spending on system and database cybersecurity.
Data-generating and data-sharing behaviors must be altered to reduce adversary opportunities to manipulate or coerce. Key decision makers should stop posting biographical details on organizational websites. For this group, official information risk assessments could run in parallel with the continual assessments associated with their security clearances. While many other service members hold dismissive “I’m too junior to target” attitudes, maturing artificial intelligence and machine learning (AI/ML) capabilities will enable data aggregation, analytics, and targeting at scale, drastically decreasing the effort required to pursue lower-ranking service members.
All service members need to be aware of and manage their aggregate personal digital footprints (information currently visible, posted over time, and lost through breaches) and the associated vulnerabilities. Future training must highlight when advancing technology enables new threats, such as the extraction of biometric signatures from high-resolution photos posted online.41 Training must go beyond protecting data from financial fraud and identity theft and address how data can also be used to shape and manipulate perception. While service members will not stop sharing information online, minimizing the unwitting creation of new user data could reduce risk; just as service members qualify for free home-use antivirus software to increase cybersecurity, extending free “managed-attribution” and virtual private network software could limit the unconscious generation of personal data.
In the future, the integration of AI/ML into decision systems might partially mitigate the threat; if assisted by algorithms, a decision maker’s cognition could be much harder to anticipate and influence. Unfortunately, this raises new concerns. Analogous to a human’s background, an algorithm’s development and training data could reveal the insights necessary to manipulate its future behavior. The U.S. military will need to start protecting the algorithm’s “personal information,” too.
1. Christopher Wray, “Statement of Christopher A. Wray, Director, Federal Bureau of Investigation Before the Committee on Homeland Security, U.S. House of Representatives, World Wide Threats Hearing,” 17 September 2020.
2. Wray, “Statement of Christopher A. Wray.”
3. Reddit website, www.reddit.com.
4. U.S. Senate Select Committee on Intelligence, Report on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, vol. 2: Russia’s Use of Social Media, October 2019; Allison Morrow, “Everything You Need to Know about How a Reddit Group Blew Up GameStop’s Stock,” CNN, 28 January 2021.
5. Dipayan Ghosh and Ben Scott, “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet,” policy paper (Washington, DC: New America, 23 January 2018), 5.
6. Ghosh and Scott, “Digital Deceit.”
7. Ghosh and Scott.
8. Ghosh and Scott.
9. Federal Trade Commission, “FTC Sues Cambridge Analytica, Settles with Former CEO and App Developer: FTC Alleges They Deceived Facebook Users about Data Collection,” FTC news release, 24 July 2019.
10. Sebastian Bay and Nora Biteniece, “The Current Digital Arena and Its Risks to Serving Military Personnel,” NATO StratCom Center of Excellence, January 2019, 7–8; Timothy Revell, “How to Turn Facebook Into a Weaponised AI Propaganda Machine,” New Scientist, 28 July 2017.
11. Marc Faddoul, Rohan Kapuria, and Lily Lin, “Sniper Ad Targeting,” UC Berkeley School of Information, 10 May 2019.
12. Faddoul, Kapuria, and Lin, “Sniper Ad Targeting.”
13. “Europe’s Tough New Data-Protection Law,” The Economist, 5 April 2018.
14. Carole Piovesan, “How Privacy Laws Are Changing to Protect Personal Information,” Forbes, 5 April 2019.
15. Issie Lapowsky, “NATO Group Catfished Soldiers to Prove a Point About Privacy,” Wired, 18 February 2019; Bay and Biteniece, “The Current Digital Arena.”
16. Bay and Biteniece, “The Current Digital Arena.”
17. Bay and Biteniece.
18. Bay and Biteniece.
19. Bay and Biteniece.
20. Bay and Biteniece.
21. Bay and Biteniece.
22. Randy Burkett, “An Alternative Framework for Agent Recruitment: From MICE to RASCLS,” Studies in Intelligence 57, no. 1 (Extracts, March 2013).
23. Lapowsky, “NATO Group Catfished Soldiers to Prove a Point About Privacy.”
24. Thomas Grove and Drew Hinshaw, “Russia Targets NATO Soldier Smartphones, Western Officials Say,” The Wall Street Journal, 4 October 2017.
25. Ashley Collman, “Russian Agents Using Sophisticated Drones Are ‘Hacking Into NATO Soldiers’ Cellphones in the Baltics to Steal Personal Information, Track Troop Movements and Intimidate Them,” The Daily Mail, 6 October 2017.
26. Liam Collins, “Russia Gives Lessons in Electronic Warfare,” Association of the United States Army, 26 July 2018.
27. Daniel Costa-Roberts, “ISIS Publishes Online Hit List of US Service Members,” PBS, 22 March 2015.
28. Edwin Grohe, “Military Deception: Transparency in the Information Age,” DTIC, 11 June 2007.
29. Richards Heuer Jr., Psychology of Intelligence Analysis (CIA Center for the Study of Intelligence, 1999).
30. Scott Gerwehr and Russell Glenn, Unweaving the Web: Deception and Adaptation in Future Urban Operations, ch. 4 (Santa Monica, CA: RAND Corporation, 2003).
31. Heuer, Psychology of Intelligence Analysis.
32. Heuer.
33. V. G. Kazakov, V. F. Lazukin, and A. N. Kiryushin, “Double-Track Control over Combat Actions,” Military Thought 23, no. 2 (2014): 136–44 (original in Russian; English translation).
34. Kazakov, Lazukin, and Kiryushin, “Double-Track Control over Combat Actions”; Ronald Sprang, “Russian Operational Art, New Type Warfare, and Reflexive Control,” Small Wars Journal, 4 September 2018.
35. Kazakov, Lazukin, and Kiryushin, “Double-Track Control over Combat Actions.”
36. Tim Thomas, Decoding the Virtual Dragon (Fort Leavenworth, KS: U.S. Army Foreign Military Studies Office, 2007).
37. Thomas, Decoding the Virtual Dragon.
38. Thomas.
39. Thomas.
40. Thomas.
41. Mike Elgan, “5 Shocking New Threats to Your Personal Data,” Computerworld, 4 February 2017.