Deepfakes are ultrarealistic audio and video files that appear to be truthful, accurate recordings but are in fact fraudulent creations. Using artificial intelligence (AI) and machine-learning applications to map and replicate the faces, bodies, and voices of real people, deepfake programs make it possible to create convincingly real videos of anyone doing or saying almost anything.1 Deepfakes are often paired with social proof and association networks on social media to lend them plausibility. Together, these have the potential to fundamentally undermine the credibility of video and audio evidence, which eventually will alter how society constructs its perception of reality.
For the Sea Services and the other branches of the U.S. military, the danger is more immediate. Adversaries can use deepfakes to undermine command and control, discredit audio-video files as legitimate sources of intelligence, and damage the public perception of U.S. forces—potentially creating risks for service members deployed abroad.
A Harmless Entertainment
The technology that enables deepfakes originated in the entertainment industry, as part of major productions that required significant capital and labor. As early as 1994, in the film Forrest Gump, Tom Hanks’ title character was digitally added to historical footage to interact with long-deceased presidents and celebrities.
In recent years, deepfake technology has become more prevalent, cheap, and easy to access. In 2017, FaceApp, a smartphone app from the Russian company Wireless Lab, became one of the first widely available deepfake programs, using AI to make the user’s image appear younger, older, or as the opposite gender.2 In 2019, a Chinese company produced an arguably even more impressive app, Zao, that allows users to replace the face of an actor or actress in major motion pictures with the user’s own face.3
In a humorous (albeit grave) warning to the United States, comedian Jordan Peele produced a deepfake video in which former President Barack Obama appeared to make vulgar statements that he never actually said.4 More recently, a Massachusetts Institute of Technology team working at the Center for Advanced Virtuality produced a video in which President Richard M. Nixon appears to deliver a speech that was written as a contingency in case the 1969 Apollo moon landing had gone wrong.5
Both these presidential deepfake videos were benign and intended to caution the public about the dangers of deepfakes, but deepfake tools also can be used for more nefarious purposes. In 2012, Russians created a malicious deepfake video of U.S. Ambassador to Russia Michael McFaul, a vocal critic of Russian President Vladimir Putin.6 The video falsely suggested that Ambassador McFaul was a pedophile, presumably in an attempt to force the United States to replace him with an ambassador less critical of Putin.
The Nature of Information on Social Media
Social media plays an essential role in the deepfake threat. It not only provides the means of distribution but also enhances deepfakes’ weaponization through the way information is shared. Social media allows influencers to target specific interest groups (e.g., social, demographic, and ideological/party affiliations)—users who have identified themselves with certain trends, dispositions, and attitudes. Consequently, it is easy both to find a network and to tailor a message to it.
Rather than drawing on the credibility of an established institution, much of the information circulated on social media relies on “social proof”—the use of one’s social associations to construe information as fact. Social proof is particularly powerful: psychological research shows that people are statistically more likely to comply with a suggestion or request if they see it supported by someone they associate with.7 The result is that information, true or otherwise, spread through social media networks is more likely to be believed by those receiving it.
Studies suggest that social proof can shape users’ perception, even if the users believe much of the news on social media to be apocryphal in general. One Pew Research Center study found that “about two-thirds of American adults (68%) say they at least occasionally get news on social media. . . . Many of these consumers, however, are skeptical of the information they see there: A majority (57%) say they expect the news they see on social media to be largely inaccurate. Still, most social media news consumers say getting news this way has made little difference in their understanding of current events, and more say it has helped than confused them (36% compared with 15%).”8
In essence, social media has further socialized people’s construct of reality.
Deepfakes and the Nature of Evidence
Deepfakes are particularly dangerous because they use what had previously been accepted as incontrovertible evidence—video and audio recordings—to spread disinformation.9 For example, audio recordings were used as evidence that President Nixon had attempted to obstruct the congressional investigation of the Watergate scandal.10 More recently, the international community accepted a video recording of Saudi Arabian journalist Jamal Khashoggi walking into the Saudi consulate in Turkey as evidence of the Saudi government’s role in the journalist’s murder.11
But sophisticated fake videos have the potential to change what people perceive as evidence. Imagine the trial of an alleged bank robber. Prosecutors could create a video of the defendant breaking into the bank, while the defense could produce a counter-deepfake video “showing” the defendant at an entirely different location at the time of the robbery. Without witnesses or additional evidence, how would the court determine which video represented reality? Software might be able to detect a deepfake, but software can be fooled. When social proof has taken the place of established authority in verifying the authenticity of information, jurors may well arrive in court predisposed to accept one video or the other.
Military Threat Analysis
Several hypothetical scenarios can help conceptualize the dangers of deepfakes for the military:
• A deepfake video is circulated through social media depicting the Surgeon General of the Navy claiming that a particular U.S.-produced COVID-19 vaccine is carcinogenic.
• A human intelligence source provides a U.S. military unit with a deepfake video that depicts an Iranian general briefing a planned nuclear strike against a U.S. target.
• A deepfake video depicting U.S. sailors defiling a mosque is circulated through Muslim religious and social groups coincident with a U.S. Navy port visit in a majority-Muslim country.
To defend against each scenario, the military must attack the means (i.e., the deepfake audio or video files themselves) and protect the ends (i.e., the human minds the deepfake is intended to influence). In essence, the military must first determine that the video is inauthentic and then convince the target audience not to believe what they are seeing.
Attacking Means
There are two primary steps for combating deepfakes. The first is detection. John Jay College of Criminal Justice professors Marie-Helen Maras and Alex Alexandrou say this detection technology—often called digital image forensics—focuses “on detecting low-level alterations in images, such as dropping or duplicating a frame or frames and/or regions, splicing and copy-pasting a part or parts of the original image and placing them in other areas (i.e., a copy-move manipulation).”12
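The specifics vary by tool, but a minimal sketch can illustrate the kind of low-level check the authors describe: for example, flagging duplicated frames in a video. The Python sketch below assumes OpenCV and NumPy are available; the file name and thresholds are hypothetical, and real forensic software is far more sophisticated.

import cv2
import numpy as np

def frame_hashes(video_path, size=16):
    """Compute a coarse average-hash 'fingerprint' for every frame."""
    hashes = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, (size, size))
        hashes.append(small > small.mean())  # boolean grid summarizing the frame
    cap.release()
    return hashes

def find_duplicate_frames(hashes, max_diff=4):
    """Flag pairs of non-adjacent frames that are nearly identical,
    one possible sign of frame duplication or splicing."""
    suspects = []
    for i in range(len(hashes)):
        for j in range(i + 2, len(hashes)):  # skip immediately adjacent frames
            if np.count_nonzero(hashes[i] != hashes[j]) <= max_diff:
                suspects.append((i, j))
    return suspects

# Hypothetical usage:
# print(find_duplicate_frames(frame_hashes("press_briefing.mp4")))

A check like this catches only the crudest manipulations; production forensic tools also examine compression artifacts, lighting, and sensor noise.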
The second step is developing and tracking digital fingerprints that enable users to trace the origins of an image to its source.13 In 2021, Facebook—in cooperation with Michigan State University—developed artificial intelligence software that the company says can both detect deepfakes and trace them back to their source.14
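One simplified way to picture such fingerprinting is an exact-match registry: hash an authentic file when it is released, then compare later copies against that record. The Python sketch below, with hypothetical file names, only illustrates the concept; real provenance systems, including the blockchain-based approach in note 13 and Facebook’s attribution research, must survive re-encoding and cropping, which a plain cryptographic hash cannot.

import hashlib

registry = {}  # fingerprint -> description of the authentic source

def register_original(path, source):
    """Record the SHA-256 fingerprint of an authentic file at release time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    registry[digest] = source
    return digest

def trace(path):
    """Return the registered source if a file matches a known original."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return registry.get(digest, "no registered source: unverified or altered")

# Hypothetical usage:
# register_original("official_statement.mp4", "DoD public affairs release")
# print(trace("recirculated_copy.mp4"))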
Such an advance is welcome, but even if the software works perfectly, two major challenges can quickly diminish its effectiveness. First, deepfake technology will continue to advance; detection software must do the same to remain effective. Second, the software’s ability to inform the public about what is fact and what is fake rests on the assumption that targeted audiences inherently trust both the detection software and the organization (e.g., Facebook) implementing it. Ultimately, trust will be the most valuable currency in the battle against deepfakes.
Depending on Ignorance
Detection and tracing are only the first steps. The goal in the U.S. military’s fight against deepfakes is to prevent targeted audiences from responding to them as if they were real. Accomplishing this will require educating targeted audiences and earning their trust, so they understand both that deepfakes exist and that the institutions discrediting them can be trusted.
The military should begin by educating its own force. An internal educational program does not have to be extensive. It could be as simple as circulating a one- to two-minute video throughout the Department of Defense that demonstrates how convincingly realistic deepfakes can be and closes by stressing the importance of trusting only official information sources.
Convincing foreign civilian populations not to react to adversary deepfakes will be a challenge. Such populations are likely to favor their own domestic media sources and may be inclined toward basic skepticism about the U.S. military. Collaboration with the private sector will therefore be essential. The good news is that many tech companies that control the platforms by which deepfakes would be distributed are already taking measures to stop their spread. However, this is by no means the end of the fight, as deepfake technology will continue to adapt to avoid detection.
The 19th-century French poet Charles Baudelaire wrote, “The devil’s finest trick is to persuade you that he does not exist.” At present, large swaths of the world’s population are unaware that deepfakes exist. To remain ahead of its adversaries, the U.S. military must ensure as many people as possible are educated about the deepfake threat. DoD’s efforts should begin with the education of its service members and progress outward from there. Service members and ships’ crews must be prepared to respond if deepfake-related threats materialize unexpectedly. In the fight against deepfakes, ignorance is the greatest weakness, and education is the greatest defense.
1. Marie-Helen Maras and Alex Alexandrou, “Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos,” The International Journal of Evidence & Proof 23, no. 3 (1 July 2019): 255–62.
2. Kate O’Flaherty, “The FBI Investigated FaceApp. Here’s What It Found,” Forbes, 3 December 2019.
3. Marie C. Baca, “Viral Chinese App Zao Puts Your Face in Place of Leonardo DiCaprio’s in ‘Deepfake’ Videos,” The Washington Post, 3 September 2019.
4. Ian Hislop, “How the Obama/Jordan Peele Deepfake Actually Works,” BBC.
5. Suzanne Day, “MIT Art Installation Aims to Empower a More Discerning Public,” MIT News, 25 November 2019.
6. Deb Reichmann, “I Never Said That! High-Tech Deception of ‘Deepfake’ Videos,” The Seattle Times, 1 July 2018.
7. Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: Collins, 2006), 20.
8. Elisa Shearer and Katerina Eva Matsa, “News Use Across Social Media Platforms 2018,” Pew Research Center’s Journalism Project (blog), 10 September 2018.
9. Maras and Alexandrou, “Determining Authenticity of Video Evidence.”
10. Marisa Iati, “Inside the Supreme Court Ruling That Made Nixon Turn over His Watergate Tapes,” The Washington Post, 3 October 2019.
11. Julian E. Barnes, Eric Schmitt, and David D. Kirkpatrick, “‘Tell Your Boss’: Recording Is Seen to Link Saudi Crown Prince More Strongly to Khashoggi Killing,” The New York Times, 12 November 2018.
12. Maras and Alexandrou, “Determining Authenticity of Video Evidence.”
13. Haya R. Hasan and Khaled Salah, “Combating Deepfake Videos Using Blockchain and Smart Contracts,” IEEE Access 7 (2019): 41596–41606.
14. Jaclyn Diaz, “Facebook Researchers Say They Can Detect Deepfakes and Where They Came From,” NPR, 17 June 2021.