The final scene in Christopher Nolan’s 2010 science fiction thriller Inception focuses for a few tense seconds on a small spinning top—a crucial plot point. If the top wobbles and falls, it proves the protagonist’s perception of reality is, in fact, real. If it spins indefinitely, it means he is still in a dream-induced virtual reality. In other words, the world around him (and the seemingly happy ending) is a lie.
Does it fall? Nolan leaves that for the viewer to decide. What matters for the story is that the “top test” provides a way to distinguish truth from ultrarealistic fiction. Our reality, unfortunately, lacks such a convenient device. Yet the need to validate the authenticity of what we read, see, and hear extends beyond movie plots. Deepfake imagery, accusations of fake news, spoofing, catfishing, and viral hoaxes all contribute to the challenge of confirming legitimacy and truth with confidence. As concerning as the problem already is, it is quickly heading from bad to worse. One key reason is that the recent proliferation of publicly available generative artificial intelligence (AI) tools has opened a wealth of possibilities for those seeking to wield malign influence through disinformation.
Disinformation was already a major concern before generative AI went mainstream. Domestic agitators and foreign adversaries have wielded it to foment chaos in the U.S. democratic process and weaken confidence in public institutions. The Promethean potential of generative AI adds a disruptive new variable. Experts predict that generative AI will be used to create a “tsunami of disinformation” during the 2024 presidential election.1 While Russia and China remain the usual suspects for disinformation, AI empowers nonstate actors such as transnational criminal organizations, extremist groups, and even individuals with outsized disinformation-generation capability.
For the military services, disinformation poses an acute threat. Campaigns to undermine confidence or provoke controversy can be customized to target specific organizations, missions, or units. Examining the threat posed by the convergence of AI and disinformation through the lens of my own service, the Coast Guard, I reached three conclusions. First, generative AI makes designing an anti–Coast Guard disinformation campaign and mass-producing content incredibly fast and easy, a feature geopolitical rivals and bad actors will likely exploit. Second, the Coast Guard is highly vulnerable to targeted disinformation campaigns because of its dual law-enforcement and military identity and the nature of some of its missions. Third, mitigating the disinformation threat requires overcoming significant political and technical obstacles, but it absolutely must be done.
A disinformation tsunami is coming, and the Coast Guard needs to set the heavy weather bill.
Background
Generative AI and disinformation are frequently in the news, but they are not always well-defined. The following are brief descriptions of the terms.
Generative AI is a subfield of artificial intelligence that uses computer algorithms to create human-like content, including text, images, graphics, music, or code. It learns patterns by analyzing huge amounts of training data, then uses those patterns to generate new content that shares characteristics with the input data. Notable examples of generative AI tools include large language models such as OpenAI’s ChatGPT and Google’s Bard, image-generation programs such as DALL-E and Midjourney, and coding tools such as GitHub Copilot.
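To make the mechanics concrete, the short sketch below shows how little code it takes to produce fluent text with an openly available model. It is a minimal illustration only, assuming the open-source Hugging Face transformers library and the small GPT-2 model; neither is referenced elsewhere in this article, and larger commercial models produce far more convincing output.

```python
# Minimal text-generation sketch using the open-source Hugging Face
# "transformers" library (pip install transformers). The model and prompt
# are illustrative assumptions, not tools discussed in this article.
from transformers import pipeline

# Load a small, publicly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

# A single prompt returns fluent, human-like text in seconds.
result = generator(
    "The future of maritime security depends on",
    max_new_tokens=60,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```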
Disinformation refers to the spread of false information to influence others, rouse them to action, and cause harm. It is distinct from misinformation, which can also be harmful but lacks a deliberate intent to deceive. A disinformation campaign is an organized effort to wield false information to achieve a specific goal. Typically, it begins with a plan and target audience, generates false or misleading media, seeds it across social media platforms, tracks effects, and adjusts as necessary.2 Common tactics include creating false personas, websites, scientific research studies, and imagery; generating conspiracy theories; flooding social media with artificial content; crafting memes and viral slogans; and exploiting politically divisive issues. In the article “LikeWar: The Weaponization of Social Media,” Commander Doyle Hodges and futurists P. W. Singer and Emerson Brooking describe how bad actors can use social media to manipulate the accepted narrative of truth with a goal of fomenting confusion, chaos, and distrust by “spinning up an audience to chase myths, believe in fantasies, and listen to faux ‘experts’ until they simply tune out.”3 In this virtual battleground for the narrative, generative AI is a potent weapon.
Weapon of Mass Creation
A classic Far Side cartoon captioned “God makes the snake” depicts Gary Larson’s impression of the Creator twisting clay between his hands and exclaiming “Boy, these things are a cinch!”
Generative AI inspires a similar feeling. It allows users to craft complex, convincing media content as fast as they can enter prompts. This includes not only blog posts, tweets, news, and magazine articles, but also technical reports for scientific journals, complete with seemingly authentic data and citations. The key word is “seemingly,” because users can engineer prompts to get the output to say almost anything, even something false, yet make it appear convincing enough that few will detect the artifice. Even other AI programs struggle to determine whether something was created by AI or a human. Generative AI programs such as ChatGPT and its successor versions have not only famously aced the Turing test (meaning they can pass for human in conversation), but also excelled in closely scrutinized, intricate written exams such as the Uniform Bar Exam.4 Early experiments with AI-generated media content to disinform have proven it can influence people as well as—and in some cases even better than—human-created content.5
The volume of near-instantaneous media generation made possible by generative AI can be compared with the massively increased volume of fire that Hiram Maxim’s machine gun introduced to late 19th-century battlefields. Just as the machine gun radically changed battlefield tactics, generative AI will reshape media-based influence contests. Unlike the machine gun, however, AI tilts the advantage in favor of the attacker rather than the defender. After all, as the saying goes, a lie travels around the world before the truth gets its boots on.
Red Teaming the Threat to Brand and Mission
What makes the convergence of AI and disinformation so dangerous is that adversaries can exploit it to target a critical vulnerability—trust.
A breach of trust can harm any organization, but the Coast Guard is particularly exposed. Without the continued trust of the U.S. public and international partners, the Coast Guard cannot fulfill its missions effectively. Trust is fragile, especially for politically sensitive missions. To examine how the erosion of trust (and, by extension, public support) could undermine a core Coast Guard mission, consider counterdrug operations.
Counterdrug operations are the main effort in the Coast Guard’s maritime law enforcement mission. The Coast Guard routinely deploys cutters, aircraft, and unmanned systems to the main narcotrafficking corridors between South and Central America and the United States. It is by far the most effective federal agency in this mission. Between 2016 and 2020, it interdicted 207.9 metric tons of cocaine, more than half the total amount seized by all federal agencies, valued at an estimated $6.14 billion.6
Success in counterdrug operations relies on maintaining both domestic public support and international cooperation. The Coast Guard’s vast authorities and scope of operational capabilities derive from an expansive legal framework, which includes the Maritime Drug Law Enforcement Act (MDLEA), the Drug Trafficking Vessel Interdiction Act, and more than 40 bilateral agreements with partner nations. Undermining that framework would jeopardize the Coast Guard’s ability to continue performing the mission. In May 2022, for example, a First Circuit Court of Appeals ruling called a provision of the MDLEA into question, signaling the law’s susceptibility to further challenges on constitutional grounds.7 In 2017, a New York Times article portraying drug smugglers interdicted at sea as victims of unjust circumstances drew scrutiny of the Coast Guard’s operating procedures. Although the article was not itself disinformation, it crafted a narrative that could easily be co-opted for that purpose.
Taking an already controversial issue and building a disinformation campaign around it to further incite controversy is a well-worn tactic in the disinformation playbook. The Coast Guard could be vulnerable to these tactics because it routinely conducts operations that touch on highly visible and controversial issues such as immigration, fisheries and regulatory enforcement, national defense, and disaster response. In addition, while the Coast Guard has thus far not been targeted in protests of police militarization, as both a military and law-enforcement organization it could be susceptible to future “defund the police” campaigns.
Narcotraffickers, antigovernment and anti–law enforcement movements, conspiracy theorists, and a variety of other foreign state and nonstate actors all would stand to benefit from degrading Coast Guard operations—and have ample incentive to do so. So, what would it take for them to wage a campaign to undermine domestic and international support for the Coast Guard? With AI assistance, surprisingly little. Ask AI for a strategy to plan the campaign and it will provide an outline, examples to model, and links to more specific information. Ask it which audiences should be targeted for maximum effect and it will provide that, too. Want some insight into what aspects of the counterdrug mission are most likely to be controversial and divisive? No problem, it will make a list. Occasionally a query might trigger programmatic guardrails and it will refuse to fulfill the request. No sweat. There are dozens of online forums devoted to sharing how to “jailbreak” AI to get the desired output.
Once a campaign strategy is mapped out, what about creating media content? This is where AI shines. What would otherwise take considerable time and technical savvy to produce is short work with AI assistance. One can churn out blogs, convincing-sounding anecdotes, ersatz news reports, exposés, and research papers in minutes. Want a message in different languages? The latest ChatGPT iteration is fluent in at least 26 languages.8 Crafting memes, images, or audio/video? There is an app for that, too. Several, in fact, and specialized AI tools to tie those apps together. HuggingGPT, for example, can connect multiple AI models and machine-learning programs to complete complex tasks across several modalities and domains. Indeed, one need not have access to a state-sponsored troll farm to quickly generate huge volumes and varieties of disinformation.
More alarming, generative AI is still in its infancy. ChatGPT has already gone through several evolutionary upgrades since its initial public release in 2022. Calls for a general pause on AI development, spurred by concerns it might be hurtling humanity toward calamity, have for the most part gone unheeded.9 On the contrary, the race to be the world leader in AI is a U.S. national security objective.10
Disinformation Mitigation: Difficult But Necessary
There is an urgent need for a comprehensive approach to mitigate the disinformation threat, but achieving that goal is fraught with challenges, both technical and political.
While most can agree that countering disinformation is important, achieving consensus on how best to do so has proven elusive. A recent Department of Homeland Security (DHS) effort ignited a political firestorm and was ultimately unsuccessful. The episode highlights the considerable political sensitivities that counter-disinformation efforts must overcome to succeed.
DHS began addressing the disinformation threat before generative AI went mainstream. It got a nudge to act from a 2022 Office of the Inspector General (OIG) report that highlighted the need for a department-wide strategy to counter disinformation in social media:
Although DHS components have worked across various social media platforms to counter disinformation, DHS does not yet have a unified department-wide strategy to effectively counter disinformation that originates from both foreign and domestic sources. DHS faced challenges unifying component efforts because disinformation is an emerging and evolving threat. . . . Without a unified strategy, DHS and its components cannot coordinate effectively, internally, or externally to counter disinformation campaigns that appear in social media.11
In a concerted effort to address this shortcoming, DHS established a Disinformation Governance Board in May 2022. The board was to coordinate department activities related to disinformation that targeted the U.S. population and infrastructure. However, it quickly became a lightning rod for criticism, with some likening it to an Orwellian “Ministry of Truth.” Three months after its creation, the board was disestablished, and its chairperson resigned amid a torrent of personal attacks.12 Some observers mused that the Disinformation Governance Board seemed itself to have fallen victim to a disinformation campaign.13
The board’s demise, and the resulting effects on DHS’s counter-disinformation strategy, left the Coast Guard without a department-level structure to frame its own mitigation effort. Recalibrating the DHS approach will take time, but the Coast Guard should not delay action in the interim. While remaining aligned with DHS’s ongoing efforts, the Coast Guard should simultaneously study other approaches and explore promising technical solutions to mitigate risk.
The Department of State developed a “Disarming Disinformation” strategy that it executes through its Global Engagement Center. Its strategy is based on the principle that the best way to counter disinformation is to provide accurate information and promote transparency. It conducts a variety of activities, including working with the private sector to identify and remove disinformation from its platforms, undertaking proactive countermessaging efforts, and building resistance to disinformation influence through sponsorship of media literacy and critical-thinking programs, including interactive games.14 While specifically tailored for countering foreign disinformation (thus avoiding the controversy of analyzing domestic sources), the center’s strategy has yielded several best practices. For the Coast Guard in particular, basing the strategy on transparency aligns with a recent senior-level initiative to promote accountability and transparency throughout the service.15
In addition, promising efforts are underway to develop technical solutions to counter deepfakes and other synthetically generated media. MIT Lincoln Laboratory, for example, built a Reconnaissance of Influence Operations (RIO) system to automatically detect and analyze social media accounts spreading disinformation. In one experiment, the RIO system analyzed 28 million social media posts from one million accounts and detected disinformation accounts with 96-percent precision.16 Other efforts to validate the authenticity of media content include building algorithms and AI tools to analyze content, verify its source, and identify patterns that link various disinformation sources. While technology is unlikely to produce the equivalent of a true/false test like the top in Inception, it will be an important tool in identifying and exposing false content.
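The inner workings of systems such as RIO are not public, but the general pattern of training a classifier on labeled examples and then scoring new content for analyst review can be sketched briefly. The example below is a toy illustration only, using the open-source scikit-learn library with invented posts and labels; it represents an assumption about the general approach, not a description of RIO or any fielded system.

```python
# Toy sketch of content scoring for disinformation triage.
# Requires scikit-learn (pip install scikit-learn). The posts and labels are
# invented for illustration and do not reflect any real detection system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = previously flagged influence-campaign content.
posts = [
    "BREAKING: secret report proves the agency staged the rescue",
    "Share before they delete it: leaked memo exposes the cover-up",
    "Cutter returns to port after a 30-day patrol in the eastern Pacific",
    "Crews complete a joint search-and-rescue exercise with a partner nation",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feed a simple logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a high score would route it to a human analyst for review.
new_post = "Leaked memo proves the patrol was staged, share before they delete it"
probability = model.predict_proba([new_post])[0][1]
print(f"Estimated likelihood of influence-campaign content: {probability:.2f}")
```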
Ideally, in time, a comprehensive national plan to counter disinformation will align action across the U.S. government and private industry. However, for now, the threat appears to be outpacing mitigation efforts. As daunting as the challenges to strategy development at the service and organizational levels are, they must be overcome or the Coast Guard risks foundering in a sea of false narratives.
1. Daniel Howley, “Generative AI Will Create a ‘Tsunami of Disinformation’ during the 2024 Election,” Yahoo! Finance, 15 November 2023.
2. “The Code Book,” Media Manipulation Casebook, 7 January 2022.
3. CDR Doyle Hodges, USN (Ret.), P. W. Singer, and Emerson T. Brooking, “LikeWar: The Weaponization of Social Media,” Naval War College Review 72, no. 3 (Summer 2019).
4. Lakshmi Varanasi, “ChatGPT Is on Its Way to Becoming a Virtual Doctor, Lawyer, and Business Analyst. Here’s a List of Advanced Exams the AI Bot Has Passed So Far,” Business Insider, 5 November 2023.
5. Rhiannon Williams, “Humans May Be More Likely to Believe Disinformation Generated by AI,” MIT Technology Review, 28 June 2023.
6. Department of Homeland Security, Counter-Drug Operations: Fiscal Year 2020 Report to Congress (Washington, DC: 14 August 2020).
7. Joshua Goodman, “New Ruling Threatens Coast Guard’s High Seas Counter-Drug Mission,” Navy Times, 6 May 2022.
8. OpenAI, “GPT-4,” Openai.com, 14 March 2023.
9. Jyoti Narayan, Krystal Hu, Martin Coulter, and Supantha Mukherjee, “Elon Musk and Others Urge AI Pause, Citing ‘Risks to Society,’” Reuters, 5 April 2023.
10. U.S. Government Accountability Office, “How Artificial Intelligence Is Transforming National Security,” GAO.gov, 19 April 2022.
11. Office of the Inspector General, DHS Needs a Unified Strategy to Counter Disinformation Campaigns (Washington, DC: Department of Homeland Security, 10 August 2022).
12. Steven Lee Myers, “A Panel to Combat Disinformation Becomes a Victim of It,” The New York Times, 18 May 2022.
13. Steven Lee Myers and Eileen Sullivan, “Disinformation Has Become Another Untouchable Problem in Washington,” The New York Times, 6 July 2022.
14. U.S. State Department, Global Engagement Center.
15. Department of Homeland Security, U.S. Coast Guard, “Accountability and Transparency Review to Ensure a Culture Where Everyone Is Safe and Valued.”
16. Anne McGovern, “Artificial Intelligence System Could Help Counter the Spread of Disinformation,” MIT News, Massachusetts Institute of Technology, 27 May 2021.