Artificial intelligence has made it easy to create lifelike images of almost anyone doing practically anything. The Navy needs to prepare for when that capability is used to target sailors. (Shutterstock)


It Wasn’t Me: Deepfake Threats to National Security

By Lieutenants Sophia Costes and Justin Barnard, U.S. Navy
December 2024
Proceedings
Vol. 150/12/1,462
Now Hear This

Fabricating a convincing photo of someone doing something they did not do used to be complicated; altering images in programs such as Photoshop required significant time and effort to do well.

But not anymore.

AI-generated image of a man viewing deepfake data. Protecting the force from deepfake attacks is not just a matter of mental health and well-being—it is a matter of national security. Courtesy of the author, created using OpenAI's GPT-4.

Artificial intelligence (AI) has made it easy to generate lifelike pictures and videos—or deepfakes—of anyone doing practically anything. That capability already is wreaking havoc among the U.S. civilian population.

In January, X (formerly Twitter) had to block all searches for “Taylor Swift” after a torrent of hyper-realistic AI-generated pornographic images resembling the pop star went viral on the platform. The White House called the situation “alarming” and implored Congress to legislate a means to prevent this type of harassment. At the same time, teenagers across the country were getting caught using “nudification” apps to pervert photos of fully clothed classmates, triggering sexual harassment cases that test the limits of the law.

These same schemes could be used to hold sailors at risk.

Deepfake attacks can have devastating consequences for victims’ reputations, mental health, and even physical safety. They are a threat to everyone, but they present an especially pernicious hazard to sailors: They can be used for sex-based extortion—or sextortion—a crime that already targets service members.

Sextortion typically involves criminals cajoling their victims into providing sexually explicit images of themselves and then threatening to share those images publicly unless their demands are met. The motivation for these attacks is usually financial gain, but U.S. adversaries could use the same methods to coerce sailors into divulging classified information or sabotaging military assets.

Historically, sextortion required some indiscretion on the victim’s part, and sailors are taught to be wary of such schemes. Deepfakes, however, can be used against even the most cautious sailors, since they can be made to resemble anyone with even the slightest online presence. The FBI is already seeing a rise in reports of fake images and videos created from social media content and web postings, and the Navy is underprepared for when this issue starts to affect sailors.

The Navy might not be able to prevent deepfake attacks, but it can inoculate sailors against the most severe effects by training them how to respond. Initially, this could include a fleetwide safety standdown to inform sailors that (1) the threat exists, and (2) their commands are ready to help if they are victimized. In the longer run, the Navy should add a deepfake harassment module to its Sexual Assault Prevention and Response (SAPR) curriculum covering the nature of AI-enabled crime and what to do if it occurs.

Instances of sexual harassment and sextortion are most harmful when they go unreported. Right now, sextortion cases involving sailors are referred straight to the Naval Criminal Investigative Service, but that may be too intimidating, indiscreet, and inaccessible for many sailors to feel comfortable coming forward. To encourage reporting, a unit-level option is needed.

The response procedure would fit easily into the Navy’s existing SAPR architecture. Every unit has a SAPR victim advocate—a professionally trained and certified member of the command who provides a range of services to victims of sexual harassment or assault. These individuals are approachable and should help sailors feel a greater sense of comfort and privacy when reporting crimes of this nature. Treating deepfake harassment and sextortion like any other form of sexual harassment should make it more likely sailors will get the help they need, thereby making them less likely to yield to the demands of would-be perpetrators.

Perhaps the most important thing is to ensure commands foster an environment in which individuals feel safe reporting these crimes and believe they will be taken seriously if they do. Sailors often are reluctant to report sexual harassment or assault for fear of repercussions or humiliation. Given how realistic deepfake pornography can appear, the fear of seeking help may be further exacerbated: Will people believe me if I tell them this is not real? Who will see these pictures if I report this? Answering these kinds of questions beforehand is vital to getting people to talk.

It is not a question of if deepfake harassment and sextortion will be used against sailors but when. Protecting the force from these types of attacks is not just a matter of mental health and well-being—it is a matter of national security. While the tech industry and Congress are trying to figure out how to prevent these attacks, Navy leaders have the tools available in the SAPR program to minimize the harm they can cause.

Sophia Costes

Lieutenant Costes is attending Surface Warfare Officer Department Head School in Newport, Rhode Island. She holds a master’s degree in public policy, international and global affairs, from Harvard’s John F. Kennedy School of Government.


Justin Barnard

Lieutenant Barnard is attending the Submarine Officer Advanced Course in Groton, Connecticut. He holds a master’s degree in public policy, international and global affairs, from Harvard’s John F. Kennedy School of Government.


