Fabricating a convincing photo of someone doing something they did not do used to be complicated; altering images in programs such as Photoshop required significant time and effort to do well.
But not anymore.
Artificial intelligence (AI) has made it easy to generate lifelike pictures and videos—or deepfakes—of anyone doing practically anything. That capability already is wreaking havoc among the U.S. civilian population.
In January, X (formerly Twitter) had to block all searches for “Taylor Swift” after a torrent of hyper-realistic AI-generated pornographic images resembling the pop star went viral on the platform. The White House called the situation “alarming” and implored Congress to legislate a means to prevent this type of harassment. At the same time, teenagers across the country were getting caught using “nudification” apps to pervert photos of fully clothed classmates, triggering sexual harassment cases that test the limits of the law.
These same schemes could be used to hold sailors at risk.
Deepfake attacks can have devastating consequences for victims’ reputations, mental health, and even physical safety. They are a threat to everyone, but they present an especially pernicious hazard to sailors: They can be used for sex-based extortion—or sextortion—a crime that already targets service members.
Sextortion typically involves criminals cajoling their victims into providing sexually explicit images of themselves and then threatening to share those images publicly unless their demands are met. The motivation for these attacks is usually financial gain, but U.S. adversaries could use the same methods to coerce sailors into divulging classified information or sabotaging military assets.
Historically, sextortion required some indiscretion on the victim’s part, and sailors are taught to be wary of such schemes. Deepfakes, however, can be used against even the most cautious sailors, since they can be made to resemble anyone with even the slightest online presence. The FBI is already seeing a rise in reports of fake images and videos created from social media content and web postings, and the Navy is underprepared for when this issue starts to affect sailors.
The Navy might not be able to prevent deepfake attacks, but it can inoculate sailors against the most severe effects by training them how to respond. Initially, this could include a fleetwide safety standdown to inform sailors that (1) the threat exists, and (2) their commands are ready to help if they are victimized. In the longer run, the Navy should add a deepfake harassment module to its Sexual Assault Prevention and Response (SAPR) curriculum covering the nature of AI-enabled crime and what to do if it occurs.
Instances of sexual harassment and sextortion are most harmful when they go unreported. Right now, sextortion cases involving sailors are referred straight to the Naval Criminal Investigative Service, but that may be too intimidating, indiscreet, and inaccessible for many sailors to feel comfortable coming forward. To encourage reporting, a unit-level option is needed.
The response procedure would fit easily into the Navy’s existing SAPR architecture. Every unit has a SAPR victim advocate—a professionally trained and certified member of the command who provides a range of services to victims of sexual harassment or assault. These individuals are approachable and should help sailors feel a greater sense of comfort and privacy when reporting crimes of this nature. Treating deepfake harassment and sextortion like any other form of sexual harassment should make it more likely sailors will get the help they need, thereby making them less likely to yield to the demands of would-be perpetrators.
Perhaps the most important thing is to ensure commands foster an environment in which individuals feel safe reporting these crimes and believe they will be taken seriously if they do. Sailors often are reluctant to report sexual harassment or assault for fear of repercussions or humiliation. Given how realistic deepfake pornography can appear, the fear of seeking help may be further exacerbated: Will people believe me if I tell them this is not real? Who will see these pictures if I report this? Answering these kinds of questions beforehand is vital to getting people to talk.
It is not a question of if deepfake harassment and sextortion will be used against sailors but when. Protecting the force from these types of attacks is not just a matter of mental health and well-being—it is a matter of national security. While the tech industry and Congress are trying to figure out how to prevent these attacks, Navy leaders have the tools available in the SAPR program to minimize the harm they can cause.