In the United States, approximately two-thirds of adults say they get their news from social media platforms. In recent years, however, much of the information on these platforms has been revealed to be inaccurate. As news and information from nontraditional sources become more readily accessible, many consumers find themselves unable to distinguish true news from news created to steer them toward a particular point of view.
While many know disinformation is a problem in written stories, few may be aware of the implications of faking techniques in video and audio files. In a world where people rely on social media every day, emerging technologies such as “deepfakes” (software that allows a user to superimpose the face of an individual onto a body double, creating a digital copy of that individual) could distort the flow of news or sow widespread confusion: disrupting day-to-day life, threatening government legitimacy, and depicting individuals and institutions in unflattering and embarrassing, albeit fabricated, situations.
In early 2018, a deepfake of former President Barack Obama surfaced online, attracting the attention of millions when the fake Obama called current President Donald Trump a “total and complete dipsh*t” and warned that “our enemies can make it seem like anyone is saying anything.” Although this deepfake was created by the well-known writer and director Jordan Peele in conjunction with BuzzFeed News, it demonstrated how easy it has become to fabricate convincing audio and video.
After the initial reaction, BuzzFeed’s Chief Executive Officer Jonah Peretti commented on how simple it was to create the video and how common the use of such techniques was becoming. “We’ve covered counterfeit news websites that say the Pope endorsed Trump that look kinda like real news,” he said. “Now we’re starting to see tech that allows people to put words into the mouths of public figures.” Both the audio and visual aspects of these videos were created using publicly available tools such as Adobe’s After Effects and artificial intelligence programs such as FakeApp. The same technique was used by the Belgian political party Socialistische Partij Anders to publish a video of President Donald Trump telling the Belgian people that he “had the balls to withdraw from the Paris climate agreement,” and “[they] should, too.” The video sparked outrage among Belgian viewers, who taunted the President, called Americans names, and insinuated that the President had overstepped his bounds and had no idea what he was talking about. In fact, the tirade did not end until the party publicly stated that the post was meant as a joke and was obviously fake. A spokesperson for the party said, “It is clear from the lip movements that this is not a genuine speech by Trump,” although the video was convincing enough for millions of viewers.
While these are not the only uses of this technology, they, along with revenge porn, represent some of the most common. Because our society values speed and convenience, many people fail to vet information or even look twice at videos to try to spot fakes. Such fake video and audio clips, skillfully deployed, could be powerful disinformation weapons in the hands of the nation’s adversaries. The repercussions could be innumerable and highly damaging.
As a 17-year-old female, at times I find it hard to differentiate between news sources that claim to be verified and those that actually are. According to PBS, this is a problem many teens share. In studies conducted by Stanford University, middle schoolers were unable to tell an advertisement from a factual news source; high school students failed to see the altered results of a graph created by the Minnesota Gun Owners Political Action Committee; and college students willingly cited .org sites without looking further into their references and sources. This is startling, as these children and young adults soon will be voters and decision makers. As the 2016 national election showed, the power of young adult voters should not be overlooked, and it is for this reason that the military and civilians alike need to be aware of the implications of fabricated news stories and deepfakes.
This form of technology could be a tool for the Department of Defense or law enforcement agencies to engage in face-to-face communication with members of terrorist groups or criminal gangs by imitating the faces and voices of their members. This would allow the military or the FBI, for example, to infiltrate organizations while minimizing dangerous in-person meetings. On the other hand, if used against Americans, it could convince voters to make decisions based on false statements, advertisements, and other fabricated information. Online scams powered by deepfakes could be extremely effective at bilking people out of money, personal information, or both.
The military should train its personnel on the risks of social media and the ways to distinguish fake news from real news. In addition, federal and state governments should create programs and services to educate people on the importance of checking sources and provide tips for telling the difference between fabricated and real news.
Deepfakes and manufactured news are becoming more widespread and carry significant risks for individuals and society as a whole. It is vital for U.S. military personnel, government leaders, and voters to understand how easy it is to make fake news, videos, and audio files and to have discerning eyes and ears to detect misinformation. The only guaranteed way not to be led astray is to check sources, read critically, and rely on multiple news sources to provide a holistic view of what’s happening in the world. If something doesn’t look or sound right, examine it closely and double- or triple-check it.