This month, Proceedings is dabbling in visual artificial intelligence (AI) tools. The information warfare issue is always difficult to illustrate—sailor at computer; Marine over shoulder of sailor at computer; 0s and 1s overlaid on a picture of a ship; and so on. A member of our editorial board suggested there might be no better way to visualize information warfare than to let AI do it—or at least help. The result is on the cover.
The question of how good tools such as ChatGPT and Midjourney (which was used for the illustrations here and on the cover) have become is more than academic. If an image surfaced tomorrow of a destroyer running over a fishing vessel, many people would immediately trust that it was true—“Seeing is believing.” It would not matter if, six hours later, the photo had been undeniably proved fake; many would continue to believe it nonetheless, and real harm would be done.
As you can see from the images here (with more in the online version of this article), publicly available AI artwork tools have some way to go. (Among other things, we spent many hours testing and refining queries to generate the few results worth sharing.) But the day is not far off when any malignant person—or troll farm—with a computer and a Twitter account will be able to generate and propagate images and stories that will convince even a normally skeptical person of untrue things, and do real damage.
Dezinformatsiya is the Russian word that gave rise to the English “disinformation.” The idea of disinformation has been with us much longer than the word, but its execution previously depended mostly on words and persuasion. And other words were the primary defense against it, an idea captured today in our collective obsession with “controlling the narrative.” But pictures do not persuade so much as prove. A corollary to “I cannot unsee it” is “I cannot unbelieve it, either.”
The essential flaw in the images on these pages is not that the Midjourney tool is bad. It is that the tool is untrained. If you want cover art for your next album, Midjourney can handle almost anything you can imagine. For naval matters, many of our queries used “USS Arleigh Burke (DDG-51)” in the prompt because early testing showed that the tool seemed to have learned her look and identity (for aircraft, the F-35 similarly produced good results). But the AI did not know most other ship classes. Not yet.
For example, in the image above, the prompt “the USS Gerald R Ford composed of computer code, sailing on stormy water, ultrarealistic” generated as one of its initial options a person vaguely resembling the late President Ford at an aircraft carrier bridge console.
Each iteration and variation we asked for helped train the AI. As the tool becomes more familiar with military hardware and people, the images generated will become better. (Prompts involving “sailors of the U.S. Navy” or “U.S. Marines” offered some curious uniform styles; let us hope the people in charge of changing the uniforms every few months do not ask for AI help designing them!)
The AI was very good at other elements, generating images in the style of some famous artists—Vincent van Gogh, for example, though not Claude Monet—and it was somewhat less successful with photographers Ansel Adams and Annie Leibovitz. Forgeries in historical styles could play a useful role in information warfare.
Even with the ship it seemed to know best, Midjourney struggled at times. The two images above came from the initial prompt “a high definition photograph of the USS Arleigh Burke, in which the ship’s hull is made of fiber optic cables [sic].” In the one on the right, the ship is very similar to a DDG-51, with some flaws (and no fiber optics). The one on the left, however, has replaced a white receiver above the bridge with a giant clock.
However limited, the public tools suggest there is much to be concerned about with state-owned tools. It may take AI tools to catch AI images, but more important will be teaching people to apply at least as much skepticism to pictures as they (sometimes) do to words. It may be decades—if ever—before an AI could lead a physical war of conquest. But the ability of AI to play a potentially decisive role in information warfare is arguably upon us.