By Ensigns Shane Halton and Chris O’Keefe, U.S. Navy
In 1969, then–Army Chief of Staff General William C. Westmoreland said during an address to the Association of the U.S. Army:
On the battlefield of the future, enemy forces will be located, tracked, and targeted almost instantaneously through the use of data-links, computer-assisted intelligence evaluation and automated fire control . . . I see battlefields that are under real or near-real-time surveillance of all types. I see battlefields on which we can destroy anything we locate through instant communications and almost instantaneous application of highly lethal firepower (Congressional Record, U.S. Senate, 16 October 1969).
It is hard to imagine that General Westmoreland would not be pleased with the ability of the United States to conduct Hellfire strikes from unmanned platforms such as the MQ-1 Predator, operating hundreds of miles from a friendly air base while a warfare commander watches the live feed half a world away. At the tactical level, an unmanned aerial system capable of surveying an area for hours before unleashing its lethal munitions is obviously a huge force multiplier. Strategic and ethical concerns aside, the tactical employment of lethal unmanned aerial vehicles (UAVs) is rightly heralded as one of the biggest U.S. military technological innovations of the post-9/11 age.
But the Navy, at its core, is not a tactical service. Ships at sea exist at the operational level of war. When it comes to unmanned-systems employment, the service continues to use tactical techniques that have been borrowed and cannibalized from ground employment. So at the operational level, Westmoreland's vision is far from being realized. As the Navy shifts from tactical to operational-level employment of unmanned systems, it must invest in the tools required to make data generated by these systems timely, relevant, and accessible.
UAVs Can Help with the Big Picture
The rapidity with which the Navy has cycled through various unmanned platforms illustrates the service’s identity crisis regarding how best to use the power of these systems. Scan Eagle, Fire Scout, BAMS-D (Broad Area Maritime Surveillance system, the naval equivalent of the Global Hawk): all of these unmanned platforms have uncertain and often unhappy development stories. A major reason the Navy continues to struggle in unmanned-systems development is the current disconnect between utilization at the tactical and operational levels: even though the technology and capabilities of unmanned systems have evolved and improved tremendously over the past two decades, our fundamental use and application of the data they produce has not. Developers will, and should, continue to improve the design, range, and payload capabilities of unmanned systems. However, successful employment of these systems over the next decade at the operational level will require a paradigm shift from the tactical employment of single UAVs supporting individual counterterrorism missions to one of multiple systems working in tandem to maintain strategic, real-time, and persistent awareness of a much larger battlespace. General Westmoreland’s vision was one of localized, tactical battle awareness. If the Navy is to make the next evolutionary leap in unmanned-systems development, it must leverage UAVs to provide awareness at the operational and strategic levels.
Currently, there are significant technological barriers to using UAVs in support of broad-scale surveillance at the operational level of war. These problems do not stem from any deficiencies in the collection capabilities of the current platforms. The newest generations of UAVs, notably the BAMS-D, are optimized to cover long distances for long periods of time. Rather, the deficiencies in operational-level theater awareness are largely due to a lack of available processes and infrastructure to effectively collate and curate the vast amount of content being produced by the dozens of disparate unmanned systems currently being deployed in support of localized tactical missions.
Each UAV feed is just one stream of data flowing into a command center. Building a real-time strategic picture requires merging this stream quickly and efficiently into the overall operational picture. Unfortunately, as the structures now stand, this requires too much manpower and bandwidth to be practical. Therefore, when looking for avenues to invest in the future of unmanned systems, the focus should be placed on developing new and more robust data-management systems and unmanned (automated) analytical tools. Only then will we actually be able to realize General Westmoreland’s vision of “computer-assisted intelligence evaluation” and thereby utilize our unmanned systems to their fullest potential.
Lessons from the Commercial Sector
To better understand which avenues of development should be pursued in future research and development (R&D), it is helpful to examine the current process of employing UAVs to support naval operations. For the information derived from a UAV’s operations to be exploited and disseminated quickly, it first must be converted, analyzed, and then presented in a format that a commander can actually use (on a bandwidth-constrained naval ship, sending raw video is often out of the question, and in any case unanalyzed footage offers little value to a commander or other decision maker). This conversion and analysis is a tedious, time-consuming task. Imagine that during a 14-hour mission, a BAMS-D vehicle spends six hours over a target harbor. During this period, it detects one possible fast-attack craft departing the harbor and nothing else of interest. Nevertheless, to glean this nugget of operational wisdom, a ground-based analyst will have to sit through 14 hours of video for those 10 minutes of action. The analyst will then post his report on a website. This report is, in turn, downloaded by the carrier’s analysts, and more than likely disregarded.
Furthermore, nowhere is the data stored as part of a longer-term trend analysis effort, or in a database that can be queried by operators and analysts alike. The reality is that while a massive amount of data is generated by unmanned systems, there is actually very little operational benefit gained, even at the tactical level, and certainly not at the theater commander level. The operational value of the BAMS-D flight information in enhancing the commander’s battlespace awareness is further degraded by the fact that at best, the information is half a day old when the theater commander receives even the first chop of the report—which, in a combat environment, is more than likely too late.
This is why unmanned-systems development in the next decade should focus on the ability to automate video and imaging processing while improving back-end data collation and analysis, rather than on simply improving the capabilities of the collecting platforms themselves. However, there is no need to start the R&D from scratch. In fact, recent developments in knowledge management and data-discovery technologies in the commercial sector offer two ways that could be adapted (relatively) easily to improve today’s inefficient process of exploiting and fusing the unmanned systems’ contribution to the strategic picture.
First, technology that can automatically analyze image files in order to identify objects and activities already exists in the form of Facebook’s facial-recognition software. Once it scans a face, it is capable of recognizing that face in future images. Similarly, video-scanning software could allow analysts to “teach” the software the type of ships being sought so that when a UAV spotted those vessels, it could immediately alert the analyst. This would save analysts time and, once the technology reached a certain level of maturity, it could possibly replace preliminary human analysis of video feeds altogether.
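As a toy sketch of how such “taught” recognition could work, the following compares an incoming video frame’s feature vector against reference signatures for vessel classes of interest and raises an alert on a close match. The feature vectors, vessel classes, and match threshold here are invented for illustration; they are not drawn from any fielded system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "Taught" reference signatures: one feature vector per vessel class
# of interest (purely illustrative values).
REFERENCE = {
    "fast-attack craft": [0.9, 0.1, 0.4],
    "patrol boat": [0.2, 0.8, 0.5],
}

def classify_frame(frame_vector, threshold=0.95):
    """Alert if a frame's features closely match a taught vessel class."""
    best_label, best_score = None, 0.0
    for label, signature in REFERENCE.items():
        score = cosine(frame_vector, signature)
        if score > best_score:
            best_label, best_score = label, score
    if best_score >= threshold:
        return f"ALERT: possible {best_label} (match {best_score:.2f})"
    return None  # nothing of interest; no analyst interruption
```

A real system would derive the feature vectors from a trained neural network rather than hand-set lists, but the alert-on-threshold flow would be similar.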
Make Intelligence Accessible
Another challenge is how to ensure that finished intelligence derived from UAVs is readily available and accessible to future analysts and theater commanders. In addition to requiring updates on the current disposition of hostile units, the warfare commander is likely going to be interested in their historical disposition in order to gain a deeper understanding of normal operating patterns. Such background makes it easier to identify what type of behavior is out of the ordinary and, if not dangerous, at least interesting and worthy of further scrutiny.
Historical analysis, developing a ‘baseline’ of knowledge about the adversary’s activity, is now almost entirely a function of human analysts. Again, in this regard the Navy today is a long way from General Westmoreland’s vision of computer-assisted intelligence evaluation. For example, once the BAMS-D report described earlier reaches the carrier, it will be read by a dozen analysts and watchstanders as part of their daily digest of information, just one more drop in the ocean of knowledge. Over time these people, supported by the information gleaned from BAMS-D missions, may develop an understanding of how often fast-attack craft operate from that particular port, and they may use that information to improve the commander’s battlespace awareness. A large increase in the number of craft operating from that base may spell danger for the carrier, and if the analysts have built up a baseline, they can understand and report that fact to the commander. However, this method relies on human entry and indexing—and in today’s manpower- and funding-strapped Navy, accurate data entry, especially in the intelligence fields, tends to fall by the wayside.
There is a better way, one that uses existing commercial technology to augment human analysis and indexing. Ultimately all the raw video and collection data generated from the BAMS-D flights is stored somewhere, on a website or in a database. By running automated data-discovery software on this repository, commanders could accurately index all activity collected during UAV flights, both past and present. The commander could then easily query this indexed data to gain critical information at a moment’s notice. For example, an analyst could run a query against the database to learn what type of activity had occurred at a port over a certain period of time.
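A minimal sketch of such an activity index, assuming events have already been extracted from the UAV feeds, might look like the following. The table layout, port names, event labels, and timestamps are hypothetical, invented to mirror the port-activity example above.

```python
import sqlite3

# Hypothetical activity index: each row is one event an automated
# indexer extracted from a UAV feed.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE activity (
    observed_at TEXT,   -- ISO timestamp of the detection
    location    TEXT,   -- e.g. a port of interest
    event       TEXT    -- what the indexer tagged
)""")
conn.executemany(
    "INSERT INTO activity VALUES (?, ?, ?)",
    [
        ("2012-08-02T16:30", "port X", "fast-attack craft departed"),
        ("2012-08-05T09:10", "port X", "fast-attack craft departed"),
        ("2012-08-06T11:45", "port Y", "submarine departed"),
    ],
)

def activity_at(port, start, end):
    """All indexed events at a port within a time window, oldest first."""
    rows = conn.execute(
        "SELECT observed_at, event FROM activity "
        "WHERE location = ? AND observed_at BETWEEN ? AND ? "
        "ORDER BY observed_at",
        (port, start, end),
    )
    return rows.fetchall()
```

With the index in place, the analyst’s question about a port over a period of time becomes a one-line query, for example `activity_at("port X", "2012-08-01", "2012-08-31")`.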
An activity database is a simple concept, and using databases to support decision making is nothing new. It’s the speed and thoroughness of indexing and data analysis that needs to change, and, more important, the data needs to be made more accessible to a broader audience. Getting information from military databases today often requires human analysts to input a complex set of search queries, typically involving long strings of Boolean logic. There are several data-discovery tools available that would negate the need for complex queries. They would enable a commander to ask questions of the database in everyday language, letting automated data-discovery tools provide the answers. In the same way we ask the iPhone’s electronic assistant Siri where the nearest restaurant is, analysts and theater commanders need to be able to ask in plain English important analytical questions about current and historical activity.
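As a deliberately simplified illustration of replacing Boolean query strings with plain English, the sketch below maps a question to a structured query by vocabulary matching. Real natural-language interfaces are far more sophisticated; the port and event vocabularies here are invented for the example.

```python
# Toy vocabularies a natural-language front end might recognize
# (illustrative only).
KNOWN_PORTS = ["port X", "port Y"]
KNOWN_EVENTS = {
    "submarine": "submarine departed",
    "fast-attack": "fast-attack craft departed",
}

def parse_question(question):
    """Map a plain-English question to a structured (event, port) query."""
    q = question.lower()
    port = next((p for p in KNOWN_PORTS if p.lower() in q), None)
    event = next((v for k, v in KNOWN_EVENTS.items() if k in q), None)
    return {"event": event, "port": port}
```

The structured result could then be fed straight into the activity database, sparing the commander from ever writing a Boolean query string.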
Watson Never Forgets
The highest-profile data discovery and analysis support tool of this kind is IBM’s Watson, which in 2011 made waves in artificial intelligence and data-discovery circles by beating the all-time Jeopardy champion, Ken Jennings, in a televised broadcast.
Watson can answer questions posed in natural language by scanning unstructured and structured data sets. It then processes this data through semantic algorithms, creating an understanding of the relationships between different pieces of data. These algorithms allow Watson to relate a certain time (say 1630) and date (2 August 2012) to the departure of a number of fast-attack craft (five) from a port of interest (port X)—the same information that a human analyst would glean from the same report. The difference is that Watson can process 500 gigabytes, the equivalent of a million books, per second. And Watson never forgets anything.
Watson differs from human analysts in another crucial respect. During the aforementioned Jeopardy match, it rang in with its (usually correct) answer and a degree of certainty. It did not ring in if that degree fell below a threshold; Watson only answered if it was, say, 80 percent certain of the answer based on available information. While human analysts may be able to give commanders degrees of certainty regarding their assessments, it seems unlikely they could say with a straight face that they were 86.75 percent certain based on available information.
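The answer-only-above-a-threshold behavior can be sketched in a few lines. The candidate answers and confidence scores below are invented for illustration; a real engine would compute them from its evidence.

```python
def answer(question, candidates, threshold=0.80):
    """Return the best-scoring candidate only if its confidence clears
    the threshold; otherwise abstain rather than guess."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return best, confidence
    return None, confidence  # abstain: not certain enough to ring in

# Illustrative candidate answers with machine-assigned confidence scores.
candidates = {
    "five fast-attack craft": 0.87,
    "three patrol boats": 0.09,
}
```

Calling `answer("What departed port X on 2 August?", candidates)` would return the high-confidence answer with its score, while a question whose best candidate scored below 0.80 would produce an abstention instead of a guess.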
It is also important to highlight that because Watson understands natural language, no special programming skill is required to use it. Imagine a team of analysts at the Office of Naval Intelligence who did nothing all day but feed a Watson mainframe the latest UAV reports. That mainframe would then be accessible to all commanders via a website on the SIPRNet; it would crunch all the data and deliver answers immediately, much as Google’s servers do now. No individual aircraft carrier would need its own mainframe. Analysts could ask Watson “when was the last time a submarine left port X?” and begin their research from the answer, as opposed to picking through months of reports one by one.
The next ten years will witness huge advances in unmanned system technologies. Sensors will become more accurate and be able to collect against targets at longer ranges. A combination of improved battery technology coupled with advances in the development of solar panels will give UAVs the ability to operate over longer distances and remain on station for longer. Unmanned systems will gain in lethality as weapon systems continue to improve. “Swarm” algorithms will allow unmanned systems to calculate the optimum way to engage an enemy and perhaps even conduct an attack autonomously.
However, these advances alone do not mean unmanned systems will enhance the warfare commander’s understanding and mastery of the battlespace, particularly at the operational and strategic levels. To ensure that future unmanned systems are utilized to their utmost, we must upgrade our capacity to ingest, process, and disseminate the information they produce.
Perhaps the greatest barriers to adopting unmanned analysis programs Fleet-wide will be cultural. The switch from paper to digital charts a decade ago caused an uproar that still echoes today; it will likely be a while before a theater commander instinctively trusts an automated data-discovery program that gives him a statistically probable but counterintuitive answer that runs against his gut instinct and operational experience. Still, given the expanded analytical capability offered by these tools, we will (eventually) adjust to the fact that unmanned machines have as much to offer us in terms of enhancing our cognition as they do in enhancing our eyes and ears.
Ensign Halton serves as the intelligence officer for VFA-41 and is stationed at Lemoore Naval Air Station, California. He served as an enlisted intelligence specialist before commissioning as an intelligence officer through the STA-21 program.
Ensign O’Keefe serves on the staff of Commander, U.S. Naval Forces Southern Command. He writes and blogs about naval policy and national strategy, and is the producer of the video series A History of the Navy in 100 Objects.