Tear Down This Wall
Different machines speak different languages. In the past, hardware acted as the middleman, translating between analog and digital signals or between the outputs of incompatible systems. Humanity has always had an “iron curtain” dividing us from our machines. The mouse, keyboard, and aircraft joystick are humanity’s multiplexers: hardware that translates our intent, through mechanical input, into information the computer can comprehend. Voice recognition comes close to breaching the gap, but it still necessitates the mechanical translation of thought into speech. Now computers communicate directly, emulating that physical multiplexer by interpreting one another through “virtual machine” software. With the demise of the hardware multiplexer, there is no division among machines.
For humans, that barrier has been breached not only by BrainGate but also by the research team at the University of California, Berkeley’s Gallant neuroscience lab. Natural Movies, a program developed in 2011, translated brainwaves into rough images. In the January edition of PLoS Biology, the same team revealed a program that can translate those waves into words. Far more difficult is getting the human mind to accept input from computers.
Yet a glimmer of potential can be seen in the case of Jayne Bargent. In December 2011, the University College London Institute of Neurology and National Hospital for Neurology and Neurosurgery in Bloomsbury, England, virtually cured her of uncontrollably violent Tourette’s syndrome tics through deep brain stimulation. In this procedure, two small electrodes are implanted in the brain, releasing tiny shocks that balance out the brain’s electrical field.
Certainly this is a far cry from receiving information from a computer, but if a computer can listen to our minds, perhaps one day it can speak back just as directly. This and other technologies offer incredible potential: bionic limbs for amputees, speech for the mute, sight into human dreams. Each has independent military value: control of exoskeletons, voiceless communication, PTSD treatments, brain-scan detainee interrogation. A surreal world awaits us, and its clearest manifestation comes from these systems working in unison.
Considered in unison, these technologies offer a glimpse of future direct-brain-control suites for drones, fighter aircraft, warships, and other warfighting platforms. This is the cybernetically unified half of cloud combat. The virtual heads-up display in the F-35 helmet becomes a mere trinket when the pilot’s control surfaces migrate to his mind and become an integrated part of the aircraft’s processors. As situations change, the pilot’s reactions are executed instantly. Such mental control can guide other platforms in the field as well. Hundreds of drones could be airborne at once, controlled wirelessly by the thoughts of operators who will movements, visualize targets and nontargets, and think “fire” without the delay and error of manual controls.
A paperweight-sized package is on its way to the White House with a return address in Tehran. It contains a reminder of a battlefield technical failure, a tiny pink model of one of the United States’ most advanced drones, the RQ-170, lost over Iran in December 2011. The original sits in an undisclosed Iranian facility. This plastic toy is a mocking testament to the limitations of warfare conducted with wireless devices; it sends a signal that drones are limited to irregular conflicts and are unsuited for action against modern militaries with access to an effective cyber- and electronic-warfare suite.
Without robust autonomy, the ability to reason and execute independently, the requirement to remain wirelessly connected to a mother system or operator will always be the Achilles’ heel of remote systems. Drones will not always deploy to an interference-free environment such as the Afghanistan-Pakistan region, and even permissive environments are not safe: in 2009 a great controversy erupted when it was discovered that militants in Iraq were tapping unencrypted transmissions from American drones. Iran claims the RQ-170 was isolated from its controllers and spoofed into landing at an Iranian airbase by a fake GPS signal. Pilots and drivers are, barring treason, the only guarantors that platforms do not fall into enemy hands.
Even then, failures such as the Hainan Island incident can occur. On 1 April 2001, a U.S. EP-3 aircraft was damaged in a collision with one of the Chinese fighters that intercepted it. The plane made an emergency landing on Hainan Island, and its 24 crewmembers were detained for ten days. Hijacking is not an innate vulnerability of drone systems; that it can happen today merely reflects their current level of development. Commanders will need the guarantee of automated navigation and engagement if they are to rely on drones against conventional forces that field a reasonable suite of electronic-warfare capabilities.
Autonomy is necessary to provide communication security and mission continuity in an electronic-warfare environment. Currently drones require direct wireless control and have only the most basic of autopilot features. Developments in both hardware and software indicate that the independence and autonomy necessary to free drones from that vulnerable need for constant wireless connection are right around the corner. Soon they will have the ability to independently maneuver and make tactical decisions.
The technology already exists to allow maneuvering more intense than any human pilot could execute or survive. Experiments with quadrotors (four-propeller multicopters) at the University of Pennsylvania’s General Robotics, Automation, Sensing & Perception (GRASP) lab show how far automated flight can go. The control computer can independently solve and execute operational tasking, even maneuvering drones through vertical slits and onto Velcro landing strips mounted on a wall. The quadrotors themselves carry a bevy of onboard gyroscopes, sensors, and processors to independently develop and execute both subtle and aggressive tactical maneuvers. Working in unison, the system can rearrange entire formations of drones through tunnel obstacles, land smaller drones on top of hovering larger ones, and direct whole squads of quadrotors to build rudimentary structures.
The Eurofighter’s agile “relaxed stability” design makes it impossible for a pilot to fly unaided by a computer executing millions of minute adjustments. With an automated countermeasure system, the pilot is at times merely a secondary processor for navigation and maneuvering, capable of far less than his aircraft alone. Pilots have chutzpah, but silicon can still take more g-force than flesh.
Software is reaching the point where it can handle subtle mission planning as well. The Navy’s Aegis Combat System is a computerized command, decision, and weapon-control program conceived because humans were no longer quick enough to deal with the antiship-missile threat. Continuously improved since 1969, it is now a processing behemoth capable of categorizing, recognizing, designating, prioritizing, and giving engagement recommendations for legions of targets before the operator has determined whether the radar blip is a smudge. On Mars, a different AEGIS program (Autonomous Exploration for Gathering Increased Science) runs on the rover Opportunity. Because of limited downlink capacity, the rover’s cameras capture more images than it can transmit. AEGIS was created to allow Opportunity to autonomously collect data in real time on targets it considers of interest during long transits. This is far more efficient and potent than the ponderous process of pausing and wasting several communication cycles awaiting input from Earthbound operators.
More complicated tactical decisions will require greater processing power as well as effective programming. Miniaturization is already a part of daily life; a smartphone has more processing power than the computers that guided the Saturn V moon missions. On 20 February 2012, scientists from Australia’s University of New South Wales announced they had developed a transistor consisting of a single atom. Most modern transistors measure between 16 and 22 nanometers (nm). One month before the announcement, a 9-nm transistor introduced by IBM was seen as astounding, but the New South Wales transistor is smaller than 1 nm. With computers based on 1-nm transistors and thumb-sized solid-state terabyte drives, an Aegis combat system could fit into a large GRASP lab quadrotor.
With miniaturization and increasingly versatile programming, the maneuvering, intelligence gathering, and tactical decision-making of our warfighting platforms could be internally automated. Autonomy prevents the “onset stupidity” that comes from losing the connection to human controllers. These developments could even allow platforms such as tanks, armored personnel carriers, cargo aircraft, and small boats to fall back on basic automation if their human operators are killed. A pilot could be flanked by a series of deadly computerized wingmen. It is not a question of whether someone will implement this technology, but when. People may be uncomfortable letting go of the reins, but the software we have created is almost ready to engage with only minimal human involvement.
Machine with a Brain
No matter the reasoning power of the software, machines can still be fooled. The failure of the RQ-170 looms in the background. No pilot would accidentally land a U-2 in Bandar Abbas or accept an invitation from an air-traffic controller with a passable Texas accent to park his F-22 in Pyongyang. Humans can learn; we have a survival instinct and adaptive reasoning that reassures those who send us to accomplish a mission. Software may never have such ability, but it is not the only option available.
Dr. Kevin Warwick of England’s University of Reading, himself a self-experimented cyborg, has developed a tiny Roomba-like rolling robot that seems to putter randomly about his laboratory floor. But its movements are not random; they are the exploratory probing of a rat brain. His team grew a brain from cultured rat neurons and mounted it on a multi-electrode array (MEA), essentially creating a neuron circuit board. As the brain sends impulses to drive the robot about the room, information from its mounted sonar sensors is fed back into it. The device has no microprocessors, just brain.
Over the three months the cells live, the “animal cyborg” is not only taught by researchers but also learns independently to adapt to its surroundings. Warwick has gone on to build more complex MEA brains, working to upgrade his 100,000-neuron circuit brain to a 30-million-neuron one. Once the processes of the circuit brain are understood, a whole suite of sensor-array inputs could be augmented with inputs from tactical-combat processors that understand flight or terrain movement, mission profiles, and rules of engagement.
Integrating biological processors opens the possibility of systems solving problems in novel and remarkable ways. Such a system would be more survivable, adaptable, and unpredictable to the enemy, as well as harder to deceive. The striking aspect of the brain-based robot is that every brain learns, and each one learns differently. Such a multiplicity of “perspectives” would uncover problems that a uniform system would, by its nature, systematically ignore. A totally uniform system is easy to break once it is fully understood. Biological components not only give a system adaptive powers, they also give each device a uniqueness and unpredictability that make the cloud harder to deceive or destroy.
Let Loose the Drones!
A drone with reasoning skills requires only minimal human direction, much as a general guides an army rather than flies each plane. Motion, images, and words are not just commands by which a drone is directed; they are the foundation of a solid mission briefing. A human operator could mentally lay out mission parameters to a whole host of vehicles. Even if the direct link to the drones were lost, those parameters would be retained and the mission would continue just as it would with an on-scene human operator. Autonomous weapons would have the same target-verification and no-go criteria as their human counterparts, if not more stringent ones.
Autonomy vastly increases the number of drones that can operate effectively at once by reducing numerous operators to a select few overseers. Using GRASP and Aegis (Navy) software, clouds of devices can react to or execute complicated maneuvers and engagements in a combat environment with only minimal human guidance. Using AEGIS (NASA) software, drones can independently recognize and flag items they have observed. Part-biological processors can be taught to deal with methods of deception and to recognize friendly assets and facilities; through this learning, they can enrich combat performance.
In cloud combat, any device or weapon can be “willed” through the shattered hardware wall between mind and machine. Humans will receive inputs back from their platforms; the battlefield will no longer consist only of personnel linked by radios and high-tech helmets, as in 1990s net-centric warfare. The total force will be an Internet of things. The hardware of war will no longer be an auxiliary to the soldier but his extension, as the force becomes a cybernetic cloud.
A group of soldiers is knocked out when their Humvee collides with another vehicle during an ambush. The Humvee extricates itself, alerting the field commander. As it attempts to rush the soldiers out of the line of fire, an RPG strike throws it onto its side. A single soldier, an amputee now twice wounded, awakes and takes cover behind his burnt-out Humvee. He lays down suppressive fire with the M240 wielded by his enhanced bionic arm. Drones with modified acoustic targeting systems (which the QinetiQ Group designed to help soldiers in the field detect snipers) fly overwatch for the unit of dismounted soldiers, locating and killing enemy shooters on the rooftops.
The engaged soldier may not be able to communicate his situation verbally, but no lengthy explanation is needed when commanders receive by thought both what he is seeing and the sum of his changing, occasionally contradictory tactical views of the situation. As his assessment is transmitted to select nearby soldiers, drones, and vehicles, the wounded soldier designates enemies hiding in a structure simply by looking at them. Aerial drones receive and confirm the target, using terrain matching to verify its location, and pass it to artillery, which returns fire. An unmanned medical ground vehicle charges through heavy fighting to the unit’s location; the vehicle and a nearby field hospital have already been informed of each soldier’s vital signs through his implants.
In the wide-open spaces, manned and unmanned armor follow a wave of automated, fast-moving scouts miles ahead as mines and personnel are cleared. Tank commanders control maneuver and weapons by thought alone. Above, formations of armed drones prowl the skies, destroying enemy vehicles, providing instant counter-battery fire, and collecting intelligence. A fighter flies by, its flight officer interfacing with a legion of deadly autonomous air-to-air interceptors. At sea, rather than devoting precious ship space to hard and soft countermeasures, swarms of aerial and surface drones linked to a part-biological Aegis act as counter-weapon decoys, much as Sea King helicopters sortied as decoys against Exocet antiship missiles during the Falklands War. Simultaneously, the drones collect signals intelligence for the shipboard computers, allowing triangulation of enemy forces and weapons. More than a joint force, this cyber force fluidly joins warfighters with hardware for maximum impact.
The Human Dimension
Naturally, the idea of autonomous machines makes military leaders uncomfortable. Technology fails constantly, disappointing controllers and creators; the RQ-170 is the most glaring recent instance. But people have flaws as well. The 2007 death of Blue Angels pilot Lieutenant Commander Kevin Davis was caused by gray-out during a high-G maneuver. In 2011, USS The Sullivans (DDG-68) fired fourteen 5-inch rounds at a fishing boat because of human error. Investigation determined that the 1988 downing of Iran Air flight 655 was due in part to operators ignoring the assessment of the Aegis computers. The 2004 Abu Ghraib scandal was not a mistake of machines, nor was the 1968 My Lai massacre.
Physical limitations, oversight, and brutality are all within the purview of human decision-making. However, that human condition is also what can prevent war from becoming only a cold and terrible phenomenon devoid of morality. During the My Lai massacre, Warrant Officer Hugh Thompson Jr. showed why people are so essential in war by landing his helicopter between civilians and rampaging troops, defending and evacuating survivors from the mayhem. Thompson risked his life, limb, and career for what was right. Where machines feel no fury, they also show neither pity nor doubt. That Thompson had to act at all illustrates that people are fallible—as are machines. Humans will always be needed in the loop, especially on the ground when dealing with populations. Yet cloud combat will change commons control, force protection, logistics, and the way we fight.
Long ago, the arquebusier was mocked by the quick-loading bowman. Cavalry once made sport of the panicked musketeer. World War I pilots were seen as dandies playing with toys that would always remain mere novelties of war. Before World War II, French commanders could not imagine their tanks being used for anything other than infantry support. Now the Navy clings to centuries-old terms for fighting ships, such as “destroyer” and “cruiser,” thinking in terms of better ships of the line rather than revolutionizing the concept of the surface combatant itself. Similarly, the Air Force thinks of better fighters, even as it tries to apply to the vast Pacific mid-range superfighters designed for Cold War Europe.
If we rely on the familiar as a core force-planning concept, we are doomed. We must embrace the new opportunities before us: the ability to exert direct mental control over equipment while simultaneously developing new levels of autonomy. Enemies of the United States, aware of our strengths, are building saturation and area-denial systems to overwhelm our defenses and lay waste to behemoth U.S. platforms and facilities. Rather than continuing to build stronger versions of what we already have, we need to change the nature of our force. Change is inevitable, and the warfighter who refuses to stop living by the sword will die by the gun.