The Marine Corps has expanded small-arms combat modeling to support decision-making. Small-arms combat modeling converts human-performance measurements into simulated combat engagements, replacing a points-based scoring system that grew in part from the need to mass-evaluate marksmanship during World War II.

Points-based outcomes are largely self-explanatory: Shooter A scores more points than shooter B by putting more rounds more accurately into a paper target. The combat-modeling system replaces that crude system with one in which measures are more applicable to combat: Shooter A wins 63 percent of gunfights against shooter B, or squad A wins 75 percent of engagements against squad B while suffering fewer casualties. Presenting the outcomes this way is a powerful tool for conveying readiness to leaders.

No advanced software is required to run these simulations. Anyone can calculate outcomes for single shooters using common spreadsheet programs such as Microsoft Excel. Some complexities arise with squad-level engagements, but with some additional programming knowledge, they can be done, too. Instructors could begin briefing their commands with such information tomorrow.

**Combat Marksmanship and Lethality**

Points are easy to score and manage for large assessments, but points-based scoring systems do not account for all three human-performance elements: accuracy, speed, and variability. Such systems typically set artificial limits on either speed or accuracy while conducting a drill. (Variability may not be measured at all.) For example, shooters may fire four shots (two standing, two kneeling) with a time limit of eight seconds. Points are awarded based on where rounds strike the target, but there is no performance reward for completing the drill in four seconds vice seven. Yet speed is a crucial component of lethality, and nearly every points-based system fails to appropriately measure or integrate speed into outcomes. Time limits thus directly contradict a key element of *Force Design 2030*, which states, “The individual/force element which shoots first has a decisive advantage.”^{1} The service relies on points systems for logistical simplicity, not for their ties to lethality.

There have been recent efforts, however, to overhaul the system. A capabilities-based assessment in 2018 identified gaps in the Marine Corps marksmanship program. Under the points-based standard, combat marksmanship training rewarded a high score rather than lethality. In response, program managers began a series of intensive experiments and program changes that revised training doctrine. Initial steps were direct and scalable. For example, the Annual Rifle Qualification became the most significant change to primary qualification since 1907.^{2} Bullseye targets were replaced with threat targets with scoring zones aligned to actual human anatomy. Lethal hits were determined by the likelihood of inflicting maximum physiological injury on an enemy combatant. By linking performance to the operational demands of combat marksmanship, these changes created a more realistic test, but they did not solve the points-based problem.

An alternative to points-based scoring is hit-factor scoring, which scores accuracy in points and divides those points by the time taken to complete the drill. Hit-factor scoring thus incorporates both accuracy and speed into a single measure. Although a step forward, this approach was designed to rank competitors. In particular, hit-factor scoring does not account for variability, the expected range of performance. A good outcome might be a solid representation of individual potential or of sheer luck, and variability helps determine which is more likely. For example, someone might score barely enough to be an expert, but is that a sharpshooter having a good day or an elite shooter having an off day? Without some information about the consistency and reliability of a marksman's performance, there are limits to what should be interpreted from the outcome.
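As a concrete illustration, hit-factor scoring reduces to a simple ratio. The drill values below are invented for the example, not drawn from any actual course of fire:

```python
def hit_factor(points: int, time_seconds: float) -> float:
    """Hit factor = points scored divided by time to complete the drill."""
    return points / time_seconds

# Two hypothetical shooters with identical accuracy but different speeds:
a = hit_factor(points=45, time_seconds=6.0)   # 7.5
b = hit_factor(points=45, time_seconds=9.0)   # 5.0
# Shooter A's higher hit factor reflects speed, but neither number says
# anything about how consistently either shooter produces that result.
```

The single ratio is exactly why the approach ranks competitors well yet reveals nothing about variability.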

Small-arms combat modeling is an emerging subfield of modeling and simulation with the potential to provide better insight into squad-level lethality. While large-scale combat modeling examines tanks, artillery, or maneuver elements to determine outcomes, small-arms combat modeling endeavors to better represent human performance as integral to the outcome. Small-arms combat modeling focuses on squad-level engagements while still retaining the capability to evaluate multiple factors of a scenario. Rather than points-based or hit-factor scoring, small-arms combat modeling samples from human performance observations (e.g., marksmanship data) to incorporate accuracy, speed, and variability into the measure. Observed data become the model’s input and then go through thousands of simulations to produce a probability of victory. For example, shooters in the 80th percentile would win 80 percent of their engagements.

**The Basics of Small-Arms Combat Modeling**

This approach uses two different evaluation methods: Monte Carlo simulations and Markov Chains. Monte Carlo simulations are a common technique to identify probable outcomes despite random factors. They involve sampling from a range of possibilities rather than assuming identical performance every time. Based on a probabilistic distribution, thousands of samples can be drawn to determine the likelihood of different outcomes.
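A minimal sketch of the Monte Carlo idea follows. The split-time mean and standard deviation are invented stand-ins, not Marine Corps data; the point is only that repeated sampling from a distribution yields a probability rather than a single fixed score:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

MEAN_SPLIT = 0.9   # seconds between shots (assumed value)
STD_SPLIT = 0.15   # shot-to-shot variability (assumed value)
TRIALS = 100_000

# Sample a split time from a normal distribution many times and count
# how often the shooter breaks a one-second split.
under_one_second = sum(
    1 for _ in range(TRIALS) if random.gauss(MEAN_SPLIT, STD_SPLIT) < 1.0
)
print(f"P(split < 1.0 s) ~ {under_one_second / TRIALS:.3f}")
```

Because the draws vary, the same shooter produces a range of outcomes, which is precisely what a single points score cannot express.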

The Marine Corps has advanced this process by applying Monte Carlo simulations to shooter lethality.^{3} A simulation draws on speed, accuracy, and variability data collected from human performance. Speed is how fast one shoots, and accuracy can be connected to lethality based on what the instructors consider a lethal hit. Further distinctions can be made for nonlethal or neutralizing hits; in any case, instructors define the accuracy standards. Variability is the key difference. Standard deviations help create a range of possibilities that defines expected performance. A Monte Carlo simulation samples from these observations to identify, for a given shot, whether it would be accurate and how fast it would be fired. Even the same shooter can vary in speed and accuracy from shot to shot, just as in real life.
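The per-shot sampling described above can be sketched as follows. The mean, standard deviation, and hit probability are invented illustrations of the kind of observations instructors would collect on the range:

```python
import random

random.seed(1)

SHOT_TIME_MEAN, SHOT_TIME_STD = 1.2, 0.3   # seconds (assumed values)
P_LETHAL_HIT = 0.6                          # instructor-defined standard (assumed)

def sample_shot():
    """Draw one simulated shot: (elapsed time, lethal hit or not)."""
    # Clamp at a small positive floor so a sampled time is never negative.
    elapsed = max(0.1, random.gauss(SHOT_TIME_MEAN, SHOT_TIME_STD))
    lethal = random.random() < P_LETHAL_HIT
    return elapsed, lethal

elapsed, lethal = sample_shot()
```

Each call returns a different speed and a different accuracy result, mirroring real shot-to-shot variation.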

A Markov Chain, meanwhile, recreates the structure of the desired behavior. A Markov Chain is a stochastic model depicting a series of possible outcomes in which the probability of the next event depends on the preceding result.^{4} When employing a Markov Chain to evaluate marksmanship, the goal is to account for multiple independent skills and incorporate them into one potential engagement. For example, time to first shot involves different behaviors than reloading, and the Markov Chain can represent them as a sequence. Its complexity depends on the granularity of data from the marksmanship test. If the test were designed to differentiate time spent aiming the rifle (as might be determined from a one-shot draw) from time spent moving into position (as could be determined through shot-to-shot times for drills), then these behaviors could be represented in a Markov Chain. More important, the chain itself should represent the combat behaviors instructors are trying to train. The sequence becomes more than a modeling advantage: it provides a tangible structure for the discussion and development of more productive marksmanship tests.
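A toy version of such a chain might look like the sketch below. The states and transition probabilities are invented; in practice they would come from the behaviors the instructors choose to test:

```python
import random

random.seed(7)

# From each state, a list of (next state, probability) pairs.
# "stop" is the absorbing state that ends the sequence.
TRANSITIONS = {
    "draw":   [("fire", 1.0)],
    "fire":   [("fire", 0.7), ("reload", 0.2), ("stop", 0.1)],
    "reload": [("fire", 1.0)],
}

def walk(start="draw"):
    """Follow the chain from the start state until it reaches 'stop'."""
    path, state = [start], start
    while state != "stop":
        r, cumulative = random.random(), 0.0
        for next_state, p in TRANSITIONS[state]:
            cumulative += p
            if r < cumulative:
                state = next_state
                break
        path.append(state)
    return path

sequence = walk()  # e.g., draw -> fire -> fire -> reload -> fire -> stop
```

Each walk produces one plausible behavioral sequence, and the probability of the next behavior depends only on the current one, which is the defining Markov property.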

When the process is put into action, instructors and data analysts build a Markov Chain based on a marksmanship test. The Markov Chain should have the skills required for combat marksmanship, although instructors should determine the complexity for a given test—data analysts provide feedback on whether the proposed drills would provide enough data segmentation for modeling. Next, human performance observations should be collected as the means and standard deviations of speed and accuracy on given drills; means represent the average of performance, and standard deviations represent variability.

Monte Carlo simulations sample from these ranges for individual behaviors, and the Markov Chain determines the next step in the process, with time and shots fired accumulating until one shooter neutralizes the other. This represents one outcome in which shooter A defeats shooter B. The process can be run many thousands of times to determine a probability, in which, for example, shooter A wins 63,000 of 100,000 simulations and therefore has a 63 percent chance of victory.
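Putting the pieces together, a stripped-down duel might be sketched as below. All performance numbers are invented, and the single-behavior chain here (fire until a lethal hit) is far simpler than a real model would be:

```python
import random

random.seed(3)

def time_to_neutralize(mean_split, std_split, p_hit):
    """Accumulate sampled split times until this shooter lands a lethal hit."""
    t = 0.0
    while True:
        t += max(0.05, random.gauss(mean_split, std_split))
        if random.random() < p_hit:
            return t

def duel(trials=100_000):
    """Estimate shooter A's probability of victory over many simulated fights."""
    wins_a = 0
    for _ in range(trials):
        # Assumed profiles: A is faster but less accurate; B slower, more accurate.
        t_a = time_to_neutralize(0.8, 0.2, 0.5)
        t_b = time_to_neutralize(1.0, 0.2, 0.6)
        if t_a < t_b:
            wins_a += 1
    return wins_a / trials

p_victory = duel()
print(f"Shooter A wins {p_victory:.0%} of simulated engagements")
```

The output is the kind of figure the article describes: not a points total, but a probability of victory derived from speed, accuracy, and variability together.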

Moreover, the sequence can include multiple shooters. This step gets complicated and usually requires advanced programming to handle the computations. Additional factors also become important, such as how wounded personnel affect the simulation or what serves as a scenario-termination rule. Still, the utility of probability-based outcomes becomes even more potent. Squad A, as modeled by marksmanship observations from 14 different Marines, can engage and defeat squad B 75 percent of the time while suffering casualties of 4 wounded and 2 killed. Risk assessment also becomes tangible: not only how often the squad wins, but also what victory will cost. After all, true lethality is about defeating the enemy *and* returning home alive.
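A highly simplified squad-on-squad sketch is below. Each shooter is an invented (mean split, variability, hit probability) triple, shots are processed in time order, and each lethal hit removes a random opposing shooter. A real model would add wounded states, target selection, and explicit termination rules:

```python
import heapq
import random

random.seed(11)

def engagement(squad_a, squad_b):
    """Simulate one engagement; return True if squad A wins."""
    alive = {"A": list(range(len(squad_a))), "B": list(range(len(squad_b)))}
    profiles = {"A": squad_a, "B": squad_b}

    def next_split(side, idx):
        mean, std, _ = profiles[side][idx]
        return max(0.05, random.gauss(mean, std))

    # Priority queue of pending shots: (time, side, shooter index).
    events = [(next_split(s, i), s, i) for s in "AB" for i in alive[s]]
    heapq.heapify(events)

    while alive["A"] and alive["B"]:
        t, side, idx = heapq.heappop(events)
        if idx not in alive[side]:
            continue  # shooter was neutralized before this shot
        _, _, p_hit = profiles[side][idx]
        enemy = "B" if side == "A" else "A"
        if random.random() < p_hit:
            alive[enemy].remove(random.choice(alive[enemy]))
        heapq.heappush(events, (t + next_split(side, idx), side, idx))
    return bool(alive["A"])

# Assumed profiles: squad A is uniformly faster at equal accuracy.
squad_a = [(0.8, 0.2, 0.5)] * 4
squad_b = [(1.0, 0.25, 0.5)] * 4
wins = sum(engagement(squad_a, squad_b) for _ in range(2_000))
print(f"Squad A wins {wins / 2_000:.0%} of engagements")
```

Extending this sketch to track who was hit, and how badly, is what turns the win rate into the casualty estimates described above.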

For readers who are interested, our research has much more detailed information about this process.^{5} Perhaps the most important takeaway though is that points-based systems and small-arms combat modeling can yield different assessments.^{6} Points-based systems could indicate that two squads are equal, placing in the 50th percentile when averaged across all drills. Combat modeling, however, might indicate a 2-to-1 advantage between the squads because of differences in variability and how discrete skills affect the outcome. While one squad might excel in reloading, the other squad might excel in first-hit time. The combat modeling approach fully embraces the *Force Design 2030* concept that the faster shooter has a decisive advantage. Ultimately, through this technique, instructors and analysts could work together to enhance the quality of interpretations of marksmanship data.

1. GEN David H. Berger, USMC, *Force Design 2030* (Washington, DC: Headquarters U.S. Marine Corps, 2020).

2. U.S. Marine Corps Training and Education Command makes this claim: “the previous course of fire has been largely unchanged since 1907,” www.tecom.marines.mil/ARQ/.

3. LCDR Adam T. Biggs, USN, and Dale A. Hirsch, “Using Monte Carlo Simulations to Translate Military and Law Enforcement Training Results to Operational Metrics,” *The Journal of Defense Modeling and Simulation* 19, no. 3 (June 2021).

4. Paul A. Gagniuc, *Markov Chains: From Theory to Implementation and Experimentation* (Hoboken, NJ: John Wiley & Sons, 2017).

5. LCDR Adam T. Biggs, USN, et al., “Small Arms Combat Modeling: A Superior Way to Evaluate Marksmanship Data,” *Journal of Defense Analytics and Logistics* 7, no. 1 (September 2023).

6. Biggs et al., “Small Arms Combat Modeling: A Superior Way to Evaluate Marksmanship Data.”