Student Performance Rating Scale


How the SPRS works

Feedback is one of the most critical components of any training program. Without clear, objective, actionable feedback from their instructors, students can only ever hope to be as good as they can make themselves. For an instructor, and even more so for a large training company, employing a strong, effective, and consistent feedback process for student performance evaluations must remain a top priority. The Modern Warrior Project’s continued efforts to improve the quality of training across the industry while providing credible standards resulted in the creation of the Student Performance Rating Scale (SPRS). The SPRS has been deliberately designed to eliminate guesswork for both instructors and students so that every training iteration translates into usable feedback.

Some of the key benefits of the Student Performance Rating Scale include:

  • For Students
    • Simplified ratings provide clear feedback for quicker improvement
    • Ratings are based on carefully selected, relevant criteria so feedback is more meaningful
    • Ratings are objective and impersonal so students are more receptive to constructive critiques and less defensive when receiving feedback
    • Provides students with a better expectation of how they will be evaluated so they concentrate more on learning and worry less about being tested
  • For Instructors
    • SPRS can be used to evaluate any skills (shooting, first aid, survival, etc.) so the process is the same regardless of the course content
    • The 3-number scale is easy to use and understand so feedback is significantly more consistent from course to course, student to student, and across large instructional teams/companies
    • Rating criteria assist instructors in focusing on key elements of student performance during evaluations so they are more likely to identify important issues
    • A formalized feedback rating process means quicker train-ups for new instructors since the system is straightforward and easily adopted
  • For Both
    • SPRS seamlessly integrates Errors, Performance Metrics, Situational Awareness, and basic Proficiency Levels into a single Rating, significantly reducing the miscommunications and misunderstandings often experienced during dynamic, skills-based evaluations
    • Ratings carry over for students working with any organizations that use the SPRS, so students and instructors already speak a common language even when training together for the first time

A Simple 1-2-3 Scale

The SPRS is built on the proven 1-2-3 scale used by many of the military programs of instruction our team has attended and/or run. The primary difference is that the MWP has expanded and further defined the meaning of each number to deliver more useful information to students than the traditional “Needs Improvement – Good Enough – Great” approach. This transition came when our Director of Training attended a two-week course at the US Navy Test Pilot School and subsequently revisited the 1-2-3 system. After careful review, and drawing in part on the 1969 Cooper-Harper rating scale used by NASA to evaluate aircraft handling qualities, the MWP established the Student Performance Rating Scale and adopted it as the standard for all student training and instructor development reviews.


With the SPRS instructors can quickly communicate incredibly valuable feedback in the form of a single number (1-2-3), or if they choose, a single number/letter combination (1A/B – 2A/B – 3A/B/C) for greater insight.


Modern Warrior Project Student Performance Rating Scale SPRS V1

Defining Key Performance and Proficiency Terms

*All terms used in connection with Modern Warrior training can be found in the Common Course Glossary and on the Resources page of the official MWP website.


Now let’s take a look at some of the terms that make up the scale so students and instructors better understand what each rating means. Since the SPRS is designed to be used in any training capacity, the scale is specifically worded to apply to the widest variety of skills.


The first three terms are:

  • Acceptable – a level of performance that is capable of meeting the requirement
  • Tolerable – a level of situational awareness that is sufficient to support acceptable performance
  • Optimal – a level of performance that is highly desired and in excess of what is necessary to meet the requirement

The differences and relationships between these terms are critically important to recognize when using this approach to grading. As an instructor, you must have a clear understanding of what constitutes acceptable performance so that you can articulate and demonstrate it for students as well as coach them to meet this basic threshold. In addition, instructors should constantly challenge students to reach their peak, and doing so means having an idea of what “perfect” looks like. As for situational awareness, instructors are responsible for helping students understand what applying the skill in the real world looks and feels like, so students are prepared to divide their attention effectively between the task at hand and the need to constantly assess their situation for higher-priority tasks.


It’s obvious why evaluators rely on these terms, as they are the basis for grading students. Keep in mind, however, that students are not necessarily being graded on their choice of technique if it meets the requirement. Evaluators sometimes have a personal technique they prefer, and students who reach the objective through another valid technique can be arbitrarily, even subconsciously, downgraded. While it’s completely appropriate to address shortcomings of technique selection in the context of the overall scenario and adjust student ratings accordingly, personal preference should not factor into the evaluator’s ratings. The only exception is when a particular technique is specified by SOP or another regulatory source.


Example 1: Correct Impact on Rating for Technique Selection
The student attempts to perform an Emergency Reload on a handgun that has entered a slide lock condition and does so using the “Over the Top” method. While the student effectively reloads the firearm, they inadvertently place the weapon on safe, causing a dead trigger and a delay before they can reengage the threat. Even though the reload was done correctly, meeting the threshold for Acceptable Performance (a 2 rating), the student’s selection of that reload technique demonstrated a lack of understanding of their firearm, and therefore their situational awareness should not be considered Tolerable (a 1A rating).


Example 2: Incorrect Impact on Rating for Technique Selection
The student successfully enters cover and fires from the kneeling position with their outside knee in contact with the ground. Both rounds strike the threat in the heart. The student has demonstrated Optimal Performance free from errors (a 3A rating). However, the evaluator prefers that students keep their outside knee up when shooting from behind cover and issues a lower performance rating of Acceptable Performance with minor errors (2B). If there is no technical or tactical difference in the student’s choice of one technique over the other, there should be no difference in rating.


The next two terms are:

  • Accuracy – the measure of being correct (Yes or No)
  • Precision – the degree of accuracy when measured against an exact standard (Sliding Scale)

While these terms are most commonly associated with shooting skills, they can easily apply to any tasks that can be executed correctly without being done “perfectly.”


When pulling a car into a parking spot, you may consider it accurate to be simply in between the two white lines; however, the more evenly spaced you are on the left and right side of the car, the more precise your parking performance is.
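
The parking analogy can be sketched in code. This is a hypothetical illustration only; the gap measurements, the centering formula, and the function name are assumptions made for the sketch, not MWP standards.

```python
def rate_parking(left_gap_m: float, right_gap_m: float) -> tuple[bool, float]:
    """Score a parking attempt against the two white lines.

    Accuracy is a yes/no check: is the car between the lines
    (both gaps non-negative)? Precision is a sliding scale:
    1.0 means perfectly centered, approaching 0.0 as the car
    hugs one line.
    """
    accurate = left_gap_m >= 0 and right_gap_m >= 0
    if not accurate:
        return False, 0.0
    total = left_gap_m + right_gap_m
    if total == 0:
        return True, 1.0  # an exact fit counts as centered
    precision = 1.0 - abs(left_gap_m - right_gap_m) / total
    return accurate, precision

print(rate_parking(0.5, 0.5))   # centered: accurate and fully precise
print(rate_parking(0.2, 0.8))   # accurate, but less precise
print(rate_parking(-0.1, 1.1))  # over a line: not accurate at all
```

The key design point mirrors the definitions above: accuracy is binary (correct, yes or no), while precision only becomes meaningful once the attempt is already accurate.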

Final three terms:

  • Minor Error (ME) – any error that degrades the Student’s performance but does not prevent them from completing the task to standard
  • Critical Error (CE) – any error that prevents the Student from completing the task to standard but does not meet the criteria for a Fatal Error (e.g. failing to properly diagnose and correct a malfunction)
  • Fatal Error (FE) – any error that jeopardizes the safety of the Student performing the task, other Students participating in the training, or the Instructor or that demonstrates a fundamental lack of understanding of the task (e.g. flagging themselves or another individual with the muzzle of their firearm)

How to Use the Scale

As the evaluator makes their way through the process, the SPRS asks three simple questions to help delineate between ratings:

  1. Is the Student’s performance safe?
    • If No, the student cannot continue with the evaluation until they have been re-trained
    • If Yes, the evaluator asks the next question
  2. Is the Student’s performance acceptable and is their situational awareness tolerable?
    • If No, the student requires improvement in order to meet the basic performance threshold
      • The student has not “passed” the evaluation
    • If Yes, the evaluator asks the next question
      • The student has “passed” the evaluation
  3. Is the Student’s performance optimal?
    • If No, the student has demonstrated the ability to meet the requirement but there is room for improvement
      • The student has “passed” the evaluation
    • If Yes, the Student has demonstrated a level of performance that is highly desired and in excess of what is necessary to meet the requirement
      • The student has “passed” the evaluation with distinction
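The three-question flow above can be sketched as a short function. The mapping of outcomes to the numbers 1, 2, and 3 follows the ratings used in the examples elsewhere in this article; the function name, inputs, and outcome strings are illustrative, not part of the SPRS itself.

```python
def sprs_rating(safe: bool, acceptable: bool, tolerable: bool, optimal: bool):
    """Walk the three SPRS questions and return (rating, outcome).

    A rating of None means the evaluation stops: the student must
    be re-trained before continuing.
    """
    # Question 1: Is the student's performance safe?
    if not safe:
        return None, "re-train before continuing the evaluation"
    # Question 2: Is performance acceptable and situational awareness tolerable?
    if not (acceptable and tolerable):
        return 1, "not passed: requires improvement to meet the basic threshold"
    # Question 3: Is the student's performance optimal?
    if not optimal:
        return 2, "passed: meets the requirement with room for improvement"
    return 3, "passed with distinction"

print(sprs_rating(True, True, True, False))  # acceptable but not optimal: a 2
```

Note that safety is a gate, not a rating: an unsafe performance exits the flow entirely rather than producing a number.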

Example Student Performance and Corresponding Ratings

The following are meant to help instructors, evaluators, and students understand what may be considered appropriate ratings for certain demonstrated performance. The MWP recognizes that these are simplified examples to demonstrate the SPRS in use and that the final authority for ratings rests with the Lead Instructor.


For shooting-based evaluations:

  • 3A/B: Perfect shot placement within a center of gravity scoring box and proper use of cover
  • 2A/B: Shot placement within a center of gravity scoring box and a proper draw
  • 1A: Perfect shot placement within a center of gravity scoring box but failed use of cover
  • 1B: Shots outside of a center of gravity scoring box but a proper emergency reload
  • 1C: Shot placement within a center of gravity scoring box but on a “non-threat” target

For non-shooting-based evaluations:

  • 3A/B: Perfect application and placement of a tourniquet under stress
  • 2A/B: Adequate presentation of an arm bar while defending against a gun grab attempt
  • 1A: Proper construction of a survival shelter but within range of deadfall hazards
  • 1B: Failing to reach the correct destination during a land navigation movement
  • 1C: Failing to supply enough oxygen while attempting to build a fire

There is no requirement to use both the number and the letter when providing the rating. The number alone is enough to communicate the student’s basic performance level. Using the letter, however, will help the student understand what specifically about their performance needs to be improved. Remember also that Instructors do not “give” ratings; Students earn them.


Performance Ratings vs. Proficiency Levels

Training programs are designed to allow students to gain proficiency. The MWP structures all training programs around the following five Proficiency Levels, in order of precedence:

  1. Standard [S] – Minor Errors only; the Student can consistently complete the task safely and accurately while maintaining Situational Awareness appropriate to the conditions and without assistance, guidance, or correction from the Instructor
  2. Intermediate [I] – Occasional Critical Errors; the Student exhibits a thorough understanding of the elements and mechanics of the task but cannot complete the task to the required level consistently enough to establish their mastery of the skill
  3. Practiced [P] – the Student understands the elements and mechanics of the task but requires regular assistance, guidance, demonstration, or correction from the Instructor to prevent making Critical Errors
  4. Knowledge [K] – the Student has been taught the elements and mechanics of the task and can generally describe the task
  5. Demonstrated [D] – the Instructor has performed the task in its entirety to standard for the Student by first breaking the task down into its individual steps and then working up to the speed at which the task is normally expected to be conducted. Used to introduce new material, explain how a drill will take place, or when providing Student evaluations and feedback
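
Because the five levels form an ordered progression, they can be modeled as an ordered enumeration. The class name and the numeric values below are assumptions for the sketch (higher number means greater proficiency, the reverse of the article's precedence-first listing); only the level names and bracketed codes come from the text.

```python
from enum import IntEnum

class ProficiencyLevel(IntEnum):
    """The five MWP Proficiency Levels; a higher value here
    means greater proficiency (numbering is illustrative)."""
    DEMONSTRATED = 1  # [D]
    KNOWLEDGE = 2     # [K]
    PRACTICED = 3     # [P]
    INTERMEDIATE = 4  # [I]
    STANDARD = 5      # [S]

# Ordering lets code compare levels directly:
assert ProficiencyLevel.STANDARD > ProficiencyLevel.PRACTICED
```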

It is important to note that a student’s Proficiency Level is the most appropriate determination of their ability to reliably accomplish a task. Proficiency Levels are built on the SPRS but are not synonymous with a certain performance rating. The reason for this differentiation is that performance ratings represent a snapshot of a student’s behavior in time. Proficiency is an assessment of a student’s performance over time and as such must include a review of their consistency. Therefore, instructors should be cautious when assigning Proficiency Levels based off one isolated event.


A student struggles throughout the course with many basic elements of the skills developed and needs fairly constant attention and correction by an instructor. The student often makes critical errors and rarely demonstrates acceptable performance while training. During a capstone Basic Skills Evaluation (BSE) the student performs at the 1 level on their first couple of attempts and finally reaches the 2 level on their final attempt. Although the student has technically “passed” the BSE, and a rating of 2 can indicate the student is performing to Standard [S], their overall conduct across the entire course strongly indicates that their actual Proficiency Level is likely Practiced [P].

Performance Rating to Proficiency Level Crosswalk

Instructors must also remember that a student’s mental and physical state can play a considerable role in their performance. For example, a student that routinely performs at the 3 level and has clearly established their mastery of a particular skill may be distracted by emotional stressors or suffering from an illness and struggle to reach the 2 level for a period of time.


In order to form a more accurate picture of a student’s proficiency, it helps to apply the SPRS to each individual drill or practical exercise throughout the course and then review the picture it paints in conjunction with the outcome of the Basic Skills Evaluations.
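
As a hypothetical illustration of that drill-by-drill review, per-exercise ratings can be tallied into a simple summary for the Lead Instructor. The function name and the "share at standard" statistic are assumptions made for the sketch; the article is explicit that no formula replaces the Lead Instructor's judgment, so the output is an aid, not an automatic Proficiency Level.

```python
from collections import Counter

def summarize_ratings(ratings: list[int]) -> dict:
    """Tally per-drill SPRS numbers (1-3) across a course.

    Returns a count of each rating and the share of drills
    performed at the 2 level or better, as raw material for
    the Lead Instructor's proficiency assessment.
    """
    counts = Counter(ratings)
    at_or_above_2 = sum(1 for r in ratings if r >= 2)
    return {
        "counts": dict(counts),
        "share_at_standard": at_or_above_2 / len(ratings),
    }

# The struggling student from the BSE example: mostly 1s, one late 2.
print(summarize_ratings([1, 1, 1, 1, 2]))
```

A summary like this makes the consistency question concrete: a single late 2 stands out against a course full of 1s, supporting a lower Proficiency Level than the final attempt alone would suggest.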

Ultimately, there is no universal formula for making the final assessment of a student’s proficiency. That responsibility will always remain in the capable hands of the Lead Instructor, guided by their experience and judgment.


One final note – Every student is always capable of making an error which can affect safety! Do not get complacent.
