Why The Army’s Officer Evaluation System Needs A Complete Overhaul
When the revamped officer evaluation system was introduced last year, Army Human Resources Command was very clear about its intent: eliminate years of overinflation in officer evaluations. In briefings held all over the world, officers (and now noncommissioned officers) were told there wasn’t a clear way to differentiate between officers and “identify talent.” Indeed, the Army had a problem: It couldn’t tell which officers were better than the others.
Screenshot via U.S. Army “Revised Officer Evaluation Reports 1 APR 14 Implementation” PowerPoint presentation.
In today’s Army, this is a big problem to have. Despite a drawdown and downsizing of the force, the Army is struggling to recruit and retain a “force of the future.” As David Barno and Nora Bensahel note in their recently published Atlantic piece, the Army is suffering from a “brain drain,” losing some of its best talent to the private sector. They argue that a rigid promotion system is outdated for the millennial generation. And the Senate agrees, as it is now reviewing the military promotions and health care systems. It makes sense that the Army would want to know who its high performers are so that it can make targeted efforts to select and retain those individuals.
But knowing that helpful nudges would not rein in an inflated system, the Army decided instead to automate it. In 2014, it restricted raters and senior raters to awarding the top block to no more than 49% of rated officers and 24% of noncommissioned officers in each rank over the course of their rating careers. Centralizing all active-duty and reserve evaluations in one place, it assigned each rater a profile to manage, so that raters would not even have the option of exceeding their given percentages. In essence, the Army created a scoresheet for each evaluator detailing how many and what kind of grades they doled out, creating some accountability and ensuring they weren’t giving As to more than half of their rated population.
As anyone with even a modest background in math could tell you, the numbers game becomes critical. Say you’re a captain being rated by a newly promoted major who has evaluated only one other captain in his career. Neither of you can receive the top block, regardless of actual performance: awarding even one top block out of two rated captains would be 50%, breaking the 49% rule.
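The arithmetic behind that scenario can be sketched in a few lines. This is my own hypothetical illustration, not the Army’s actual profile software; only the 49% cap itself comes from the policy described above.

```python
# Hypothetical sketch of the rating-profile math. The 49% cap is from the
# Army's 2014 policy; the function name and numbers are illustrative only.

def max_top_blocks(total_rated: int, cap: float = 0.49) -> int:
    """Most top blocks a rater can award without exceeding the cap."""
    return int(total_rated * cap)  # truncates toward zero

# A major who has rated only two captains can award zero top blocks:
# 1 of 2 would be 50%, over the 49% limit.
print(max_top_blocks(2))   # 0
print(max_top_blocks(3))   # 1 -- the first top block only becomes legal at three rated officers
print(max_top_blocks(10))  # 4
```

The point of the sketch is that small profiles behave harshly: until a rater has evaluated at least three officers of a given rank, the cap mathematically forbids any top block at all.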
Expecting a backlash from rated officers across the force, the Army has said that blocks won’t carry as much weight when evaluations go before boards; instead, narrative comments will reign supreme. Of course, to prevent subversion of the system, raters are also banned from writing that they would have given an officer a higher rating had they been allowed.
Ultimately, this makes for quite the convoluted system. And if the true purpose of this change is to figure out who the top performers in our Army are, it will undoubtedly fail.
The first issue is that the system demands that raters manage their profiles well. Some will argue that we should expect our leaders to handle something as “simple” as a rating profile. But rating profiles accumulate over the course of an entire career around an event that happens only once a year; they are tough to manage, and when a rater stumbles, it is the rated soldier who ends up punished.
More and more conversations will be had about how a person did an excellent job during the rating period but couldn’t be given a high rating because of the rater’s restrictions. This can happen in two ways. First, raters may mismanage their profiles, handing out top blocks too early or to whichever officers happened to be evaluated first. Second, raters who manage their profiles well and think long term may hold top blocks in reserve for potential leaders worthy of a strong evaluation down the road.
These ratings are problematic, especially when the system is viewed through the lens of social psychology. Evaluations serve as a kind of reward, or extrinsic motivation, for leaders. Researchers have found that such external rewards not only decrease intrinsic motivation, but are instrumental in the formation of habits as well. If the evaluation system were instilling good habits, this would not be a problem. Instead, the new system creates an inability to reward leaders who are producing quality work, which in turn may cause work production to decline. In essence, why strive for an “A” when you know you can only get a “B” no matter your performance? Some will argue that people should be motivated simply to serve as good leaders, and I agree; but even with the best intentions, the literature shows that such evaluations affect our subconscious perceptions whether we like it or not.
Let me be clear: There are quite a few things the new officer evaluation system does right. The forms are now broken down by grade, so that officers can be evaluated against grade-specific expectations and metrics. The new evaluation also aligns with earlier changes to leadership doctrine.
But even then, the very system of Army evaluations is outdated at its core. Army evaluations still follow an apprenticeship model, in which one person dictates both the standard and the grade. That is positive in that the evaluated leader has a clear idea of the metric of success, but negative in that it leaves considerable variability between raters and often rests on subjective measures.
There must be a better way. In an era of big data, the Army could gather multiple data points from multiple neutral observers. For example, after each exercise, whether a convoy or an airborne operation, the officer or noncommissioned officer could be evaluated in that position by subject matter experts, much as lanes are evaluated at the Joint Readiness Training Center and the National Training Center. We would then see not only what activities leaders were conducting, but how well they executed them. This would, in turn, give promotion and selection boards the opportunity to see whether appropriate tasks in key developmental positions were being executed, and how well someone performed in them.
Within the current system, we still don’t truly know who the best officers and noncommissioned officers are. If anything, we get a sense of who performed best in the right place, at the right time, in the right rating chain. Going forward, I expect more inconsistent evaluations than anything else. And until another change comes, we will remain as clueless as before.