There is a lot of excitement these days about “star” employees who have a disproportionate impact on key performance metrics. The idea is based on recent research suggesting that the distribution of job performance in many business units is highly skewed, with a select few “superstars” accounting for an outsized share of output (see, e.g., “The Myth of the Bell Curve: Look for the Hyper-Performers” and “Making Star Teams out of Star Players”).1, 2, 3 The classic illustration of this phenomenon comes from sports, where a small number of players are viewed as difference-makers and are highly sought after. But does the shape of the performance distribution matter to anyone besides sports teams?
It should! From an applied perspective, administrative decisions such as recruitment, salary, and performance reviews depend on the distribution of employee performance and on understanding the value employees bring to the organization. If star employees produce 2, 3, 5, or even 10 times the value of “typical” employees, then an organization should identify who they are and do all that it can to support their continued productivity and entice them to stay.
To identify star performers, it is necessary to define what job performance really means in various business units, and then to measure it at the level of individual employees. There are a number of factors to consider:
- Performance definitions and metrics should reflect behavior, not situational factors. Everyone has heard the classic example of sales success being a function of territory covered as well as the sales behaviors of the salesperson, but what about a manager whose success is driven by greater access to organizational resources than peers enjoy? Are your star performers really just benefiting from situational advantages?
- Performance should be defined broadly, drawing on many indicators. Although it may be tempting to focus exclusively on bottom-line metrics, it is important to consider the totality of relevant employee performance. Do you really want a star performer who is highly productive but may also cause conflict or dissent in the work group?
- The full range of performance should be included, not just extreme events. Employees behave in ways that contribute to (or detract from) organizational goals every day. Counting only rare outcomes/events (e.g., awards, clients signed) as performance may provide a different picture than a focus on what employees are typically doing. A corollary to this idea is that particularly exceptional instances of performance may be as much a product of luck and circumstance as of the individual’s ability, and the star employee on a given metric one year may not be the star employee the next. Including the full range of performance may help build a more consistent picture.
- Performance metrics should reflect a common opportunity to perform. For instance, if all employees are evaluated on the number of team projects they lead, but only a few are actually given the opportunity to lead one, the resulting metrics will be skewed by opportunity rather than ability. Alternatively, a broader focus on activities that every employee performs may present a different picture of performance, more of an “apples to apples” comparison. By extension, this also suggests that comparisons of performance metrics should focus on individuals who are performing comparable jobs.
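The opportunity-to-perform point above can be sketched with a tiny, purely hypothetical example. The success rate and opportunity counts below are invented for illustration; the only claim is the arithmetic itself:

```python
# Hypothetical sketch: two equally capable employees evaluated on the raw
# count of team projects led. The success probability when leading is the
# same for both by construction; only the opportunities offered differ.
success_rate = 0.7                 # assumed probability of leading a project well
opportunities = {"A": 10, "B": 2}  # assumed chances each employee is given

for name, chances in opportunities.items():
    expected = success_rate * chances
    print(f"{name}: expected {expected:.1f} projects led out of {chances} opportunities")

# A's raw count comes out five times B's even though per-opportunity
# performance is identical: the metric reflects access, not ability.
```

A rate-based metric (successes per opportunity) would rank the two employees identically; the raw count manufactures a “star” out of unequal access.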
So, what does all this mean?
Rather than taking a firm stance on whether star performance is a widespread phenomenon, we advocate deep thought about what performance is for your organization.4, 5 It is also important to keep in mind that the shape of the performance curve in your organization will likely depend heavily on how performance is defined and measured. If your system measures and recognizes only stellar but relatively rare events, then you are bound to identify some employees as stars. Conversely, if your system reflects the totality of what counts as a contribution to the organization, you may find that many people are highly competent; they just exhibit different patterns of strengths and weaknesses.
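As a rough illustration of this last point, the simulation below measures the same hypothetical workforce two ways: averaging everyday contributions versus counting only rare “big win” events. All numbers (ability spread, daily noise, the threshold for a “big win”) are invented assumptions, not data from any organization; the point is only that the second metric comes out far more right-skewed than the first:

```python
import random
import statistics

random.seed(42)

n_employees = 1000
n_days = 250  # roughly a working year

totality_scores = []    # average of everyday contributions
rare_event_counts = []  # count of rare "big win" days

for _ in range(n_employees):
    skill = random.gauss(100, 10)  # underlying ability, roughly bell-shaped
    daily = [random.gauss(skill, 25) for _ in range(n_days)]  # noisy day-to-day output
    totality_scores.append(sum(daily) / n_days)
    # A "big win" is counted only when a single day clears a high bar,
    # so luck plays a large role in whether any given day qualifies.
    rare_event_counts.append(sum(1 for d in daily if d > 160))

def skewness(xs):
    """Sample skewness: 0 for symmetric data, positive for a long right tail."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

print(f"totality skew:   {skewness(totality_scores):.2f}")   # close to 0: roughly bell-shaped
print(f"rare-event skew: {skewness(rare_event_counts):.2f}")  # strongly positive: a few "stars"
```

The same simulated people look bell-curved under one measurement scheme and star-dominated under the other, which is the sense in which the shape of the curve reflects the measurement system as much as the workforce.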
About the Authors
Dr. Adam Beatty is a Research Scientist in HumRRO's Assessment Research and Analysis Program.