By Dan Gephart, November 13, 2019
John Horton knows which rating you gave your last Uber driver.
Horton isn’t a mind reader. He’s a professor at New York University, and he studies online marketplaces. His research found that most Uber/Lyft customers give their drivers a 5-star rating, regardless of the quality of the ride, the choice of music, or the stink of the car. Online publisher Mic recently broke the study down: “(P)eer-to-peer apps are designed to induce customer guilt and thus promote rating inflation. The act of sitting in a car with your service provider, the study found, humanizes them.”
I have a similar penchant for overly positive ratings when it comes to Goodreads. The social media app allows you to track the books you’re reading, have read, or want to read, and to rate and review those books, then share that information with others. Like Uber and Lyft, it uses a five-star rating system. As the husband of an author, I know how much work goes into researching, writing, editing, and revising a book, and it sways my Goodreads ratings. Absorbing, engaging, well-written page-turners that I want to read again and share with the world? That’s easy – five stars. Books that weren’t bad, but quickly forgettable? Five stars. Did the book fall flat, put me to sleep, or take a huge effort to even finish? Five stars, five stars, and five stars. Basically, if your book got four stars from me, you might want to put down your pen before you hurt somebody.
Economists, social scientists, and tech experts have been raising concerns about five-star rating systems for the last several years. A recent Harvard Business Review article stated: “(W)hile simple five-star systems are good enough at identifying and weeding out very low-quality products or suppliers, they do a poor job of separating good from great products.”
There is an inherent problem with five-star ratings, whether they are being used to select restaurants or book travel. And that flaw is heightened when the reviewer and the reviewed have formed a human connection. Yet when it comes to measuring the work of federal employees, many agencies still use a five-step performance rating system. And few situations are as humanizing as a performance review.
It’s not as if we don’t have a problem with poor performance in the federal workplace. In the latest Federal Employee Viewpoint Survey, only 36 percent of non-supervisory employees believed that appropriate steps are taken to deal with poor performers. And this problem has been around a lot longer than the FEVS. When looking through old MSPB reports for my recent And a Word With … interview with James Read, I came across the agency’s Federal Supervisors and Poor Performers, submitted to the President and Speaker of the House in 1999. The report’s executive summary states the following:
Federal employee surveys and other indicators over at least the last 18 years suggest that most employees, including supervisors themselves, judge the response to poor performance to be inadequate.
FELTG training attendees know that in any given year only three or four percent of removal actions (aside from suitability, probationary, or other less-common removals) are performance-based, while the remaining removals are conduct-related. Even a math-challenged Training Director can tell you those statistics are out of whack with 40 years of concern about performance.
There are dozens of reasons why poor performance problems continue to flourish seemingly unabated, and five-step performance systems probably won’t make anyone’s top five of those reasons. Not that there aren’t other reasons to oppose five-step performance systems. They can even lead to overturned performance actions, as FELTG Past President Bill Wiley has explained previously.
Here’s the issue: Five-step performance systems offer sympathetic supervisors a gray area, giving them an out on a tough decision and allowing performance problems to linger. In the tech world, five-star rating systems fail to separate the good from the great. In the federal workplace, these systems likewise fail to separate employees who are successfully meeting their job requirements from those who aren’t.
In a five-step system, the third level is usually “fully successful” and the second level is usually “minimally successful.” As Barbara Haga points out during the Performance Management portion of her three-day Advanced Employee Relations course, an employee can be rated at Level 2 for his entire federal career and a performance-based action cannot be taken. [Side note: Don’t miss Barbara’s upcoming Advanced ER sessions in New Orleans or Atlanta.]
Why should we allow an employee who is not fully successful to continue working at that level with no apparent end? Oh, he won’t get any step increases. And he may lose some retreat rights in RIFs. Do you think that matters to the coworker watching this minimal performer do the absolute minimum?
By the way, if you did not give your Uber or Lyft driver a 5, kudos to you. You are not an uncaring human. In fact, according to Horton, the most altruistic customers are the ones who give honest feedback. As a result, they improve the rides for everyone.
So supervisors: Next time you’re making a decision on performance ratings, be as honest as possible, and improve the employment ride for the rest of your employees. Gephart@FELTG.com