Measuring employee performance is one of the classic challenges of being a manager. How do you evaluate job performance? More importantly, how do you quantify and defend that evaluation to internal stakeholders?

For years, programmers have struggled with this question. There are plenty of examples of companies measuring employees on the wrong thing, such as hours in the office. (And that doesn’t even take into account “Conway’s Law.”)

“Virtually any objective measurement you can apply to the software development process can be ‘gamed’ into submission,” writes developer and blogger Nick Hodges.

What you need, apparently, is a “Lime Equation.”

SumAll CEO Dane Atkinson developed the “Lime Equation” while he was managing a bar, to help him determine which employees might be stealing. As it turned out, there was a correlation between the amount of money the bar brought in during a shift and the number of limes used.

“So if the sales to limes ratio didn’t match up, that meant something was wrong with the theory, or something was wrong with the bar,” Atkinson explains. By moving staffers around between shifts, he could discover which shifts didn’t have a proper lime correlation, and so identify the culprit.
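The shuffle-and-compare idea can be sketched in a few lines. This is only an illustration of the ratio check Atkinson describes, not his actual method; all the shift names and numbers below are hypothetical.

```python
# Hypothetical per-shift data: (sales in dollars, limes used).
shifts = {
    "Mon": (1200, 48),
    "Tue": (1500, 60),
    "Wed": (900, 70),   # far fewer sales per lime than the other shifts
}

# Expected sales-per-lime ratio, estimated from shifts you trust.
baseline = 1200 / 48  # 25 dollars of sales per lime

def flag_anomalies(shifts, baseline, tolerance=0.2):
    """Return shifts whose sales-to-limes ratio strays from the baseline."""
    flagged = []
    for name, (sales, limes) in shifts.items():
        ratio = sales / limes
        if abs(ratio - baseline) / baseline > tolerance:
            flagged.append(name)
    return flagged

print(flag_anomalies(shifts, baseline))  # ['Wed']
```

Rotating staff between shifts and re-running the check is what narrows the anomaly down to a person rather than a shift.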

The “Lime Equation” also helped Atkinson avoid having to be too heavy-handed in terms of security, which doesn’t work too well anyway. “These invasive measures have two consequences,” he writes. “First, employees figure out a way around all of them. Second, these half-baked security schemes tell bar staff that you don’t trust them, and it becomes a self-fulfilling prophecy. They think, ‘Well, if you think I’m stealing anyway, I’m going to do it and get away with it.’”

Of course, in a who-bells-the-cat kind of way, Atkinson admits that the hard part of the “Lime Equation” for IT development is actually coming up with a relevant, useful proxy. (Number of pizza boxes and energy drinks in the trash?) That part is pretty much up to you, but here are some common IT productivity metrics—and what’s wrong with them:

  • Lines of code written per day? Okay, but doesn’t that reward the person who writes verbose, buggy, spaghetti code?
  • Number of working routines written? Well, it certainly helps to have the “working” part in there, but again, it penalizes the people who write large routines or the people who happen to be in the design or debugging phases. Besides which, how easy is the software going to be to maintain in the future?
  • Lines of documentation written? This would hopefully help with the maintenance issue, but doesn’t guarantee you get any working code.
  • Smallest number of errors and crashes per day? Possibly, and there certainly are organizations that grade their IT staff by the number of complaints they get. But what about the people whose code isn’t used often or who have particularly clueless users?

Atkinson mentions using, at one startup, the metric of total email volume and the ratio of internal to external email messages. “If email volume for an employee dipped from 500 in March to, say, 200 in April, I knew something was up,” he writes. “The change in numbers was cause for a conversation in which I could diagnose the problem and help the person get back on track.”
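A dip check like the one Atkinson describes is trivially automatable. The sketch below is an illustration with made-up names, counts, and a made-up 40 percent threshold; the point is only that a large relative drop, not the absolute number, triggers the conversation.

```python
# Hypothetical monthly email counts per employee.
email_volume = {
    "alice": {"March": 500, "April": 200},  # a 60% drop
    "bob":   {"March": 480, "April": 470},
}

def needs_conversation(history, drop_threshold=0.4):
    """Flag employees whose volume fell by more than drop_threshold
    between the last two recorded months."""
    months = list(next(iter(history.values())).keys())
    prev, curr = months[-2], months[-1]
    return [
        name for name, counts in history.items()
        if (counts[prev] - counts[curr]) / counts[prev] > drop_threshold
    ]

print(needs_conversation(email_volume))  # ['alice']
```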

Actually, Atkinson writes that he prefers to use a measure that isn’t a desirable outcome in itself.

“The secret metric could change over time,” he explains. “A company struggling with retention could use LinkedIn page refreshes to gauge morale. If people are constantly using the site, they’re probably looking for a new job. Alternatively, you could look at your job referral program. If job referrals are high, current employees likely have confidence in your company. If it’s low, people might not want to draw their friends into the hell they see around themselves.”

Moreover, Atkinson writes, it’s important to keep the metric, whatever you end up using, a secret. “Instead of asking employees to tell me how they were being productive—a charade of a conversation that encourages exaggerations—I could just look at the hard email volume captured in numbers,” he writes. “Instead of hearing people’s stories about what was happening, I saw what was happening.”

That’s because, as Hodges describes, people naturally change their behavior based on how they’re being measured, whether you’re using a “Lime Equation” or a more direct key performance indicator (KPI).

There’s even a name for it: the Cobra Effect, after the bounty the colonial government in India supposedly offered for dead cobras. The government found that people were actually breeding cobras so they’d have more to bring in.

“A salesman judged on quarterly sales volume, for example, may be tempted to book large orders on the last day of the reporting period that he knows will be canceled or returned later just to boost numbers,” writes Craig Berman in AZ Central. “Others may pressure customers in ways that reduce the chances of repeat business and hurt future growth. Make sure your metrics reflect how your business is doing and not how well your employees can guess what you’re looking for and manipulate the data accordingly.”

It’s also important to make sure that the metric you’re watching is actually correlated with success, not just one that’s easy to collect, warns Michael Mauboussin in Harvard Business Review. He describes a four-step process for finding a useful metric:

  1. Define your business’ governing objective.
  2. Develop a theory of cause and effect to assess presumed drivers of the objective.
  3. Identify the specific activities that employees can do to help achieve the governing objective.
  4. Evaluate your statistics.
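The last step can be as simple as checking whether the candidate metric actually moves with the outcome you care about. The sketch below uses a hand-rolled Pearson correlation on made-up numbers; both series and the example labels are hypothetical.

```python
# Toy data for step 4: does the candidate metric track the governing objective?
metric  = [10, 12, 9, 15, 14, 8]          # e.g. weekly code reviews completed
outcome = [3.1, 3.4, 2.9, 4.0, 3.8, 2.7]  # e.g. features shipped that week

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(metric, outcome)
print(round(r, 2))  # high positive correlation: the metric tracks the outcome
```

A metric with a correlation near zero is exactly the “easy to get but meaningless” trap Mauboussin warns about.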

Whether you’re using a “Lime Equation” proxy or a KPI, be sure to include the last step, because you may be measuring the wrong thing.

Meanwhile, you can try to figure out what “Lime Equation” your bosses are using for you.
