ManagementSpeak: To get a fresh perspective, our IT department is starting a Best Practices review of unrelated industries.
Translation: Once our employees see how burger-flippers work, we won’t have to worry about work environment or employee retention.
KJR club member Steve Johnson provides a fresh perspective.
Month: February 2004
Measuring service
Despite the best (or perhaps worst) efforts of the business and IT press, words do have meaning and, depending on the word, at least a small population of loyalists who remember the meaning and cringe when it’s ignored in favor of some other, vaguer usage.
Take “Return on Investment” — ROI. Originally a collection of related mathematical formulas designed to take into account the impact of interest rates, depreciation, taxes and other factors on a financial investment over time to establish its monetary value in current dollars, ROI is now usually used to mean “good.” What had been a precise meaning has become indistinct.
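The original machinery the paragraph alludes to — discounting future cash flows back to current dollars — can be sketched with a minimal net-present-value calculation. The figures here are made up purely for illustration:

```python
def npv(rate, cash_flows):
    """Net present value: discount each year's cash flow to current dollars.

    cash_flows[0] is the up-front investment (negative); later entries
    are the returns expected in each subsequent year.
    """
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Illustrative numbers only: $100k invested now, $40k back per year for 3 years.
flows = [-100_000, 40_000, 40_000, 40_000]
print(f"NPV at 8%: ${npv(0.08, flows):,.2f}")
```

A positive NPV means the investment beats the chosen discount rate in current dollars — a precise statement, as opposed to the vague "good" the term has drifted toward.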
In IT we have our own culprits. This week’s stalking horse is “service level.” Because there are those among us who equate additional syllables with increased impressiveness, “service level” is used far too often to mean “service,” as in, “we need to improve our service levels.”

Stop. Please.
Services are what you provide. If you need to improve a service you provide, please do so. Service levels are a technique for measuring your success at providing them. When you say you need to improve service levels you’re saying the measure is what matters rather than the mission.
Not that service level measurement is a bad technique. Quite the opposite: It’s a workhorse method for assessing how well an organization meets its commitments in delivering a service. So just in case you haven’t been inculcated into the mysteries of service measurement, here’s a quick tutorial.
A service level is a two-part measure. That’s what makes it so useful. The first part defines some service target you’re trying to achieve. Perhaps it’s the response time for a call to the help desk that’s been escalated. You might set a service target of two hours.
It isn’t reasonable to imagine you’ll meet or beat that target every time. The world happens and events intervene; the demand for escalation services is determined stochastically, not deterministically. Which leads to the second part of a service level’s definition: How often you meet or beat the target, itself set as a target. Which is to say, you might establish a service level for help desk escalation that says users will get a callback or visit within two hours 95% of the time.
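The two-part measure described above reduces to a few lines of arithmetic. This is a minimal sketch with invented escalation times; the names and figures are illustrative, not from any real help desk:

```python
# Hypothetical escalation response times, in hours.
response_times = [1.2, 0.8, 3.5, 1.9, 1.1, 2.4, 0.6, 1.7, 1.3, 0.9]

TARGET_HOURS = 2.0      # part one: the service target
ATTAINMENT_GOAL = 0.95  # part two: meet or beat the target 95% of the time

met = sum(1 for t in response_times if t <= TARGET_HOURS)
attainment = met / len(response_times)

print(f"Attainment: {attainment:.0%} (goal: {ATTAINMENT_GOAL:.0%})")
print("Service level met" if attainment >= ATTAINMENT_GOAL else "Service level missed")
```

With these sample numbers, 8 of 10 escalations beat the two-hour target, so attainment is 80% and the 95% service level is missed — exactly the kind of precise pass/fail statement the two-part structure is designed to produce.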
This is a useful way to measure quality of service delivery. The problem is that the establishment of service levels has become an unquestioned tradition in IT: That we should define service delivery goals in terms of a service target and a commitment on how often we’ll meet or exceed it is assumed rather than decided upon. Which is to say, IT now starts with the measure and works backward to establish the organizational goal. And once you establish service levels, what could be more natural than to establish a service level agreement (SLA) … a contract … with your business counterparts, formalizing your commitment.
SLAs are popular among IT practitioners. Every survey I’ve seen assessing their utility among business users of IT services suggests they’re pointless — something IT wants to do and business users go along with, right up until they don’t get what they want when they want it. Also … note that when you establish a formal contract of any kind with your business counterparts you’re defining your relationship with them to be that of a service provider with internal customers, which is an invitation to be outsourced.
Should you be sufficiently adventurous that you don’t want to automatically define your goals in terms of service levels just because that’s what everyone else does, here’s an alternative: Define your goals in terms of continuous improvement. That leads to a different, statistical pair of measures: The mean and standard deviation. Which is to say, measure response and calculate both the average response time and the extent to which responsiveness varies from instance to instance. Establish this as a baseline and decide how much the average should improve and variability diminish over time.
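The continuous-improvement alternative can be sketched just as briefly. Here a baseline period is compared against a later one; improvement means both the mean and the standard deviation shrink. All the data is invented for illustration:

```python
import statistics

# Hypothetical response times in hours: a baseline period and a later period.
baseline = [1.2, 0.8, 3.5, 1.9, 1.1, 2.4, 0.6, 1.7, 1.3, 0.9]
this_quarter = [1.0, 0.9, 2.1, 1.4, 1.2, 1.8, 0.7, 1.5, 1.1, 0.8]

def summarize(times):
    """Return the average response time and its instance-to-instance variability."""
    return statistics.mean(times), statistics.stdev(times)

base_mean, base_sd = summarize(baseline)
cur_mean, cur_sd = summarize(this_quarter)

print(f"Baseline:     mean={base_mean:.2f}h  stdev={base_sd:.2f}h")
print(f"This quarter: mean={cur_mean:.2f}h  stdev={cur_sd:.2f}h")
print("Improving" if cur_mean < base_mean and cur_sd < base_sd else "Not improving")
```

Note the difference in what gets managed: a service level rewards staying under a fixed ceiling, while this pair of measures rewards steadily faster and more consistent service, with no ceiling to coast against.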
I don’t know if that’s the right goal for you or not. Perhaps it makes more sense for you to define a target and the frequency with which you’ll meet or exceed it.
What I do know is this: You have to start with the goal, not the measure. Otherwise you aren’t leading, just following tradition.