Last week, I shared how early on in my career I managed a technical support and customer service center and service was considered a cost center and a necessary evil. How times have changed, and how far most companies have come in changing that perception!
Continuing in that vein, this week I will share our different approach to measuring customer service at that time, and how there are better means of gauging various aspects of service today. (Note: I’m limiting this to live interactions because my time running a service center only saw the beginnings of self-service.)
I’m going to organize this around points in the typical customer service process:
- Accessibility (or how quickly a customer can connect with someone to solve their problem)
- Efficiency of the workforce
- Service quality
Each of these is distinctly important in the customer’s service journey and provides an opportunity to measure service performance.
Accessibility
My technical support and customer service teams only had two channels of engagement: telephone and email. We found ourselves regularly negotiating the funding needed to provide different service levels with the business unit; as a result, those service levels could change on a periodic basis. We primarily monitored two KPIs here: average hold time and abandonment rate (for telephone) and average response time (for email).
Not surprisingly, how quickly a customer is acknowledged on ANY channel–email, chat, or social media–is still a critical success factor for customer service. However, service levels must remain consistent (unlike the variations my team delivered). When service levels vary wildly, you are effectively punishing the customers who happen to call when the service center has consciously reduced staffing. Strive for consistency. And remember that 60% of customers report that even a one-minute hold time is too long.
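As a rough illustration, the two telephone KPIs above can be computed from a call log. The record fields here are hypothetical, not from any particular phone system:

```python
from statistics import mean

# Hypothetical call records: hold time in seconds, and whether the
# caller hung up before reaching an agent (abandoned the call).
calls = [
    {"hold_seconds": 35, "abandoned": False},
    {"hold_seconds": 80, "abandoned": True},
    {"hold_seconds": 20, "abandoned": False},
    {"hold_seconds": 95, "abandoned": True},
    {"hold_seconds": 45, "abandoned": False},
]

# Average hold time across all calls, answered or not.
avg_hold = mean(c["hold_seconds"] for c in calls)

# Abandonment rate: share of calls the customer gave up on.
abandon_rate = sum(c["abandoned"] for c in calls) / len(calls)

print(f"Average hold time: {avg_hold:.0f}s")   # 55s
print(f"Abandonment rate: {abandon_rate:.0%}")  # 40%
```

Note that abandoned calls still count toward average hold time here; some centers report the two populations separately, which is a policy choice worth making explicit before comparing numbers across teams.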
Efficiency
Thinking back to my days of managing a call center, this was probably the area we were most focused on. Greater agent efficiency, after all, would help drive greater accessibility for customers with lower call hold times and abandonment rates and faster email response rates. We focused on two metrics: the number of calls taken or emails responded to per day, and talk time for telephone agents. Secondary metrics included idle time and wrap-up time.
Efficiency is still a critical factor in determining how effective an agent is. But the approach now is to use those numbers in conjunction with other data to measure the more important outcome: how quickly was the customer’s situation resolved? One method is to measure First Call or Contact Resolution (FCR), or the percentage of times customer cases are closed during initial contact. FCR can be measured not only at the agent level but also aggregated by team, topic, or for the entire service center. Unlike the singular measurements of calls handled or talk time, a higher FCR rate helps to drive greater customer satisfaction (since customers want issues resolved quickly) and can also help to pinpoint particularly challenging topic areas for staff or individual agents in need of additional training.
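The FCR calculation itself is simple, and the aggregation by topic is where the training insight comes from. A minimal sketch, using made-up case records:

```python
# Hypothetical case log: each case notes whether it was closed on the
# first contact, plus topic and agent for aggregation.
cases = [
    {"agent": "A", "topic": "billing", "first_contact_close": True},
    {"agent": "A", "topic": "setup",   "first_contact_close": False},
    {"agent": "B", "topic": "billing", "first_contact_close": True},
    {"agent": "B", "topic": "setup",   "first_contact_close": True},
]

def fcr(case_list):
    """Percentage of cases closed during the initial contact."""
    return 100 * sum(c["first_contact_close"] for c in case_list) / len(case_list)

overall = fcr(cases)  # 75.0 for the whole center

# Aggregating by topic surfaces where agents struggle.
by_topic = {t: fcr([c for c in cases if c["topic"] == t])
            for t in {c["topic"] for c in cases}}
# billing resolves on first contact every time; setup only half the
# time, flagging it as a candidate for additional training material.
```

The same `fcr` function works unchanged when the filter is by agent or team instead of topic.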
When resolving an issue at the initial time of contact is not possible, the next important measure is TTR, or Time To Resolution. For simple issues, this might be the actual talk time or response time over email or chat; for more complex issues, it might be hours or days from the original customer contact through to case closure. Like FCR, TTR can be measured at the agent level as well as by topic, team, or department; also like FCR, it influences customer satisfaction and helps identify where complex issues are causing agent slowdowns due to lack of knowledge. It can also be an indicator of lower productivity if an agent has the same level of skill and knowledge as peers but a longer TTR.
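Because TTR spans minutes to days, it is easiest to compute from open and close timestamps and report in a single unit. A sketch with hypothetical resolved cases:

```python
from datetime import datetime
from statistics import mean

# Hypothetical resolved cases: timestamp of the original customer
# contact and of case closure.
cases = [
    {"opened": datetime(2023, 5, 1, 9, 0),
     "closed": datetime(2023, 5, 1, 9, 30)},   # solved in 30 minutes
    {"opened": datetime(2023, 5, 1, 10, 0),
     "closed": datetime(2023, 5, 2, 10, 0)},   # escalated, took a day
]

def avg_ttr_hours(case_list):
    """Mean time to resolution in hours, from first contact to closure."""
    return mean((c["closed"] - c["opened"]).total_seconds() / 3600
                for c in case_list)

print(avg_ttr_hours(cases))  # 12.25 hours
```

One caution with averages: a handful of long-running escalations can dominate the mean, so pairing average TTR with a median or a percentile view gives a fairer picture of typical agent performance.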
Service Quality
I am embarrassed to admit we did nothing to measure customer satisfaction back in my day. Absolutely nothing. We might receive an occasional email or letter commending an agent, but these were uncommon occurrences. Our method of validating service quality was to monitor agents’ call and email interactions–listening to recorded calls and auditing email and chat transcripts for quality. That said, supervisors were lucky to perform one monitoring session per agent out of the more than 100 interactions each agent handled weekly (less than 1%).
Agent monitoring remains important today. It’s a great indicator of product or service knowledge, customer service skills, and troubleshooting abilities. But it pales in comparison to the importance of the voice of the customer.
Today there are many standard means of validating customers’ perceptions of service quality as well as that of the overall business: among them CSAT, NPS, and, more recently, Customer Effort Score. Consider the merits of each; it might be that you use different surveys for different points in the customer journey. Can’t decide which to use? It’s okay. The bottom line is to regularly survey your customers and give them the opportunity to provide feedback. Otherwise, you are missing out on valuable input necessary to improve your service and business.
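Whichever survey you choose, the scoring is straightforward. NPS, for example, is the percentage of promoters (9–10 on a 0–10 scale) minus the percentage of detractors (0–6). A sketch with sample survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but neither bucket."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses to "How likely are you to recommend us?"
scores = [10, 9, 8, 7, 6, 10, 3, 9]
print(nps(scores))  # 4 promoters, 2 detractors of 8 responses -> 25.0
```

CSAT is even simpler (typically the share of responses in the top one or two boxes of a 5-point scale), which is part of why many teams run both: CSAT per interaction, NPS for the overall relationship.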
The Right Measurement For The Situation
In the modern service center, customer issues flow in by telephone, email, chat, and social media. As a result, there are many possible data points to measure throughout the course of customer engagement. It would be very easy to suffer from analysis paralysis; you must cut through the noise and focus on the important points.
Service can be divided into three critical areas: how quickly customers can reach the service center and engage; the overall efficiency of the service center; and the quality of service delivered. Most service centers are already monitoring these areas in some fashion. Take a moment to review exactly which KPIs are being scrutinized and if they are still giving you the critical information you need to drive service to the next level.