Customer satisfaction measurement (CSM or CSat)
Customer satisfaction measurement is the most common form of market research in business-to-business markets and for measuring service and product quality. It is often tied to quality and production measurement, such as TQM, Six Sigma and process control, rather than treated as straight marketing-based research. Many companies now simply use a single Net Promoter Score (NPS) as an alternative.
Before setting up a customer satisfaction programme, it is necessary to ensure that the organisation has the will to actually make changes for improvement; otherwise you will simply annoy customers by taking their time to collect information and then doing nothing with it.
Quality of Design versus Quality of Delivery
Customer satisfaction measurement has its roots in ideas about quality that come from an operational/production view of the business. The first idea is that quality is the gap between customers' expectations and their perceptions of the product or service they receive. This gap-based view of quality says that if you match or beat customers' expectations, and so leave them satisfied, you have achieved the necessary product quality. Some companies go beyond this and aim for 'customer delight' by exceeding customer expectations.
The second idea from operations management is that quality is conformance to a standard or specification. On this view, once the design is set, quality is about ensuring that the end deliverable to the customer meets that design. Consequently, from a production/operations point of view, customer satisfaction is about monitoring the quality of delivery of the product or service, the aim being to minimise production errors, saving money and keeping customers happy (see Quality Is Free by Philip Crosby).
Marketing and customer-focused researchers who carry out satisfaction research, on the other hand, tend to measure not just the quality of the delivered product or service but the whole relationship. This can create a gap between an operational perspective focused on conformance and research focused on broader 'halo' issues around feelings and perceptions of the brand.
In practice, customers are rarely able to separate the general feeling they have towards a product or service, shaped by brand expectations and marketing, from the actual delivery of the product in terms of its functional performance. Some care is therefore needed in defining what you are looking for from a customer satisfaction study, what you are going to use the information for, and whether alternative approaches might serve you better.
It has become common to short-circuit customer satisfaction measurement into a single Net Promoter Score question: "Out of ten, how likely are you to recommend this product or service to friends or colleagues?" Scores of 9 or 10 count as promoters and scores of 6 or less count as detractors. The net score (% promoters minus % detractors) is then used to indicate company performance. In practice, this is not necessarily the best measurement tool, and other forms of satisfaction measurement give better diagnostic information.
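The net score calculation above can be sketched in a few lines of Python; the function name and the sample ratings are illustrative, but the promoter/detractor cut-offs follow the standard 0-10 NPS convention described in the text.

```python
def net_promoter_score(ratings):
    """Return NPS as a percentage: % promoters (9-10) minus % detractors (0-6).

    Ratings of 7 or 8 are 'passives' and affect the score only via the total.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Illustrative sample: 5 promoters, 3 passives, 2 detractors
ratings = [10, 9, 9, 10, 9, 7, 8, 7, 5, 6]
print(net_promoter_score(ratings))  # → 30.0
```

Note that the single number discards the diagnostic detail the article goes on to discuss: two companies with the same NPS can have very different distributions of passives and detractors.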
Samples and customers
The background to satisfaction comes into particular focus when you consider who and what should be sampled. For statistical process control (SPC) of production under a total quality management (TQM) programme, the focus is on the output of the machine (or factory, or service operation), with a need to take a sample of items that have been produced and test each for conformance.
For customer satisfaction, the equivalent is sampling by product purchased or, for service industries, by service event. However, particularly in business markets, this can cause sampling problems: a single customer may buy several products or use a service several times. If you survey these customers repeatedly, you are likely to face rapidly diminishing response rates and soon have no data.
The compromise is to take a sample of customers and ask about their experiences over a certain period of time. Unfortunately for process control, this feedback typically reflects only the average view and misses the extremes that matter from an operational standpoint. It therefore matches much more closely the marketing approach to customer satisfaction as a relationship measurement.
For businesses, it is also very likely that you will have a small number of large customers and a large number of small customers. As you are trying to judge quality of delivery, interviews with larger customers clearly matter more than those with smaller customers, yet satisfaction measurement is typically biased toward the views of the many rather than the few.
Indeed, it is also likely that the way in which you deliver to your largest customers differs from the way in which you service smaller ones, for instance through an account team, specialist logistics, or custom builds. For this reason we typically advise a relationship approach to customer satisfaction, such as One-to-One Research, or win-loss analysis working bid by bid to measure business success.
Scales and measurement
Most customer satisfaction measurement is conducted using a fairly standard 4- or 5-point scale: Very Satisfied, Satisfied, (Neither), Dissatisfied, Very Dissatisfied.
Typically satisfaction is reported as the percentage of customers rating you as either Satisfied or Very Satisfied. Unfortunately, this tends to be quite a crude measurement: most companies score around 75-85%, and even poorly performing local government can easily reach 60%. The maximum we have seen is 92%.
The difficulty with such studies is that year-on-year improvements are very hard to spot. The accuracy of the design often means that changes of 1 or 2 percentage points are within statistical tolerance and show no real change, yet most companies would struggle to move the score by more than 1-2 points.
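A quick way to see why 1-2 point movements fall inside statistical tolerance is to compute the margin of error for a satisfaction proportion. This is a minimal sketch using the standard normal-approximation confidence interval; the 80% score and sample size of 400 are illustrative assumptions, not figures from the text.

```python
import math

def satisfaction_margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a proportion p_hat from n responses.

    Uses the normal approximation z * sqrt(p(1-p)/n), which is reasonable
    for the sample sizes and mid-range proportions typical of these studies.
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Illustrative example: 80% satisfied, 400 completed interviews
moe = satisfaction_margin_of_error(0.80, 400)
print(round(100 * moe, 1))  # → 3.9 (i.e. roughly ±3.9 percentage points)
```

With a typical sample of a few hundred interviews, the interval is around ±4 points, so a genuine 1-2 point year-on-year improvement is indistinguishable from sampling noise.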
You then face a second question: if you are at 80%, how do you get to 82%? Clever statistical techniques might show that improving delivery from, say, 60% to 65% would improve satisfaction, but then how do you measure and improve delivery?
For these reasons, we favour using attribute-and-level style approaches (see conjoint analysis) alongside satisfaction measures to get a more actionable set of data; pursued fully, this leads to techniques such as Quality of Service Review as a mechanism for monitoring customer satisfaction within customer relationships.
A second major factor is that satisfaction questionnaires are really about measuring and controlling dissatisfaction. If a customer is satisfied, the only thing you need to know is that they are happy. Many satisfaction surveys fail customers by asking happy customers lots of irrelevant questions. In a good questionnaire, we measure which standard the service reaches and whether the customer is happy with that standard; if they are not, we then ask for and analyse the reasons for dissatisfaction. Consequently, happy customers get a shorter questionnaire and a better questionnaire experience.
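The routing principle above, giving satisfied customers a short path and reserving diagnostic follow-ups for dissatisfied ones, can be sketched as simple skip logic. The question identifiers and the 0-4 scale coding are illustrative assumptions, not part of any specific survey platform.

```python
# Hypothetical 5-point scale, coded 0 (Very Dissatisfied) to 4 (Very Satisfied)
SCALE = ["Very Dissatisfied", "Dissatisfied", "Neither", "Satisfied", "Very Satisfied"]

def route_questions(rating):
    """Return the list of question IDs to ask, given a 0-4 overall rating.

    Satisfied customers (rating >= 2) get only the core question; dissatisfied
    customers are routed to follow-up diagnostics on the cause of the problem.
    """
    if rating not in range(len(SCALE)):
        raise ValueError("rating must be 0-4")
    questions = ["q_overall_satisfaction"]
    if rating <= 1:  # Dissatisfied or Very Dissatisfied
        questions += ["q_reason_for_dissatisfaction", "q_what_would_fix_it"]
    return questions

print(route_questions(4))  # → ['q_overall_satisfaction']
print(len(route_questions(0)))  # → 3
```

The design choice mirrors the text: the happy path is deliberately short, while the diagnostic burden falls only where there is something to diagnose.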
For help and advice on carrying out customer satisfaction projects contact firstname.lastname@example.org