How to Measure Customer Satisfaction
When to trigger CSAT surveys, how to calculate your score, and what to do with the results
Customer satisfaction is measurable at every stage of the customer journey - if you ask the right question at the right moment. This playbook covers the CSAT method, when to trigger surveys, how to segment results by user type, and how to turn low scores into retention wins.
Step-by-step process
Follow these steps in order for the best results.
Understand what CSAT measures (and what it does not)
CSAT (Customer Satisfaction Score) measures satisfaction with a specific interaction - a support resolution, an onboarding step, a feature experience. It is transactional and moment-in-time. It does not measure overall loyalty (that is NPS) or effort (that is CES). Use CSAT when you want to know: "How did customers feel about this specific experience?"
Pick your CSAT question and scale
The standard CSAT question is: "How satisfied were you with [experience]?" on a 1-5 scale where 1 = Very dissatisfied and 5 = Very satisfied. Your CSAT score is the percentage of respondents who rated 4 or 5. Always include one open-text follow-up: "What could we have done better?" - this is where you find the actionable information behind the score.
Identify your measurement moments
CSAT is most accurate when triggered immediately after a specific event. The four highest-value measurement moments for SaaS teams are: (1) After support ticket resolution - measures support quality. (2) After onboarding completion - catches early friction before it becomes churn. (3) After first use of a key feature - measures feature satisfaction at the moment of truth. (4) After a major product release - measures whether the update improved or degraded the experience.
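The trigger logic above can be sketched as a small event handler. This is a minimal illustration, not a prescribed implementation; the event names and the `send_csat_survey` call are hypothetical placeholders for whatever your analytics stack emits.

```python
# Map product events (hypothetical names) to CSAT touchpoints.
TRIGGER_EVENTS = {
    "support_ticket_resolved": "support",
    "onboarding_completed": "onboarding",
    "key_feature_first_use": "feature",
    "product_release_seen": "release",
}

def handle_event(event_name: str, user_id: str):
    """Fire a CSAT survey immediately after a measurement moment.

    Returns the touchpoint surveyed, or None if the event is not
    one of the four measurement moments.
    """
    touchpoint = TRIGGER_EVENTS.get(event_name)
    if touchpoint is None:
        return None  # not a measurement moment; no survey sent
    # In practice: send_csat_survey(user_id, touchpoint)  (hypothetical)
    return touchpoint
```

The point of the lookup table is that surveys fire on the event, not on a calendar, so each response can be attributed to one specific touchpoint.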
Calculate your CSAT score
CSAT = (number of responses rated 4 or 5) / (total responses) × 100. Only top-box responses count - 4 and 5 are "satisfied," 1-3 are not. A score of 80% means 80% of respondents were satisfied. Track this per touchpoint (support CSAT, onboarding CSAT, feature CSAT) rather than as one blended number - blended CSAT hides where the experience is breaking down.
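The formula above is simple enough to verify by hand; as a sketch, the top-box calculation looks like this (the sample ratings are illustrative):

```python
def csat_score(ratings: list[int]) -> float:
    """CSAT = (responses rated 4 or 5) / (total responses) * 100."""
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= 4)  # top-box only
    return satisfied / len(ratings) * 100

# Example: 8 of 10 respondents rated 4 or 5, so CSAT is 80%.
ratings = [5, 4, 4, 3, 5, 5, 2, 4, 4, 5]
```

Note that a 3 ("neutral") counts against the score, which is what keeps top-box CSAT a strict measure of satisfaction rather than an average.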
Segment by user type
A CSAT of 80% overall is meaningless without segmentation. Break down your score by plan tier, user role, company size, and account age. A common pattern: new users give high CSAT during onboarding (the product is novel) but older users give low CSAT on the same step (they have encountered the friction repeatedly). Different segments need different interventions.
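Segmentation is just a grouped version of the same top-box calculation. A minimal sketch, assuming each response carries the segment attributes (plan, role, etc.) as plain fields:

```python
from collections import defaultdict

def csat_by_segment(responses: list[dict], segment_key: str) -> dict:
    """Compute CSAT separately per segment (e.g. plan tier or role)."""
    buckets = defaultdict(list)
    for r in responses:
        buckets[r[segment_key]].append(r["rating"])
    return {
        seg: sum(1 for x in ratings if x >= 4) / len(ratings) * 100
        for seg, ratings in buckets.items()
    }

# Illustrative data: free users look happy, enterprise users do not.
responses = [
    {"plan": "free", "rating": 5},
    {"plan": "free", "rating": 4},
    {"plan": "enterprise", "rating": 2},
    {"plan": "enterprise", "rating": 4},
]
```

Here the blended CSAT would be 75%, while the per-plan view shows free at 100% and enterprise at 50% - exactly the breakdown the blended number hides.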
Read open-text responses by theme
The CSAT score ranks users by satisfaction. The open-text response explains why. Tag responses from your lowest-scoring segment by theme: slow performance, unclear UI, missing feature, billing issue, support quality. Count themes. The most common theme among low scorers is your highest-priority improvement. Fix it before the next quarter's measurement.
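Once responses are tagged, counting themes among low scorers is a one-liner with a counter. A sketch, assuming tags have already been applied to each response (the theme labels below mirror the examples above):

```python
from collections import Counter

def top_themes(tagged_responses: list[dict], max_rating: int = 2):
    """Count themes among low scorers (rating <= max_rating),
    most common first."""
    themes = [
        theme
        for r in tagged_responses
        if r["rating"] <= max_rating
        for theme in r["themes"]
    ]
    return Counter(themes).most_common()

tagged = [
    {"rating": 1, "themes": ["slow performance", "unclear UI"]},
    {"rating": 2, "themes": ["slow performance"]},
    {"rating": 5, "themes": ["missing feature"]},  # not a low scorer
]
```

The first entry of the result is your highest-priority fix; the 5-rating response is excluded because the goal is to explain dissatisfaction, not satisfaction.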
Follow up with every low scorer
A low CSAT score is not a data point - it is an intervention opportunity. When a user gives a 1 or 2, reach out personally within 24-48 hours. Acknowledge their experience, ask what happened, and explain what you are doing about it. Personal follow-up after low CSAT responses is one of the highest-ROI retention actions a SaaS team can take.
Key metrics to track
CSAT Score
% of respondents rating 4-5 on a 5-point scale. SaaS benchmark: 75-85% is good, 85%+ is excellent, below 60% needs immediate action.
CSAT by touchpoint
Satisfaction score broken down by interaction type (support, onboarding, feature). Track each separately - blending them hides where you are failing.
Low-score rate
% of respondents rating 1 or 2. Even if your overall CSAT is high, a rising low-score rate is a leading indicator of churn.
CSAT trend
Score over time. Flat CSAT while growing rapidly is a warning sign - satisfaction often drops as you scale if quality is not maintained.
Response rate
In-product CSAT targets 30-60%. Email CSAT targets 15-30%. Below 10% means wrong timing, wrong channel, or survey fatigue.
Common mistakes to avoid
Sending CSAT as a batch email on a monthly schedule instead of triggering it immediately after specific interactions.
Using a 1-10 scale for CSAT - a 5-point scale produces more consistent, comparable results.
Tracking one blended CSAT score instead of tracking separately by touchpoint - the blend hides where the experience is broken.
Not following up with low scorers - a 1 or 2 rating is a retention alert, not just a data point.
Reading the score but not the open-text responses - the score tells you who is unhappy, the text tells you why.
Measuring satisfaction without acting on it - customers who give feedback and see no change become detractors.
Ready to run the survey?
Mapster has a template and question library ready for this playbook.
Frequently asked questions
What is a good customer satisfaction score?
For B2B SaaS, a CSAT of 75-85% (percentage of positive ratings - 4 or 5 on a 5-point scale) is typical. Above 85% is excellent. Below 60% signals systemic satisfaction problems requiring immediate action. Compare against SaaS-specific benchmarks rather than cross-industry averages - consumer CSAT benchmarks are not comparable.
What is the difference between CSAT and NPS?
CSAT measures satisfaction with a specific interaction - transactional and immediate. NPS measures overall loyalty and likelihood to recommend - relational and long-term. Use CSAT after each customer touchpoint to diagnose what is driving satisfaction or dissatisfaction. Use NPS quarterly to track overall relationship health. They are complementary - CSAT tells you where to fix the experience, NPS tells you whether you are retaining trust overall.
How often should I send CSAT surveys?
Trigger-based, not calendar-based. CSAT should fire within 24 hours of a specific interaction: after support resolution, after onboarding, after feature use, after a product update. Avoid sending CSAT on a monthly schedule to your full user list - this produces low response rates and inaccurate data because users cannot connect the survey to a specific experience.
What is the difference between CSAT and CES?
CSAT asks "How satisfied were you with [experience]?" - measuring the outcome. CES (Customer Effort Score) asks "How easy was it to [complete task]?" - measuring the process. CES is a stronger predictor of churn because customers will tolerate an imperfect outcome if the process was effortless, but will churn over high-effort processes even when the outcome was acceptable. Use both: CSAT measures satisfaction with the result, CES measures friction in the journey.
How do I increase customer satisfaction?
Improve customer satisfaction by: (1) Measuring CSAT at specific touchpoints so you know exactly where satisfaction is lowest. (2) Reading open-text responses and tagging by theme - the most common theme is your highest-priority fix. (3) Following up personally with every low scorer within 24-48 hours. (4) Fixing the highest-impact issues and communicating the change to affected users. (5) Reducing effort with CES measurement - high-effort interactions depress satisfaction even when the outcome is acceptable.
More product playbooks
Product Strategy
How to Measure Product Market Fit
Learn how to measure product market fit using the Sean Ellis test, Superhuman framework, and survey-based scoring. Includes benchmarks, survey questions, and what to do at each score range.
Customer Loyalty
How to Improve NPS Score
Learn how to improve your Net Promoter Score with a step-by-step process - close the loop with detractors, act on passives, and turn promoters into advocates.
Retention
How to Reduce SaaS Churn
A step-by-step playbook for reducing SaaS churn - identify churn reasons with exit surveys, fix onboarding gaps, segment at-risk users, and build a retention system.
Onboarding
User Onboarding Best Practices
A step-by-step user onboarding playbook for SaaS - measure activation, identify drop-off with surveys, reduce time to first value, and build an onboarding that retains.
Product Strategy
How to Prioritize Product Features
A step-by-step playbook for feature prioritization - collect user feedback systematically, score features by impact and effort, and align your roadmap to your ICP.
Research
How to Do User Research
A practical user research playbook for product teams - when to use surveys vs. interviews, how to write unbiased questions, how to analyze results, and how to act on findings.
Run the surveys from this playbook
Mapster connects every survey response to a real user - plan, role, company size, and activity. Segment your results without a manual data import.
Get Started Free. No credit card required.