Survey Types: 6 Kinds of Surveys and When to Use Each
The type you choose determines the question, the scale, and the moment you send it.
Different types of surveys measure different things. Using the wrong kind gives you a real number that answers the wrong question. This guide covers the 6 main survey types used by SaaS teams - what each measures, what question to ask, when to send it, and what the result actually tells you.
6 survey types at a glance
Each kind of survey answers a distinct question. Match the type to the question - not to habit.
| Type | Measures | Scale | Cadence |
|---|---|---|---|
| NPS (Net Promoter Score) | Overall loyalty and likelihood to recommend | 0–10 numeric scale | Quarterly (relationship survey); also triggered at 30 days post-onboarding and 60 days before renewal |
| CSAT (Customer Satisfaction Score) | Satisfaction with a specific interaction or touchpoint | 1–5 scale; CSAT score = % of respondents rating 4 or 5 | Triggered immediately after support ticket resolution, onboarding completion, or first use of a key feature |
| CES (Customer Effort Score) | How much effort a task required - friction and difficulty | 1–7 agreement scale; CES score = average across all responses | Triggered after a setup wizard, support ticket, checkout, or any multi-step task where friction is a known risk |
| PMF (Product-Market Fit Survey) | Whether your product is essential to users | 3 options: Very disappointed / Somewhat disappointed / Not disappointed | Once you have 40+ active users; repeat after major product changes, pivots, or new segment entry |
| Exit / Churn Survey | Why users cancel - the specific reason, not a satisfaction score | Multiple choice (pricing / missing feature / switching to X / no longer needed) + optional open-ended follow-up | Triggered at the moment of cancellation, before the account is closed - not in a follow-up email days later |
| UX Research Survey | Friction and confusion in a specific flow or feature | Open-ended, qualitative; sometimes paired with a single 1–5 rating for the experience | After completing a specific task for the first time, or sent to users who started a flow but did not complete it |
Each survey type in depth
Question, scale, trigger, output, and benchmark for each of the 6 kinds.
NPS
Net Promoter Score
Measures: Overall loyalty and likelihood to recommend
Standard question
"How likely are you to recommend us to a friend or colleague?"
Scale
0–10 numeric scale
When to send
Quarterly (relationship survey). Also triggered at 30 days post-onboarding and 60 days before renewal.
Benchmark
B2B SaaS median: 31–40. Above 50 is excellent.
What the result tells you: Promoters (9–10) are your referral engine. Detractors (0–6) are churn risk. The gap between segments tells you where to act.
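The promoter/detractor arithmetic above can be sketched in a few lines (a minimal illustration, not a production implementation):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Scores of 7-8 are passives and count only in the denominator."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 1 passive, 1 detractor out of 6 responses
print(nps([10, 9, 9, 10, 7, 3]))  # -> 50
```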
Complete NPS guide →

CSAT
Customer Satisfaction Score
Measures: Satisfaction with a specific interaction or touchpoint
Standard question
"How satisfied were you with your [support / onboarding / feature] experience?"
Scale
1–5 scale. CSAT score = % of respondents rating 4 or 5.
When to send
Triggered immediately after: support ticket resolution, onboarding completion, first use of a key feature.
Benchmark
75–85% is good for SaaS. Below 70% warrants immediate investigation.
What the result tells you: Identifies which specific touchpoints are failing. A low CSAT after support signals a process problem, not a product problem.
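The "% rating 4 or 5" calculation from the scale definition above, as a short sketch:

```python
def csat(ratings):
    """CSAT score: percentage of respondents rating 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(100 * satisfied / len(ratings))

print(csat([5, 4, 4, 3, 5, 2, 4, 5]))  # -> 75, within the 75-85% benchmark
```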
Complete CSAT guide →CES
Customer Effort Score
Measures: How much effort a task required - friction and difficulty
Standard question
"The company made it easy to handle my issue." (Strongly disagree → Strongly agree)
Scale
1–7 agreement scale. CES score = average across all responses.
When to send
Triggered after: setup wizard, support ticket, checkout, any multi-step task where friction is a known risk.
Benchmark
5.5+ is good. Below 5.0 is a red flag for that interaction.
What the result tells you: High effort predicts churn better than low satisfaction. A CES below 5.0 identifies friction that will drive cancellations.
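Unlike CSAT, CES is a plain average rather than a percentage. A minimal sketch of the score plus the 5.0 red-flag check described above:

```python
def ces(responses):
    """CES score: mean of 1-7 agreement ratings across all responses."""
    return round(sum(responses) / len(responses), 1)

score = ces([6, 7, 5, 4, 6])
print(score, "red flag" if score < 5.0 else "ok")  # -> 5.6 ok
```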
Complete CES guide →

PMF
Product-Market Fit Survey
Measures: Whether your product is essential to users
Standard question
"How would you feel if you could no longer use this product?"
Scale
3 options: Very disappointed / Somewhat disappointed / Not disappointed
When to send
Once you have 40+ active users. Repeat after major product changes, pivots, or new segment entry.
Benchmark
40% very disappointed is the threshold. Superhuman reached 58%.
What the result tells you: 40%+ "very disappointed" = product-market fit. Below 40% = keep iterating. Segment by user type to find where PMF exists.
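The 40% threshold test can be sketched as follows (the answer labels are illustrative shorthand for the three options above):

```python
def pmf_share(answers):
    """Share of 'very disappointed' answers; 40%+ suggests product-market fit."""
    very = sum(1 for a in answers if a == "very disappointed")
    return round(100 * very / len(answers))

# Hypothetical sample of 40 responses
sample = (["very disappointed"] * 22
          + ["somewhat disappointed"] * 13
          + ["not disappointed"] * 5)
share = pmf_share(sample)
print(share, share >= 40)  # -> 55 True
```

Segmenting by user type means running `pmf_share` per segment rather than once over the whole base.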
Complete PMF guide →

Exit
Exit / Churn Survey
Measures: Why users cancel - the specific reason, not a satisfaction score
Standard question
"What is the main reason you decided to cancel today?"
Scale
Multiple choice (pricing / missing feature / switching to X / no longer needed) + optional open-ended follow-up
When to send
Triggered at the moment of cancellation, before the account is closed. Not sent in a follow-up email days later.
Benchmark
No standard benchmark. Track the top reason over time - if it changes, your product or pricing changed.
What the result tells you: The single highest-signal data source for product and pricing decisions. Reveals patterns in cancellation reasons by segment.
Complete Exit guide →

UX
UX Research Survey
Measures: Friction and confusion in a specific flow or feature
Standard question
"What, if anything, felt confusing or unclear when you [completed X]?"
Scale
Open-ended, qualitative. Sometimes paired with a single 1–5 rating for the experience.
When to send
After completing a specific task for the first time. Or sent to users who started a flow but did not complete it.
Benchmark
No benchmark. Track themes across 20–30 responses - patterns emerge quickly.
What the result tells you: Specific language users use to describe confusion - invaluable for rewriting UI copy. Reveals friction invisible in analytics.
Complete UX guide →

How to choose the right survey type
Start with the question you need to answer - not the survey type you are most familiar with.
“Do users love us enough to recommend us?” → NPS. Run quarterly. A score below 20 signals an urgent retention problem.

“Was this specific interaction satisfying?” → CSAT. Trigger immediately after support, onboarding, or feature use.

“Was this task easy, or was it frustrating?” → CES. Trigger after any multi-step task. High effort predicts churn.

“Do we have product-market fit?” → PMF survey. Run at 40+ active users. 40%+ "very disappointed" = PMF.

“Why are users cancelling?” → Exit survey. Trigger at cancellation. The highest-signal data you can collect.

“Where is this specific flow confusing?” → UX research survey. Trigger after task completion or after an abandoned flow.
Running multiple survey types: frequency rules
The goal is the right survey at the right moment - not comprehensive coverage of every user in a single week.
One relationship survey per quarter
NPS is a quarterly signal - sending it more frequently produces noise, not insight. If NPS drops quarter over quarter, investigate with CSAT and CES at specific touchpoints rather than re-running NPS.
Trigger transactional surveys by event
CSAT and CES should fire automatically after the relevant event occurs - not on a schedule. A user who contacts support three times in a month should receive CSAT three times, at each resolution. That is not over-surveying - it is the right signal at the right moment.
Cap to one survey per 30 days per user
Apply a frequency cap so the same user cannot receive more than one survey per 30-day window across all types. This prevents a single user from getting NPS on Monday and CSAT on Wednesday. The cap protects response rates without reducing signal quality.
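The cap rule above reduces to one check before any send. A minimal sketch, assuming you track the last send timestamp per user (the data structure here is illustrative, not a prescribed schema):

```python
from datetime import datetime, timedelta

def can_survey(user_id, last_sent, now, cap_days=30):
    """Frequency cap: allow a send only if no survey of ANY type
    went to this user within the last cap_days."""
    last = last_sent.get(user_id)
    return last is None or now - last >= timedelta(days=cap_days)

last_sent = {"u1": datetime(2024, 5, 1)}
print(can_survey("u1", last_sent, datetime(2024, 5, 15)))  # -> False (14 days)
print(can_survey("u1", last_sent, datetime(2024, 6, 5)))   # -> True (35 days)
print(can_survey("u2", last_sent, datetime(2024, 5, 15)))  # -> True (never surveyed)
```

The check runs across all survey types, so an NPS send on Monday blocks a CSAT send on Wednesday for the same user.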
Run NPS, CSAT, CES, and PMF surveys - all from one place
Every survey type in one tool. In-product triggers, every response linked to a real user, segmented from the first result. Free to start.
Start Free. No credit card required.
Related guides
← Back to the complete survey creation guide