Types of Customer Feedback

The 6 types of customer feedback every SaaS team should know - what each one measures, when to use it, and what it misses.

The 6 types of customer feedback

Each type answers a different question. Knowing which to use - and when - is more valuable than running all of them at once.

01

NPS - Net Promoter Score

Core question:

"How likely are you to recommend us to a friend or colleague?" (0–10)

Measures

Long-term loyalty and word-of-mouth potential

When to use

Every 90 days as a relationship survey, or triggered at 30 days post-onboarding and before renewal

What it tells you

Whether your users would recommend you - the best leading indicator of organic growth

Scale

0–10 · Detractors (0–6), Passives (7–8), Promoters (9–10)

Blind spot

Why they gave that score without a follow-up open-ended question
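The NPS arithmetic follows directly from the bands above: the score is the percentage of promoters minus the percentage of detractors, giving a range of −100 to +100. A minimal sketch (the helper name and sample scores are illustrative):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but neither add nor subtract.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 10 responses: 5 promoters, 3 passives, 2 detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30
```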

02

CSAT - Customer Satisfaction Score

Core question:

"How satisfied were you with [interaction]?" (1–5)

Measures

Satisfaction with a specific touchpoint - feature, support interaction, onboarding step

When to use

Within minutes of a specific interaction - support resolution, feature completion, onboarding milestone

What it tells you

Whether a specific touchpoint is meeting expectations - much more precise than overall satisfaction

Scale

1–5 · % who rate 4–5 = your CSAT score

Blind spot

Long-term relationship health - a user can be satisfied with support but still churn
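The CSAT formula from the scale note above is simple: the share of respondents who rate the interaction 4 or 5. A sketch with an illustrative helper name and sample data:

```python
def csat(ratings):
    """CSAT: percentage of responses rating the interaction 4 or 5 (1-5 scale)."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(100 * satisfied / len(ratings))

# 6 of 8 respondents rated the touchpoint 4 or 5
print(csat([5, 4, 4, 3, 5, 2, 4, 5]))  # 75
```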

03

CES - Customer Effort Score

Core question:

"How easy was it to [complete task]?" (1–7)

Measures

Friction in specific workflows - the biggest driver of churn you can actually fix

When to use

After high-effort interactions: onboarding setup, support ticket resolution, import/export tasks, billing changes

What it tells you

Where users are working too hard - friction that accumulates into churn before users tell you

Scale

1–7, higher = easier · Gartner research finds CES the strongest predictor of churn among the three core scores

Blind spot

Whether users find the product valuable - a frictionless experience of the wrong thing still churns
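CES is usually reported as the mean of the 1–7 ease ratings, with a higher average meaning less friction. A minimal sketch, assuming that mean-score convention:

```python
from statistics import mean

def ces(responses):
    """Customer Effort Score: mean of 1-7 ease ratings (higher = easier)."""
    return round(mean(responses), 2)

print(ces([6, 7, 5, 4, 6, 7]))  # 5.83
```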

04

PMF - Product-Market Fit

Core question:

"How would you feel if you could no longer use this product?" (Very / Somewhat / Not disappointed)

Measures

How essential your product is to users - the leading indicator of retention and organic growth

When to use

After 30+ days of active use (minimum 40 responses needed), and after major product changes

What it tells you

Whether you've found product-market fit - the most important question for early-stage teams

Scale

3 choices · 40%+ 'Very disappointed' = product-market fit. Below 40%: iterate before scaling.

Blind spot

Why users feel that way - always pair with 'What would you use instead?' and 'What do you love most?'
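The 40% threshold above (the Sean Ellis PMF test) reduces to one percentage: the share of respondents answering "Very disappointed." A sketch with hypothetical answer labels and sample data:

```python
def pmf_score(answers):
    """Sean Ellis PMF test: % answering 'very' disappointed; 40%+ suggests fit."""
    very = sum(1 for a in answers if a == "very")
    pct = 100 * very / len(answers)
    return pct, pct >= 40

# 40 responses, the stated minimum sample: 18 'very', 15 'somewhat', 7 'not'
answers = ["very"] * 18 + ["somewhat"] * 15 + ["not"] * 7
pct, has_fit = pmf_score(answers)
print(f"{pct:.0f}% very disappointed -> PMF: {has_fit}")  # 45% very disappointed -> PMF: True
```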

05

Open-Ended Qualitative

Core question:

"What's the one thing we could do to make this product better for you?"

Measures

The why behind quantitative scores - motivations, friction, desires, use cases

When to use

As a follow-up to any quantitative survey, in user interviews, at onboarding, and at cancellation

What it tells you

What your users are actually experiencing - often reveals problems you didn't know to ask about

Scale

No scale - qualitative text. Group into themes: feature requests, usability issues, pricing, praise, churn signals

Blind spot

Prevalence - you need 20+ responses to know if a theme is widespread or just one vocal user
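The theme-grouping step above can be sketched as a first-pass keyword tagger. The theme names and keyword lists below are purely illustrative; a real taxonomy comes from reading responses, and ambiguous answers still need human review:

```python
# Illustrative keyword buckets -- not a real taxonomy.
THEMES = {
    "feature request": ["wish", "add", "would love", "missing"],
    "usability": ["confusing", "hard to", "can't find", "unclear"],
    "pricing": ["price", "expensive", "cost", "plan"],
    "churn signal": ["cancel", "switch", "alternative", "leaving"],
}

def tag_themes(response):
    """Tag a free-text response with every theme whose keywords appear."""
    text = response.lower()
    matched = [theme for theme, words in THEMES.items()
               if any(w in text for w in words)]
    return matched or ["uncategorized"]

print(tag_themes("I wish you would add CSV export, it's hard to find reports"))
# ['feature request', 'usability']
```

Counting tags across 20+ responses then tells you whether a theme is widespread or a single vocal user.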

06

Behavioral Signals

Core question:

What users do - not what they say

Measures

Actual usage patterns: feature adoption, login frequency, session length, drop-off points, export behavior

When to use

Continuously - behavioral data is always flowing from your product. Review it alongside survey data.

What it tells you

What users actually value (vs. what they say they value) - adoption data is more honest than stated preferences

Scale

No scale - usage metrics from your product analytics (Mixpanel, Amplitude, PostHog)

Blind spot

The why - a user who stops using a feature could be satisfied (found a workaround) or frustrated (gave up)
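One common behavioral signal, feature adoption rate, is easy to derive from an event log. The event tuples and names below are hypothetical; in practice the data would come from an analytics export (Mixpanel, Amplitude, PostHog):

```python
from collections import defaultdict

def adoption_rate(events, feature, active_users):
    """Percentage of active users who triggered a feature event at least once."""
    users_by_event = defaultdict(set)
    for user, event in events:
        users_by_event[event].add(user)
    adopters = users_by_event[feature] & set(active_users)
    return 100 * len(adopters) / len(active_users)

# Hypothetical (user, event) log: 2 of 4 active users used 'export'
events = [("u1", "export"), ("u2", "export"), ("u2", "import"), ("u3", "login")]
print(adoption_rate(events, "export", ["u1", "u2", "u3", "u4"]))  # 50.0
```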

Feedback collection methods

The channel matters as much as the question type. In-product surveys consistently outperform email on response rate and data quality.

In-product (in-app)

Best for: Authenticated users in the product
Typical response rate: 20–40%

Strengths:

  • Highest response rate - reaches users in context
  • Tied to specific user actions and attributes
  • Can segment by plan, role, tenure

Limitations:

  • Interrupts the product experience if overused
  • Only reaches logged-in users

Email surveys

Best for: Churned users, post-interaction, relationship surveys
Typical response rate: 5–20%

Strengths:

  • Reaches users outside the product
  • Good for churn exit surveys
  • Can include longer survey formats

Limitations:

  • Lower response rate than in-product
  • No behavioral context at time of completion

Website widget / feedback button

Best for: Anonymous visitors, marketing pages, documentation
Typical response rate: 1–5% of page visitors

Strengths:

  • Captures feedback from anonymous visitors
  • Always-on - no scheduling needed
  • Good for pricing page and docs friction

Limitations:

  • No user identity - can't segment by attributes
  • Often skewed toward frustrated users who seek out feedback buttons

Link surveys

Best for: Broad distribution, non-users, external audiences
Typical response rate: 2–10%

Strengths:

  • Works outside your product - Slack, email, social
  • Good for non-customer research
  • No installation required

Limitations:

  • No identity linking - anonymous by default
  • Self-selection bias - who clicks the link shapes the results

User interviews

Best for: Deep qualitative discovery, new feature validation, ICP research
Typical response rate: N/A - scheduled 1:1

Strengths:

  • Deepest qualitative insights available
  • Can follow up on any answer in real time
  • Builds customer relationships

Limitations:

  • Time-intensive - 30–60 min per session
  • Small sample size - not representative
  • Requires scheduling and recruitment effort

Which type to use for your goal

Pick the feedback type by goal - not by which one you've heard of most.

Your goal | Recommended type | Frequency | Notes
Track loyalty over time | NPS | Quarterly | Your primary relationship health metric
Improve onboarding | CSAT + CES | After onboarding completes | CSAT for overall satisfaction, CES for ease
Reduce support churn | CSAT | Within 5 min of ticket resolution | Support CSAT is a direct churn predictor
Validate product-market fit | PMF | At 40+ active users, then quarterly | 40%+ 'very disappointed' = PMF
Understand why users leave | Exit survey (open-ended) | At cancellation trigger | Most critical feedback - highest urgency to act
Find feature usability issues | CES + open-ended | After first use of a feature | CES finds friction; open-ended names it
Discover unknown problems | Open-ended qualitative | User interviews monthly | Can't find unknown unknowns with surveys alone
