Research methods explained

Qualitative vs Quantitative

Qualitative is descriptive. Quantitative is numerical. They answer different questions, and most good research uses both.

The difference between qualitative and quantitative research, data, and surveys - with examples from product, customer feedback, and market research.

Qualitative

The "why" and "how"

Qualitative data is descriptive and non-numerical. It captures meaning, context, and experience through words, themes, and observations. You cannot average it - but you can analyze it for patterns.

Open-text survey responses

User interview transcripts

Usability test observations

Support ticket themes

Session recording notes

Quantitative

The "how many" and "how much"

Quantitative data is numerical and measurable. It captures counts, scores, and percentages that can be aggregated, compared, and tracked over time. You can calculate averages, run statistics, and identify trends.

NPS and CSAT scores

Response counts and percentages

Feature usage rates

Churn and conversion rates

Rating scale averages

The survey question

Are surveys qualitative or quantitative?

The answer depends entirely on the question types you use. Most surveys are both.

Quantitative survey questions

Rating scales, multiple choice options, and yes/no questions produce quantitative data. The responses are numerical or categorical - you can count them, average them, and compare them across time, segments, or cohorts.

"On a scale of 0-10, how likely are you to recommend us? (NPS)"

"How satisfied are you with this experience? (1 Unsatisfied - 5 Very Satisfied)"

"How often do you use this feature? (Daily / Weekly / Monthly / Rarely)"

"Did this solve your problem? (Yes / No)"

Qualitative survey questions

Open-text questions produce qualitative data. Responses are descriptive - you cannot average them, but you can read them for themes, categorize them, and surface patterns using manual analysis or AI tagging.

"What is the main reason for your score?"

"What would you change about this experience?"

"What problem were you trying to solve when you signed up?"

"Is there anything we could do to serve you better?"

Mixed surveys (most common)

Most effective surveys combine both. A rating question measures the outcome. An open-text follow-up explains why. The NPS survey is the canonical example: the 0-10 score is quantitative data; the "what is the main reason?" follow-up is qualitative data. Together they tell a complete story.

Quantitative part

"On a scale of 0-10, how likely are you to recommend us to a friend or colleague?"

Result: NPS score, promoter/detractor split, trend over time

Qualitative part

"What is the main reason for your score?"

Result: themes, pain points, specific feature mentions, churn signals
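As a sketch, the quantitative half of that survey reduces to a simple calculation. The `nps` helper and the sample scores below are illustrative, not from any real dataset:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but toward neither group.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors out of 10 responses
scores = [10, 9, 9, 10, 8, 7, 8, 5, 3, 6]
print(nps(scores))  # → 10  (40% promoters - 30% detractors)
```

The qualitative half has no equivalent formula: the follow-up comments have to be read and tagged before they can be counted.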

Qualitative vs quantitative research methods

The methods differ in how they collect data, how many participants they need, and what kind of conclusions they support.

Qualitative research methods

User interviews

One-on-one conversations that explore motivations, mental models, and decision-making. Typically 5-15 participants. Reveals depth, not breadth.

Focus groups

Group discussions that surface shared attitudes and reactions. Useful for early concept testing and discovering language users use to describe problems.

Usability testing

Observing users attempting tasks to identify where they get confused, stuck, or frustrated. Generates observational data, not scores.

Open-text surveys

Surveys with free-response questions. Scales to hundreds of respondents while preserving the descriptive richness of user language.

Ethnographic research

Observing users in their natural environment to understand context that they would not report in a survey or interview.

Quantitative research methods

Structured surveys

Surveys with rating scales, multiple choice, and closed questions. Produces numerical data that can be compared across segments and tracked over time.

A/B testing

Controlled experiments that compare two variants to measure which performs better on a defined metric. Statistically rigorous at scale.

Usage analytics

Event tracking and product analytics that measure what users do - feature adoption, session length, funnel progression, retention curves.

Cohort analysis

Comparing groups of users who share a characteristic (signup month, plan, acquisition channel) to identify patterns in behavior over time.

Benchmarking

Comparing your scores (NPS, CSAT, CES) against industry benchmarks or your own historical data to assess relative performance.

Qualitative vs quantitative data

The type of data determines how you analyze it and what conclusions you can draw.

Qualitative data examples

NPS comment

"The product is great but onboarding is confusing"

Churn reason

"Too expensive for what we get at our stage"

Feature request

"I need to export responses to Notion, not just CSV"

Support ticket theme

"Users keep asking how to connect Slack"

Interview insight

"I switched from Typeform because of the pricing jump"

These cannot be averaged. They are analyzed by reading for themes, tagging by topic, and identifying patterns across many responses.
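A minimal sketch of that tagging step, assuming a naive keyword approach. The theme names and keyword lists are illustrative; real analysis would use manual coding or an AI classifier:

```python
# Illustrative theme-to-keyword map for tagging open-text responses.
THEMES = {
    "pricing": ["expensive", "price", "pricing", "cost"],
    "onboarding": ["onboarding", "confusing", "setup"],
    "integrations": ["slack", "notion", "export", "csv"],
}

def tag_themes(responses):
    """Count how many responses mention each theme's keywords."""
    counts = {theme: 0 for theme in THEMES}
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                counts[theme] += 1
    return counts

responses = [
    "The product is great but onboarding is confusing",
    "Too expensive for what we get at our stage",
    "I need to export responses to Notion, not just CSV",
]
print(tag_themes(responses))
# → {'pricing': 1, 'onboarding': 1, 'integrations': 1}
```

The output is quantitative (theme counts), but it only exists because someone first decided what the themes are, which is the qualitative work.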

Quantitative data examples

NPS score

42

CSAT rating

4.1 / 5.0 average across 1,240 responses

Response rate

31% of triggered surveys completed

Detractor share

18% of respondents scored 0-6

Feature adoption

64% of Pro users activated Slack integration

These can be averaged, trended, segmented, and compared. They tell you how big the problem is and how it changes over time.

The combination rule

Qualitative data without quantitative context makes everything sound equally important. Quantitative data without qualitative context tells you something is wrong but not what to do about it. The most useful product research pairs a score with an explanation: NPS dropped 8 points (quantitative) + "32% of detractors mentioned onboarding confusion" (qualitative analysis of open text). Now you know what to fix.
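That pairing can be sketched in a few lines: take a quantitative segment, then measure how often a qualitative theme appears inside it. The field names, sample data, and the keyword match are all illustrative:

```python
# Sketch of the combination rule: quantify a qualitative theme
# within a quantitative segment.
responses = [
    {"score": 3, "comment": "Onboarding was confusing"},
    {"score": 5, "comment": "Setup flow is confusing and slow"},
    {"score": 9, "comment": "Love the product"},
    {"score": 2, "comment": "Too expensive for our stage"},
]

# Quantitative cut: detractors are respondents scoring 0-6.
detractors = [r for r in responses if r["score"] <= 6]

# Qualitative cut: detractors whose comment mentions the theme.
mentioning = [r for r in detractors if "confusing" in r["comment"].lower()]

share = 100 * len(mentioning) / len(detractors)
print(f"{share:.0f}% of detractors mentioned onboarding confusion")  # → 67%
```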

Qualitative vs quantitative at a glance

Dimension | Qualitative | Quantitative
Data type | Descriptive, non-numerical | Numerical, measurable
Answers | "Why?" and "How?" | "How many?" and "How much?"
Sample size | Small (5-30 typically) | Large (50+ for significance)
Analysis method | Thematic coding, reading | Statistical, mathematical
Output | Themes, insights, hypotheses | Scores, trends, benchmarks
Survey question type | Open-text, free response | Rating scale, multiple choice
NPS application | Open-text follow-up ("why?") | The 0-10 score itself
Strength | Depth and context | Scale and comparability
Weakness | Hard to generalize to all users | Tells you what, not why
Best for | Discovery, hypothesis generation | Validation, measurement, tracking

When to use qualitative vs quantitative research

The choice is not about which is better. It is about which question you are trying to answer.

Use qualitative research when

You do not know what to measure yet

If you are entering a new market, building a new feature, or investigating an unexpected churn spike, qualitative research helps you understand the problem before you try to measure it.

You need to understand the "why" behind a number

Your NPS dropped. Your churn rate increased. Qualitative research - interviews, open-text analysis - tells you what is actually driving the number so you know what to fix.

You are designing something new

Concept testing, prototype feedback, and messaging research benefit from qualitative approaches. You need user reactions, not just ratings.

Your sample is too small for statistics

If you have fewer than 50 responses, statistical analysis loses meaning. Qualitative reading and thematic analysis are more honest about what you actually know.

Use quantitative research when

You need to measure at scale

Tracking NPS across thousands of users, measuring CSAT across all support interactions, or calculating PMF across your entire active base requires quantitative measurement.

You need to compare or segment

Comparing NPS between Free and Pro users, comparing CSAT before and after an onboarding change, or identifying which cohorts show higher churn all require numbers.

You need to track a metric over time

Monthly NPS trend, weekly response rate, quarterly CSAT benchmark - tracking requires a consistent numerical metric you can plot and compare.

You need to validate a hypothesis

You hypothesize that users on the Pro plan are more satisfied. You need a CSAT score by plan tier to confirm or reject that hypothesis at a statistically meaningful scale.

The research loop

Step 1

Quantitative signal

NPS drops. Churn spikes. A metric moves.

Step 2

Qualitative investigation

Open-text analysis, interviews, and session recordings explain why.

Step 3

Quantitative validation

You fix the identified issue and measure whether the metric recovers.

Common surveys classified: qualitative or quantitative?

Real examples from product and customer research.

Quantitative

NPS survey (0-10 rating question)

The score itself is a number. You calculate a net promoter score, track it over time, segment by plan or cohort, and compare against industry benchmarks.

Qualitative

NPS follow-up ("Why did you give that score?")

Open-text responses are descriptive. You read them for themes: pricing complaints, missing features, onboarding friction, or positive product moments.

Quantitative

CSAT survey (1-5 rating)

CSAT produces a numerical score (average rating or percentage satisfied) that you can compare across support agents, features, or time periods.

Quantitative

PMF survey ("How disappointed if you could not use this?")

The PMF benchmark is the percentage who answer "Very disappointed." That percentage is a quantitative metric with a defined threshold (40% or more signals product-market fit).
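A minimal sketch of that calculation; the `pmf_score` helper and the sample answers are illustrative:

```python
def pmf_score(answers):
    """Share of respondents (as a percentage) answering 'Very disappointed'."""
    very = sum(1 for a in answers if a == "Very disappointed")
    return 100 * very / len(answers)

answers = (["Very disappointed"] * 9
           + ["Somewhat disappointed"] * 8
           + ["Not disappointed"] * 3)
score = pmf_score(answers)
print(f"{score:.0f}% very disappointed; PMF benchmark met: {score >= 40}")
# → 45% very disappointed; PMF benchmark met: True
```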

Qualitative

User interviews

Free-form conversations generate descriptive data about motivations, workarounds, and mental models. You cannot aggregate this into a score.

Mixed

Exit survey with rating + open text

The primary churn reason (multiple choice) is quantitative - you can calculate that 42% cite "too expensive." The open text is qualitative - it tells you what they mean by that.

Quantitative

Onboarding questionnaire (role, company size, use case)

Structured multiple choice fields produce categorical data you can count and segment by. 38% of signups are in the "Product" role. That is a quantitative insight.

Qualitative

Usability test observations

Observational notes about where users struggled, what they said aloud, and what they expected are descriptive. They reveal friction but do not produce a score.

Both

Likert scale survey (agree/disagree)

Technically, Likert scales produce ordinal data. In practice, most teams treat averaged Likert responses as quantitative for trend tracking, and add open-text questions for qualitative depth.

Collect qualitative and quantitative feedback in one place

Mapster links every response - the score and the open-text explanation - to a real user so you know who said what.

Quantitative surveys

NPS, CSAT, CES, and PMF surveys with automatic scoring, trend tracking, and segmentation by plan, role, or cohort.

Qualitative open text

Add open-text follow-up questions to any survey. Every response is tagged to the user who gave it so you can filter qualitative feedback by segment.

Identity-linked analysis

Every response - score and open text - is linked to a real user with their plan, role, and behavior. Segment qualitative themes by what users actually do.

Get started today

Run qualitative and quantitative surveys - linked to real users

NPS scores, CSAT ratings, open-text feedback, and segmentation by plan, role, or cohort. Free to start.

Start Free

No credit card required