
How to Do User Research

A practical process for product teams - from question design to acting on findings

User research does not require a research team or a six-week study. This playbook covers the practical methods product teams use to collect, analyze, and act on user insights - using surveys, interviews, and behavioral data in combination.


Step-by-step process

Follow these steps in order for the best results.

1. Define the research question first

Do not start with a survey - start with the decision you need to make. "Should we build feature X?" is a better research trigger than "let us ask users what they think." Every research method you choose should be designed to answer a specific decision, not to generate general insights.

2. Choose the right method for the question

Surveys answer "how many" and "how often" - they are quantitative at scale. Interviews answer "why" and "how" - they are qualitative and deep. Use surveys when you need to validate a hypothesis across a large sample. Use interviews when you need to understand behavior and motivation with depth.

Mapster tip: Mapster is best for quantitative user research - measuring sentiment, feature usage, friction, and segment-level patterns across your full active user base.
3. Write unbiased survey questions

Leading questions produce misleading data. "How much do you love feature X?" assumes they love it. "How would you rate feature X?" is neutral. Use single-concept questions (not "what did you like and dislike?"), neutral language, and randomized answer order for choice questions to eliminate position bias.
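Randomizing answer order is easy to automate. A minimal sketch of per-respondent shuffling (the question text, options, and seeding scheme here are hypothetical - any stable per-respondent key, such as a user ID, works as a seed):

```python
import random

QUESTION = "How would you rate feature X?"
# The full set of answer options; every respondent sees all of them,
# just in a different order, so no option benefits from position bias.
CHOICES = ["Very useful", "Somewhat useful", "Not useful", "Have not tried it"]

def options_for_respondent(choices, respondent_key):
    """Return a per-respondent ordering of the answer options.

    Seeding with the respondent key makes the order deterministic per
    person (the survey renders the same way on reload) while varying
    across respondents.
    """
    rng = random.Random(respondent_key)
    shuffled = choices[:]          # copy so the canonical order is untouched
    rng.shuffle(shuffled)
    return shuffled

# Two respondents, two orderings of the same options.
print(options_for_respondent(CHOICES, "user-42"))
print(options_for_respondent(CHOICES, "user-7"))
```

If your survey includes a catch-all option like "Other", pin it to the last position and shuffle only the substantive choices.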

4. Target the right users

Who you survey matters as much as what you ask. Active users give different answers than churned users, enterprise accounts give different answers than SMBs, and power users give different answers than occasional users. Define the specific segment your research question applies to - then survey only that segment.

Mapster tip: Use Mapster to target surveys by user attributes - plan, activity level, signup date, or feature usage - so your sample matches your research question.
5. Analyze patterns, not outliers

One user saying something memorable is an anecdote. Ten users independently raising the same theme is a finding. When analyzing survey results, look for patterns - which answers appear most frequently, which segments diverge from the average, and where responses cluster. Individual responses inform context; patterns inform decisions.
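In practice, pattern analysis means counting how often each theme appears overall and within each segment. A small sketch with hypothetical coded responses (the segments and theme labels are invented for illustration - the point is the frequency-and-divergence structure):

```python
from collections import Counter

# Hypothetical (segment, theme) pairs, produced by tagging free-text
# survey answers with a small set of themes.
responses = [
    ("enterprise", "slow exports"), ("enterprise", "slow exports"),
    ("enterprise", "missing SSO"),  ("smb", "pricing"),
    ("smb", "slow exports"),        ("smb", "pricing"),
    ("smb", "onboarding"),
]

# Overall pattern: which themes come up most often across all responses.
overall = Counter(theme for _, theme in responses)

# Segment divergence: theme frequency within each segment, so you can
# see where a segment departs from the overall picture.
by_segment = {}
for segment, theme in responses:
    by_segment.setdefault(segment, Counter())[theme] += 1

print(overall.most_common(3))
for segment, counts in sorted(by_segment.items()):
    print(segment, counts.most_common(2))
```

Here "slow exports" is the top overall theme, but the SMB segment diverges: pricing leads there - exactly the kind of segment-level split that should shape the decision.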

6. Share findings and drive action

Research that sits in a doc is wasted. Translate findings into specific product decisions or hypotheses to test. Share a one-page summary with the team: what you asked, who you asked, what you found, and what you recommend doing. Research without action is expensive and demoralizing for users who participated.

Key metrics to track

Survey response rate

In-product surveys typically get 5-15% response rates. Email surveys get 1-5%. Low rates can bias results toward engaged users.

Sample size vs. confidence

For surveys with 4-5 answer options, 100+ responses give reliable patterns. For binary questions, 50+ responses are sufficient.
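These rules of thumb follow from the standard margin-of-error formula for a proportion. A quick sketch, assuming a 95% confidence level and the worst-case split (p = 0.5):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion from n responses.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the worst case
    (widest margin), so this is a conservative estimate.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Rough margins behind the sample-size rules of thumb:
for n in (50, 100, 400):
    print(n, round(margin_of_error(n) * 100, 1), "percentage points")
# 50  -> 13.9 percentage points
# 100 -> 9.8 percentage points
# 400 -> 4.9 percentage points
```

So 50 responses can distinguish a clear majority on a binary question, while 100+ are needed before differences of around ten points between answer options become meaningful.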

Research-to-decision time

How long from research completion to a documented decision. Research that takes longer than 4 weeks to influence a decision loses relevance.

Decision hit rate

Were product decisions informed by research later validated by outcomes? Track this retroactively to improve research quality over time.

Common mistakes to avoid

Starting with the survey instead of the decision you need to make.

Writing leading or double-barreled questions that bias responses.

Surveying the wrong segment - convenient samples (whoever responds) are rarely the right samples.

Treating qualitative interviews and quantitative surveys as interchangeable - they answer different questions.

Doing research and then not acting on it - this destroys user trust and future response rates.

Ready to run the survey?

Mapster has a template and question library ready for this playbook.

Frequently asked questions

What is the difference between user research and customer feedback?

Customer feedback is reactive - users tell you what went wrong or what they want. User research is proactive - you design specific questions to answer a decision you need to make. Both are valuable, but research is more rigorous: it defines the question first, then selects the right method and sample.

How many users do I need to survey for valid results?

For finding themes and patterns, 50-100 survey responses are usually sufficient. For statistically significant segment comparisons, you need 100+ per segment. For qualitative interviews, 5-8 interviews per user segment typically reveal the most common themes - after that, you start hearing the same things repeatedly.

When should I use surveys vs. user interviews?

Use surveys when you need to measure something at scale (how many users feel this way?) or validate a hypothesis (do most users prefer A or B?). Use interviews when you need to understand behavior and motivation in depth (why do users do X instead of Y?). The best research programs use both - surveys for breadth, interviews for depth.

How do I increase survey response rates?

In-product surveys (shown while users are active in your app) get dramatically higher response rates than email surveys. Keep surveys short (2-4 questions). Show them at a relevant moment (after completing a workflow, not randomly). Explain why you are asking. In-product surveys with Mapster typically see 8-15% response rates vs. 1-3% for email.

Run the surveys from this playbook

Mapster connects every survey response to a real user - plan, role, company size, and activity. Segment your results without a manual data import.

Get Started Free

No credit card required