How to Write Survey Questions
The difference between a survey people answer honestly and one they rush through is how the questions are written.
Most survey questions - and survey questionnaires - are written in a way that produces biased, vague, or unusable data without anyone realising it. Leading questions, double-barrelled questions, and jargon all push respondents toward the wrong answer. This playbook covers how to write a good survey or questionnaire: the question types, wording principles, and structural rules that produce honest, actionable data for customer satisfaction surveys, research surveys, NPS, CSAT, CES, and PMF.
Step-by-step process
Follow these steps in order for the best results.
Start with the decision you need to make
Before writing a single question, define what decision the survey will inform. Are you trying to find out why users churn in month two? Whether a new feature landed well? Which segment has the strongest product-market fit? Every question should connect directly to that decision. If you cannot explain how a question will change what you do, remove it. Generic satisfaction surveys produce generic data that cannot drive specific actions.
Use validated question formats for quantitative measures
For satisfaction, effort, loyalty, and essentialness, use the established question formats rather than inventing your own. The NPS question ("How likely are you to recommend [product] to a friend or colleague?" on a 0-10 scale), CSAT ("How satisfied are you with your experience today?" on a 1-5 scale), CES ("How easy was it to accomplish what you came to do?" on a 1-7 scale), and PMF ("How would you feel if you could no longer use this product?") are validated because they produce comparable, benchmarkable data. Changing the wording changes the score and makes it incomparable.
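As an illustrative sketch (not tied to any particular survey tool), the standard scoring rules behind two of these formats can be computed directly from raw responses: NPS is the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6), and CSAT is commonly reported as the share of 4s and 5s on the 1-5 scale.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(scores):
    """CSAT: share of satisfied responses (4 or 5 on a 1-5 scale), as a percentage."""
    return round(100 * sum(1 for s in scores if s >= 4) / len(scores))

# Example: 10 NPS responses on the 0-10 scale
# (5 promoters, 3 detractors -> score of 20)
print(nps([10, 9, 9, 8, 7, 6, 6, 10, 3, 9]))
```

Because scores are compared as percentages of the whole sample, the same response set always produces the same score, which is what makes the validated wording benchmarkable.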
Write one idea per question
Double-barrelled questions ask about two things at once and produce uninterpretable answers. "How easy and useful was the onboarding?" is asking two separate questions - a user might find it easy but not useful, or useful but hard. They cannot answer both at once. Split every double-barrelled question into two separate questions, or choose the one dimension that matters most for your decision.
Avoid leading and loaded language
Leading questions steer the respondent toward the answer you want, which produces data that confirms your assumptions rather than challenges them. "How much did you enjoy the new dashboard?" assumes they enjoyed it. "What did you think of the new dashboard?" does not. Loaded language ("our industry-leading feature") creates a halo effect that inflates positive scores. Write questions that a critic of your product would consider fair.
Use open-ended follow-ups after every rating question
A rating score tells you what happened. An open-ended follow-up tells you why. After every NPS, CSAT, CES, or PMF question, add one open-ended question: "What is the main reason for your score?" or "What could we do to improve your experience?" Keep the follow-up optional to protect response rates, but make the prompt specific to the rating the user gave - detractors and promoters should see different prompts.
Keep the survey to three questions maximum
Response quality drops sharply after the third question. Users begin rushing, selecting random options, or abandoning the survey entirely. For in-product surveys, one rating question plus one open-ended follow-up is the ideal format. If you need to ask more than three questions, split the survey into separate surveys triggered at different moments - do not stack them all into one session. Every question you add is a trade-off against data quality on all the other questions.
Test your questions on a small group before full launch
Send the survey to five to ten users internally or a small cohort of trusted users before full launch. Ask them to think aloud as they answer. Look for questions they pause on, interpret differently than intended, or find confusing. A question that seems clear to the person who wrote it often means something different to the person reading it cold. Catching one ambiguous question before launch saves an entire survey cycle of bad data.
Key metrics to track
Response rate
The percentage of users who complete the survey. Below 20% suggests the survey is too long, poorly timed, or the questions feel irrelevant to the user.
Completion rate
The percentage of users who start the survey and finish it. A high start rate with low completion means questions mid-survey are causing drop-off - the survey is likely too long or too demanding.
Open-text response rate
The percentage of respondents who answer open-ended questions. Below 40% on optional open-text suggests the prompt is too vague or users do not believe their feedback will be used.
Score distribution
The spread of ratings across the scale. A distribution heavily skewed to one end may indicate a leading question or a sample that is not representative of your full user base.
Response time
Average time to complete the survey. Very fast average completion (under 30 seconds for a 3-question survey) suggests users are rushing - a sign of survey fatigue or poor question relevance.
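The metrics above are all simple ratios over raw event counts. A minimal sketch of how they might be computed - the field names here are hypothetical, not from any specific analytics tool:

```python
def survey_metrics(shown, started, completed, open_text_answered, total_seconds):
    """Compute the core survey health metrics from raw event counts.

    All arguments are hypothetical tallies from your own analytics:
    surveys shown, started, completed, optional open-text answers given,
    and total seconds spent across all completed surveys.
    """
    return {
        # % of users shown the survey who completed it
        "response_rate": round(100 * completed / shown, 1),
        # % of users who started the survey and finished it
        "completion_rate": round(100 * completed / started, 1),
        # % of respondents who answered the optional open-text question
        "open_text_rate": round(100 * open_text_answered / completed, 1),
        # average seconds per completed survey
        "avg_response_time": round(total_seconds / completed, 1),
    }

m = survey_metrics(shown=1000, started=400, completed=300,
                   open_text_answered=90, total_seconds=27000)
# response_rate 30.0, completion_rate 75.0,
# open_text_rate 30.0 (below the 40% target), avg_response_time 90.0 seconds
```

Tracking the ratios separately matters: in this example the response rate (30%) looks healthy, but the open-text rate (30%) falls below the 40% threshold the playbook flags, pointing at a vague prompt rather than a timing problem.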
Common mistakes to avoid
Writing questions before defining the decision the survey will inform - resulting in data that cannot drive any specific action.
Using double-barrelled questions ("How easy and helpful was the experience?") that ask about two things at once and produce uninterpretable answers.
Writing leading questions that assume a positive experience ("How much did you enjoy...?") and produce inflated, biased scores.
Adding too many questions to a single survey - every question beyond three reduces response quality on all the others.
Changing the wording of standard validated questions (NPS, CSAT, CES, PMF) - altered wording produces scores that cannot be benchmarked against industry data.
Skipping the open-ended follow-up question - the rating score tells you what happened, but without the follow-up you never know why.
Launching without testing - questions that seem clear to the author are often interpreted differently by users reading them cold.
Ready to run the survey?
Mapster has a template and question library ready for this playbook.
Frequently asked questions
How do you write good survey questions?
Write good survey questions by: (1) Starting with the decision you need to make and working backwards to the question. (2) Using one idea per question - no double-barrelled questions. (3) Avoiding leading language that steers the respondent toward a preferred answer. (4) Using validated standard wording for NPS, CSAT, CES, and PMF questions. (5) Adding an open-ended follow-up after every rating question. (6) Keeping the survey to three questions maximum.
What are the most common survey question mistakes?
The most common survey question mistakes are: (1) Double-barrelled questions that ask two things at once. (2) Leading questions that assume a positive experience. (3) Jargon or technical language that users interpret differently. (4) Too many questions in a single survey, causing quality to drop on all of them. (5) Changing standard validated question wording (NPS, CSAT) in ways that make the score incomparable to benchmarks. (6) No open-ended follow-up to explain the reason behind the rating.
What is a leading question in a survey?
A leading question steers the respondent toward a specific answer, usually a positive one. Examples: "How much did you enjoy the new feature?" assumes they enjoyed it. "How helpful was our support team?" assumes they were helpful. Non-leading versions: "What did you think of the new feature?" and "How would you rate your support experience?" Use neutral language so responses reflect the user's actual view, not the framing of the question.
How many questions should a survey have?
In-product surveys should have one to two questions: one rating scale question and one optional open-ended follow-up. Email surveys can extend to three to five questions if they are sent infrequently. Beyond five questions, response quality drops significantly as users begin rushing or drop off entirely. If you need more data, split the questions across two separate surveys triggered at different moments rather than stacking them into one long session.
Should I write my own NPS question or use the standard wording?
Use the standard wording: "How likely are you to recommend [product] to a friend or colleague?" on a 0-10 scale. The standard wording is validated against thousands of surveys and produces scores that can be compared to industry benchmarks. Changing the wording - even slightly - changes the distribution of responses and makes your score incomparable to external benchmarks or your own historical data.
How do you write a customer satisfaction survey?
To write a good customer satisfaction survey: (1) Use the standard CSAT question - "How satisfied are you with your experience today?" on a 1-5 scale - as your primary rating question. (2) Follow it with one open-ended question: "What is the main reason for your score?" (3) Trigger the survey immediately after a specific interaction (support resolution, onboarding completion, feature use) rather than sending it on a general schedule. (4) Keep it to two questions. A customer satisfaction survey sent at the right moment with two focused questions outperforms a five-question survey sent on a monthly batch schedule.
How do you write survey questions for research?
Survey questions for research require stricter wording standards than product feedback surveys because the goal is to understand behaviour and attitudes, not just satisfaction. Key rules: (1) Use neutral, non-leading language - research questions should not imply a preferred answer. (2) Ask about behaviour and past actions ("How often do you...") rather than hypothetical intent ("Would you...") - stated intent is a poor predictor of actual behaviour. (3) Use consistent scales across related questions so responses are comparable. (4) Pilot the survey on a small sample first and check for questions that produce clustered responses (everyone answering the same way may mean the question is leading or the options are too narrow). (5) Randomise the order of answer options where possible to prevent order bias.
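Randomising answer order (point 5 above) is straightforward to implement. A generic sketch, not tied to any survey platform - the option texts and the anchored catch-all labels are illustrative:

```python
import random

def randomized_options(options, anchored=("Other", "None of the above")):
    """Shuffle answer options per respondent to prevent order bias,
    keeping catch-all options (e.g. "Other") anchored at the end."""
    fixed = [o for o in options if o in anchored]
    shuffled = [o for o in options if o not in anchored]
    random.shuffle(shuffled)
    return shuffled + fixed

opts = ["Price", "Ease of use", "Support quality", "Integrations", "Other"]
print(randomized_options(opts))  # e.g. ['Integrations', 'Price', ..., 'Other']
```

Keeping "Other" and "None of the above" fixed at the end is deliberate: respondents expect catch-all options last, and shuffling them into the list creates its own confusion.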
What is the difference between a survey and a questionnaire?
Survey and questionnaire are often used interchangeably, but technically a questionnaire is the set of written questions, while a survey is the full data-collection process - including the questionnaire, the sample, the distribution method, and the analysis. In practice, writing a good questionnaire and writing a good survey follow the same rules: one idea per question, neutral language, validated formats for rating scales, and open-ended follow-ups after closed questions.
More product playbooks
Product Strategy
How to Measure Product Market Fit
Learn how to measure product market fit using the Sean Ellis test, Superhuman framework, and survey-based scoring. Includes benchmarks, survey questions, and what to do at each score range.
Customer Loyalty
How to Improve NPS Score
Learn how to improve your Net Promoter Score with a step-by-step process - close the loop with detractors, act on passives, and turn promoters into advocates.
Retention
How to Reduce SaaS Churn
A step-by-step playbook for reducing SaaS churn - identify churn reasons with exit surveys, fix onboarding gaps, segment at-risk users, and build a retention system.
Onboarding
User Onboarding Best Practices
A step-by-step user onboarding playbook for SaaS - measure activation, identify drop-off with surveys, reduce time to first value, and build an onboarding that retains.
Product Strategy
How to Prioritize Product Features
A step-by-step playbook for feature prioritization - collect user feedback systematically, score features by impact and effort, and align your roadmap to your ICP.
Research
How to Do User Research
A practical user research playbook for product teams - when to use surveys vs. interviews, how to write unbiased questions, how to analyze results, and how to act on findings.
Run the surveys from this playbook
Mapster connects every survey response to a real user - plan, role, company size, and activity. Segment your results without a manual data import.
Get Started Free
No credit card required