Survey creation guide

How to Create a Survey: 7 Steps That Get Real Responses

Most surveys fail before the first response. Here's why - and how to fix it.

Making a survey is easy. Building one that gets responses, links every answer to a real user, and produces data you can act on - that requires a specific process. These 7 steps cover the full survey creation workflow: from question design and scale choice to trigger timing and segmentation setup.


The 7 steps to create a survey

In order. Each step depends on the one before it.

01

Define the decision this survey will inform

Before writing a single question, name the specific decision your survey results will drive. "Understand customer satisfaction" is not a decision - it is a category. "Decide whether to rebuild our onboarding flow" is a decision. Your survey question flows directly from this: if you need to know whether onboarding is broken, send CES after onboarding, not a general NPS.

Rule: If you cannot name a specific action you would take based on the results, do not send the survey.

02

Choose the right survey type

Different survey types answer different questions. NPS (0–10): "Do users love us enough to recommend us?" - run quarterly. CSAT (1–5): "Were users satisfied with this specific interaction?" - trigger after support, onboarding, or feature use. CES (1–7): "Was this task easy or hard?" - trigger after high-effort moments. PMF: "Would users miss us if we disappeared?" - run once you have 40+ active users.

Rule: Using the wrong type gives you a real number that answers the wrong question. A quarterly NPS cannot tell you why onboarding is failing.

03

Write one focused question - not five

One rating question per survey. Every question you add reduces completion rate by roughly 10–15%. The optimal structure: one standardized rating question, one optional open-ended follow-up ("What is the main reason for your score?"). That is the whole survey. If you find yourself writing a third question, you are designing two surveys, not one.

Rule: Vague questions produce vague data. "How are we doing overall?" is not a survey question - it is a conversation opener.
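
To make the shape concrete, here is the whole two-question survey as a plain object. This is an illustrative sketch; the field names are not any particular tool's schema.

```typescript
// The entire survey: one required rating question plus one optional
// open-ended follow-up. Field names are illustrative, not a real schema.
const onboardingCsat = {
  rating: {
    question: "How satisfied were you with onboarding?",
    scale: { min: 1, max: 5 },
    required: true,
  },
  followUp: {
    question: "What is the main reason for your score?",
    required: false, // optional free text; never gate submission on it
  },
};
```

If a third question shows up in this object, split it into a second survey with its own trigger.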

04

Use a standardized scale

NPS uses a 0–10 scale. CSAT uses 1–5. CES uses a 1–7 agreement scale ("The company made it easy to handle my issue"). Do not invent custom scales. Standardized scales let you benchmark against industry data, compare results quarter over quarter, and avoid the skewed, incomparable numbers that come from arbitrarily choosing a scale like 1–10 for a CSAT question.

Rule: Changing your scale mid-program breaks comparability with your own historical data. Decide once and stick to it.
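
The standard pairings, sketched as a typed map (the shape and names are illustrative, not a specific tool's API):

```typescript
// Standard type-to-scale pairings. Decide once; changing a scale later
// breaks comparability with your own historical data.
type Scale = { min: number; max: number };

const standardScales: Record<"NPS" | "CSAT" | "CES", Scale> = {
  NPS: { min: 0, max: 10 }, // likelihood to recommend
  CSAT: { min: 1, max: 5 }, // satisfaction with one interaction
  CES: { min: 1, max: 7 },  // agreement: "made it easy to handle my issue"
};
// PMF is the exception: labeled options (Very / Somewhat / Not disappointed),
// not a numeric scale.
```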

05

Choose your delivery method and trigger

In-product surveys triggered immediately after the relevant experience get 20–40% response rates. Email surveys sent in a batch days later get 5–15%. The trigger is as important as the question: after onboarding completion, after support ticket resolution, after first feature use, or after checkout. The survey should fire within minutes of the event - not in a weekly digest.

Rule: If you cannot trigger in-product, send a post-interaction email within 2 hours. After 24 hours, recall accuracy drops sharply.
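
A sketch of what event-based triggering looks like in code, assuming a hypothetical showSurvey call; the function names are placeholders, not a real library's API.

```typescript
// Placeholder for your survey tool's display call; swap in the real API.
declare function showSurvey(surveyId: string, userId: string): void;

// Fire the matching survey within minutes of the event, not in a digest.
function onProductEvent(event: string, userId: string): void {
  switch (event) {
    case "onboarding_completed":
      showSurvey("ces-onboarding", userId); // effort, right after the task
      break;
    case "support_ticket_resolved":
      showSurvey("csat-support", userId); // satisfaction with the interaction
      break;
    case "first_feature_use":
      showSurvey("csat-first-feature", userId);
      break;
  }
}
```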

06

Link every response to a real user

This is the step most survey tools skip - and the one that determines whether your data is actionable. When initializing your survey tool, pass the user's ID and key attributes: plan tier, role, signup date, company size. This means every response is attached to a real person, and you can filter your NPS score to see that enterprise users score 12 while free users score 78 - instead of looking at an average of 42 that tells you nothing.

Rule: Anonymous responses cannot be segmented. If you cannot tie a response to a user, you cannot act on it beyond making generic product changes.
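
In practice this is an identify-style call at initialization. A sketch with a hypothetical SDK; the call names and attribute keys here are assumptions, so check your tool's actual API.

```typescript
// Hypothetical SDK surface; real tools expose something similar
// under different names.
declare const surveySdk: {
  init(publicKey: string): void;
  identify(userId: string, attributes: Record<string, string | number>): void;
};

surveySdk.init("YOUR_PUBLIC_KEY");

// Attach identity plus every attribute you will want to segment on later.
surveySdk.identify("user_123", {
  plan: "enterprise",       // free | paid | enterprise
  role: "admin",            // admin vs. end user
  signupDate: "2024-01-15", // cohort by signup month
  companySize: 250,
});
```

Once this runs, every response carries these attributes automatically; there is no spreadsheet join after the fact.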

07

Set up segmentation before you launch

Decide how you will cut the data before the first response arrives, not after. The most useful segments: plan tier (free vs. paid vs. enterprise), role (admin vs. end user), cohort (signup month), and feature usage (power user vs. occasional). Build these filters into your survey setup so results come pre-segmented - not as a raw export you have to clean in a spreadsheet.

Rule: A survey without segmentation is a vanity metric. The only number that drives action is a segmented one.
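
Here is what the payoff looks like: a sketch that computes NPS per plan tier from identified responses. The types are illustrative; the formula, percent promoters minus percent detractors, is the standard one.

```typescript
interface NpsResponse {
  score: number; // 0-10
  plan: "free" | "paid" | "enterprise"; // attached via identify at send time
}

// NPS per segment: % promoters (9-10) minus % detractors (0-6).
function npsBySegment(responses: NpsResponse[]): Record<string, number> {
  const byPlan = new Map<string, number[]>();
  for (const r of responses) {
    const scores = byPlan.get(r.plan) ?? [];
    scores.push(r.score);
    byPlan.set(r.plan, scores);
  }
  const result: Record<string, number> = {};
  for (const [plan, scores] of byPlan) {
    const promoters = scores.filter((s) => s >= 9).length;
    const detractors = scores.filter((s) => s <= 6).length;
    result[plan] = Math.round(((promoters - detractors) / scores.length) * 100);
  }
  return result;
}
```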

Which survey type should you create?

Step 2 expanded. The type determines the question, scale, and trigger - get this wrong and the rest doesn't matter.

NPS

Net Promoter Score

"How likely are you to recommend us to a friend or colleague?" (0–10)

When: Quarterly, 30 days post-onboarding, 60 days before renewal

Output: Loyalty trend. Identifies detractors (churn risk) and promoters (referral engine).

NPS complete guide →

CSAT

Customer Satisfaction Score

"How satisfied were you with [interaction]?" (1–5)

When: After support, onboarding completion, first feature use

Output: Touchpoint satisfaction. Identifies which specific interactions are failing.

CSAT complete guide →

CES

Customer Effort Score

"The company made it easy to handle my issue." (1–7)

When: After support, setup wizard, checkout, any high-effort task

Output: Friction measurement. High effort predicts churn better than low satisfaction.

CES complete guide →

PMF

Product-Market Fit Survey

"How would you feel if you could no longer use this product?" (Very / Somewhat / Not disappointed)

When: Once you have 40+ active users. Repeat after pivots.

Output: 40%+ "very disappointed" = product-market fit. Below = keep iterating.

PMF complete guide →

5 mistakes that kill survey response rates

Most survey programs fail for the same reasons.

Too many questions

One rating question plus one open-ended follow-up. Every additional question reduces completion rate. If you have five questions, you have five surveys - send them separately at different moments.

Sending too late

Send within minutes of the experience, not in a weekly digest. Recall accuracy drops sharply after 24 hours - users cannot accurately rate a support interaction they had three days ago.

No user identity linking

Anonymous responses cannot be segmented. Pass a user ID and attributes (plan, role) when initializing your survey tool. Without this, you get an average that tells you nothing about which segment to fix.

Wrong survey type for the moment

CES after onboarding, CSAT after support, NPS quarterly. Using NPS to measure whether onboarding is easy produces a loyalty score, not an effort score - and you will draw the wrong conclusions.

Surveying too frequently

One relationship survey per user per quarter. Use frequency caps to prevent the same user being hit by NPS, CSAT, and CES in the same week. Over-surveying causes survey fatigue and trains users to ignore your surveys.
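
A minimal sketch of a per-user frequency cap. The in-memory map and the 30-day default are illustrative assumptions; a real setup would persist this per user and tune the cap to its own cadence.

```typescript
// userId -> timestamp of the last survey this user saw (any type).
const lastSurveyedAt = new Map<string, number>();

const DAY_MS = 24 * 60 * 60 * 1000;

// Global cap: no user sees more than one survey per capDays, whatever the
// type. Relationship surveys (NPS) should also respect their own quarterly
// cadence on top of this.
function canSurvey(userId: string, capDays = 30, now = Date.now()): boolean {
  const last = lastSurveyedAt.get(userId);
  return last === undefined || now - last >= capDays * DAY_MS;
}

function recordSurvey(userId: string, now = Date.now()): void {
  lastSurveyedAt.set(userId, now);
}
```

Check canSurvey before any survey fires, regardless of type; that is what stops the NPS-CSAT-CES pileup in a single week.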


Get started today

Create your first survey in minutes

NPS, CSAT, CES, and PMF surveys - triggered in-product, linked to real users, segmented from the first response. Free to start.

Start Free

No credit card required