Writing survey questions
Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.
Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.
Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.
For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.
There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.
At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.
Measuring change over time
Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.
When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.
The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.
Open- and closed-ended questions
One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.
For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.
When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)
Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based off that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.
When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.
In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.
In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).
Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized, so that the options are not presented in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). Randomization does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
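The two devices described above — full randomization for nominal response options and a 50/50 order reversal for ordinal scales — can be sketched in a few lines of Python. This is an illustrative sketch, not the Center’s actual survey software; the function and variable names are hypothetical:

```python
import random

def present_options(options, ordinal=False, rng=random):
    """Return the response options in the order shown to one respondent.

    Nominal options are fully shuffled; ordinal scales keep their
    underlying order but are reversed for a random half of respondents.
    """
    options = list(options)
    if ordinal:
        if rng.random() < 0.5:   # half the sample sees the reversed scale
            options.reverse()
    else:
        rng.shuffle(options)     # each respondent gets a fresh random order
    return options

# Illustrative item lists patterned on the examples in the text
issues = ["The economy", "The war in Iraq", "Health care",
          "Terrorism", "Energy policy"]
abortion_scale = ["Legal in all cases", "Legal in most cases",
                  "Illegal in most cases", "Illegal in all cases"]

print(present_options(issues))                        # random order
print(present_options(abortion_scale, ordinal=True))  # forward or reversed
```

Because every permutation of the nominal list is equally likely, any order effect is spread evenly across items rather than concentrated on one of them.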
Question wording
The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.
An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.
There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:
First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.
It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.
In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.
Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”
We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two forms of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
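The split-form design described above lends itself to a standard two-proportion z-test: because respondents are randomly assigned to forms, a significant difference between forms can be attributed to the wording change. Below is a minimal sketch of that comparison; it is not the Center’s actual analysis code, and the counts are illustrative, patterned on the Iraq wording example:

```python
from math import sqrt
from statistics import NormalDist

def two_form_comparison(favor_a, n_a, favor_b, n_b):
    """Two-proportion z-test for a split-form wording experiment.

    favor_a/favor_b: number answering 'favor' on each form;
    n_a/n_b: number of respondents assigned to each form.
    """
    p_a, p_b = favor_a / n_a, favor_b / n_b
    pooled = (favor_a + favor_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided test
    return p_a, p_b, z, p_value

# Hypothetical counts patterned on the 2003 Iraq wording experiment
p_a, p_b, z, p = two_form_comparison(favor_a=680, n_a=1000,
                                     favor_b=430, n_b=1000)
print(f"Form A: {p_a:.0%}, Form B: {p_b:.0%}, z = {z:.1f}, p = {p:.4f}")
```

With a gap this large the test is overwhelmingly significant, which is what licenses the conclusion that the wording, not chance, produced the difference.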
One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when an interviewer is present than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced-choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.
One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).
Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.
Question order
Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).
One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.
For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).
An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.
Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.
Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).
Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.
Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).
The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).
A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.
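The routing logic described above — asking demographic items early only when they gate eligibility or control branching — can be sketched as a simple function. The section names here are hypothetical, not actual ATP questionnaire modules:

```python
def route(respondent):
    """Return the ordered list of sections one respondent will see.

    Demographics appear early only when needed for eligibility or
    routing; otherwise they are deferred to the end of the survey.
    """
    sections = ["engaging_opener"]
    # Eligibility screen: only registered voters get the voting module
    if respondent.get("registered_voter"):
        sections.append("voting_module")
    sections += ["core_opinion_questions", "demographics"]
    return sections

print(route({"registered_voter": True}))
# ['engaging_opener', 'voting_module', 'core_opinion_questions', 'demographics']
```

Note that even the eligibility check comes after an engaging opener, and the remaining demographics stay at the end, mirroring the ordering advice above.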
Best practices for writing good survey and poll questions
Getting insightful and actionable answers to your surveys and polls starts with asking the right questions.
If you take the time to write good survey questions, you’ll be well on the path to getting the reliable responses you need to reach your goals. Writing good survey questions isn’t difficult. We’ve created this guide so you can understand best practices and get quick tips to help you create good survey and poll questions—ones that generate useful insights and data.
Using templates that include survey questions can speed up your survey creation process, ensuring you are asking good questions that elicit useful answers from the audiences and demographics you are targeting. You can then analyze and present your survey results in a variety of formats, such as a word cloud, which creates a visual representation of the most common words and phrases in your responses. Success starts with choosing the right types of survey questions.
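A word cloud is built from word frequencies across open-ended responses. The counting step behind it can be sketched in a few lines of Python — an illustrative sketch with a hypothetical stop-word list, not SurveyMonkey’s actual implementation:

```python
import re
from collections import Counter

# Hypothetical minimal stop-word list; a real one would be much longer
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "it", "i", "was"}

def word_frequencies(responses, top_n=10):
    """Tally the most common words across open-ended responses —
    the counts a word cloud sizes its words by."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOP_WORDS]
    return Counter(words).most_common(top_n)

answers = ["The checkout was slow", "Slow shipping",
           "Great checkout experience"]
print(word_frequencies(answers, top_n=3))
```

The most frequent words (here “checkout” and “slow”) surface the themes respondents raise most often, which is what the word cloud then visualizes.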
Open-ended questions ask respondents to add personal comments in the form of a text box, whereas closed-ended questions give respondents a fixed set of options to choose from. These closed-ended response choices can be simple yes/no options, multiple-choice options, Likert rating scales, and more.
Get a deep dive on the difference between open-ended and closed-ended questions, so you can use them with confidence.
Within this guide, you’ll learn how to ask your questions to elicit the most useful responses. To help you write a top-notch questionnaire, we’ll cover:
- Ways to write great survey questions using neutral answer options
- Examples of ensuring your surveys have a balanced set of answer options
- How to avoid asking for two things at once
- Creating good survey questions that are closed-ended
- Writing a survey that uses a diverse set of questions
- How to ensure you’re sending a good survey
How weak questions impact poll results
Good survey questions can help you achieve your goals, but poorly written questions can undermine your efforts and potentially skew your results, especially in single-question polls. Weak questions range from those that confuse respondents to those that are hampered by bias or that lead respondents toward a particular response.
Weak questions can reduce survey participation and make it more difficult to capture reliable data. Relying on straightforward multiple-choice questions can serve as a strong foundation for crafting good survey questions that generate solid data. The bottom line: if you’re launching an online poll that has only one question, you’ve got to get it right.
Pro tip: Use customization features to brand your polls and add credibility for respondents. Designing polls that include your logo, brand colors, or a custom theme ensures the questionnaire is recognizable to your target audience.
7 tips for writing a great survey or poll
Whether you are conducting market research surveys , gathering large amounts of data to analyze, collecting feedback from employees, or running online polls—you can follow these tips to craft a survey, poll, or questionnaire that gets results.
1. Ask more closed-ended questions than open-ended questions
If you are looking for data that is easy to capture and analyze, closed-ended questions can be your ticket to success. Closed-ended questions generate quantitative data that can be used to measure variables, and their answers are objective and conclusive. Another benefit: the data derived from this question type can be presented in very accessible formats, such as graphs and charts, showing the overall percentages of respondents who chose each answer.
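The tallying step that turns raw closed-ended answers into chartable percentages is mechanical, as this short sketch shows (an illustrative example with made-up data, not a SurveyMonkey API):

```python
from collections import Counter

def tally(responses):
    """Convert raw closed-ended answers into percentage shares —
    the form in which results are typically charted."""
    counts = Counter(responses)
    total = len(responses)
    return {option: round(100 * n / total, 1) for option, n in counts.items()}

# Made-up responses to a single closed-ended question
votes = ["Economy"] * 58 + ["Health care"] * 20 + ["Other"] * 22
print(tally(votes))  # {'Economy': 58.0, 'Health care': 20.0, 'Other': 22.0}
```

Open-ended answers, by contrast, must be read, coded into categories, or text-mined before any such summary is possible, which is why they take more effort to analyze.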
Open-ended questions generate qualitative data, which requires more effort and time for respondents to answer compared to closed-ended questions. Qualitative data is often more time-consuming to analyze because it does not generate clear-cut numerical results. When thinking about how to write a great survey, you should consider minimizing the use of open-ended questions. This will also help increase your completion rates: if respondents feel like they have to spend too much time writing in their answers, they’ll leave your survey early.
In general, when writing a survey, you should try to avoid asking more than two open-ended questions per survey or poll. If possible, put them on a separate page at the end. That way, even if a respondent drops out of the survey, you’re able to collect their responses from the questions on previous pages. No doubt, open-ended questions can generate extremely useful insights, but it’s important to be strategic in the ways you use them to get the maximum benefit.
Get more survey guidelines to help you on survey creation.
2. Ensure your survey questions are neutral
Putting an opinion in your question prompt is asking a leading question. This can damage your survey data because it can influence respondents to answer in a way that doesn’t reflect how they really feel. Say you asked the leading question:
“We think our customer service representatives are really awesome. How awesome do you think they are?”
The question seems to convey an opinion that you want respondents to agree with. Do you know whether your respondents actually feel that your customer service representatives are awesome? If you’re looking to get feedback on your customer service representatives, this is a serious problem because you’re not giving respondents the opportunity to disagree with the premise that the reps are awesome.
You can make the tone of your survey question more objective by editing it as follows:
“How helpful or unhelpful do you find our customer service representatives to be?”
Learn more about how to prevent bias from impacting your surveys.
3. Keep a balanced set of answer choices
Respondents need a way to provide honest and thoughtful feedback. Otherwise, the credibility of their responses is at risk.
The answer choices you include can be another potential source of bias. Let’s assume we included the following answer options when asking respondents how helpful or unhelpful our customer service reps are:
- Extremely helpful
- Very helpful
You’ll notice that there isn’t an opportunity for respondents to say that the reps aren’t helpful. Writing good survey questions involves using an objective tone. This means adopting a more balanced set of answer options, like the following:
- Neither helpful nor unhelpful
- Very unhelpful
4. Don’t ask for two things at once
Confusing respondents is just as bad as influencing their answers. In both cases, they’ll choose an answer that doesn’t reflect their true opinions and preferences.
A common culprit in causing confusion is the double-barreled question. It asks respondents to assess two different things at the same time. For example:
“How would you rate our customer service and product reliability?”
Customer service and product reliability are two separate topics. Including both in the same question can push the respondent to either evaluate one or to skip the question altogether. Either way, you will be hard-pressed to get an answer that is useful or relevant. Your products may be extremely reliable, but what is weighing on a respondent’s mind is a recent bad customer service experience.
Fortunately, there’s an easy fix here. Simply separate these two topics into their own closed-ended questions:
- “How would you rate our customer service?”
- “How would you rate our product’s reliability?”
This approach helps you pinpoint problem areas while also getting a clear sense of where you are meeting or exceeding customer expectations.
5. Keep your questions different from each other
Imagine if someone asked you the same question over, and over, and over again. You’d probably get annoyed, right? That’s how respondents may feel if you repeatedly ask questions that use the same question prompt or answer choices. It leads respondents to either leave your survey or engage in straightlining, which is answering your questions without putting much thought into them.
A thoughtless answer can be more damaging than no answer at all, as it does not represent the true feelings of the respondent. You can proactively address this by varying the types of questions you ask, how you ask them, and by spacing out questions that look similar. Using one of our expert-written survey templates can help you present a variety of questions posed in different ways to avoid this pitfall.
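If your survey includes blocks of similar matrix-style questions, you can also screen the collected results for straightlining after the fact. Here’s a minimal Python sketch of the idea; the function and data are hypothetical illustrations, not part of any survey platform’s API:

```python
# Hypothetical sketch: flag respondents who pick the identical answer
# across a block of similar rating questions (a common straightlining signal).

def is_straightliner(answers, min_questions=4):
    """True when every answer in a sufficiently long question block is identical."""
    return len(answers) >= min_questions and len(set(answers)) == 1

# Illustrative responses to a five-question rating grid, coded 1-5.
responses = {
    "r1": [4, 4, 4, 4, 4],  # same choice every time -> suspect
    "r2": [4, 2, 5, 3, 4],  # varied answers -> looks considered
}

flagged = [rid for rid, ans in responses.items() if is_straightliner(ans)]
print(flagged)  # ['r1']
```

A flagged respondent isn’t necessarily careless (someone may genuinely feel the same about every item), so treat a flag as a prompt for review rather than automatic exclusion.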
6. Let most of your questions be optional to answer
Respondents may not know the answers to all of your questions. And there may be some questions they simply don’t feel comfortable answering. But you still want them to take the survey and provide valuable feedback.
Keep both of these things in mind when deciding which questions to require answers to. And when you’re unsure whether to make a certain question optional or required, lean toward making it optional. We’ve found that forcing respondents to answer your questions makes them more likely to quit your survey or select an answer at random.
7. Do a test drive
As a survey creator, there’s no worse feeling than finding mistakes in your survey once it’s already been sent to respondents. In some instances, this may require you to scrap the survey altogether and start anew. Another option might be to send a revised survey, but this can reduce trust and participation among respondents, and can create a scenario in which some people complete the original survey while others respond to the revised version.
Prevent this situation from happening to you by sharing your survey in advance with colleagues, friends, and anyone else who can be a fresh set of eyes for you. An objective reviewer’s opinion can be all it takes to spot mistakes in your survey. Having others review the survey can also weed out any potential bias or wording that might be offensive or off-putting to a particular demographic.
Get your surveys in their best shape when you work on them as a team. Our collaboration tools let you review, send, and analyze surveys asynchronously.
Bonus: Writing poll questions for Zoom
You can make the most of your Zoom video conferences by adding poll questions to engage participants and capture valuable feedback. You can conduct icebreaker polls to get your audience quickly engaged, as well as multiple choice questions and quizzes.
An icebreaker poll is a simple, fun question that helps get people engaged from the start of your meeting. Icebreaker polls often feature “What’s your favorite …” questions about foods, activities, movies, etc.
You can also give participants options to help guide the meeting, asking them “Would you rather … ?” and then provide some choices. The key to writing good Zoom poll questions is to keep the questions brief and snappy. Don’t go overboard on polling, but include enough within your call to create an interactive environment and gather information that can be useful to you moving forward.
Ultimately, polling is a simple yet powerful way to gather attendee sentiment, and give everyone an equal voice. As responses come in, they will be displayed within Zoom chat so you can gauge their experiences instantly.
Learn more about how you can pair SurveyMonkey with Zoom to lead more productive virtual meetings.
Gain confidence writing good survey questions
Writing a good survey means asking questions in a way that lets respondents answer truthfully. At the same time, it means providing respondents with a quick and easy survey-taking experience. The better your surveys get, the better your responses become.
Explore our resources for creating and analyzing surveys, no matter who you’re trying to reach.
Survey design your respondents will love.
When your survey meets only your needs and not those of your respondents, it can be a frustrating experience. Here’s how you can design surveys to be enjoyable to answer, and improve response rates.
What is survey design?
Businesses use surveys for a whole lot of reasons, such as gauging customer satisfaction, employee engagement or the reach of a new marketing campaign.
A survey needs to be carefully tailored to:
- engage respondents
- motivate them to finish it
- deliver valuable feedback data
Good survey design is key to these three things. It’s the process of deciding what a survey wants to achieve, devising suitable questions, and formatting an attractive style and branding, before it goes out to respondents.
Why is good survey design important?
Creating surveys is as much an art as it is a science. It involves attention to detail in its design and flow. And a survey is only as good as the responses it gets. If it looks confusing, or long and boring to your target audience, they probably won’t complete it. Or, worse, won’t even bother to start it in the first place.
So, taking the time to design your survey well is an investment that will give you a decent completion rate and good quality data.
What does good survey design look like?
A good survey will have a clear purpose set out right at the beginning. Questions will be easy to read, understand and respond to, encouraging a variety of answers – closed-ended (tick boxes, selection) and open-ended (boxes for open text comments), arranged in a logical order. You can use images to enhance or clarify questions, and company branding gives a sense of identity and trust. And a survey mustn’t take too long – no more than 15 minutes.
Let’s look at these in more detail:
1. Set your goal
Right from the very start, ask yourself: “What is the purpose of my survey? What are my main objectives?” and set an attainable goal. For example, one department has a high staff turnover after two years’ service, and you want to know why. Rather than a generic employee satisfaction survey, make your inquiries more specific: Is there career progression beyond two years? Is there sufficient industry training? Can people stand a culture or individual for only so long? Make the goal clear at the top of the survey, perhaps: The purpose of this survey is to understand why people leave our company.
Tip: Unless respondents know what their survey information is going to be used for, they won’t be able to give meaningful responses.
2. Choose your survey question types
There are more than 100 different ways to ask a question, and the type of question has a direct impact on the survey results.
Closed-ended questions have pre-populated answers to choose from, such as multiple choice or tick boxes. They are simple to answer and provide quantitative data.
Open-ended questions, also known as open text, ask for feedback in the respondent’s own words. They provide qualitative data. They can yield the richest insight, but they also lead to survey fatigue faster, so limit how many you use.
These are the most common question types:
Closed-ended question types
- Multiple choice questions form the basis of most research. They can be displayed as a traditional list of choices or as a dropdown menu, tick box, radio buttons, etc.
- Multi-select is used when you want participants to select more than one answer from a list.
- Rank order questions are used to determine a respondent’s order of preference for a list of items. They’re best used when you want to measure how respondents prioritize options, though they can be time-consuming to answer.
- Rating scale questions ask respondents to indicate their personal level of agreement, satisfaction, or frequency.
- Scale questions. Rather than asking respondents a basic yes or no question, use scales that measure both the direction and the intensity of opinions.
This is critical for survey research. Someone who “strongly supports” a decision is very different from someone who “slightly supports” it. Scales extend the power of analysis from basic percentages to high-level analyses based on means and variance estimates (like t-test , ANOVA , regression, and the like).
Scales make it easier for respondents to answer and for you to conduct your analysis. If scales have the same scale of points, you can quickly compare responses to different questions.
- Matrix Tables are used to collect multiple pieces of information in one question. They provide an effective way to condense your survey or to group similar items into one question. An example is the Likert scale, which is useful for measuring more nuanced attitudes and opinions than a simple yes/no. It’s a reliable way to measure perceptions, opinions and behaviors. For example:
How satisfied or dissatisfied are you with our new office?
- Sliders let respondents indicate their level of preference with a draggable bar rather than a traditional button or checkbox.
- Side-by-side questions let you ask multiple questions in one condensed table and provide an effective way of shortening your survey while collecting the same amount of data.
- Personal/demographic questions – leave these to the end of your survey, putting the more friendly, engaging questions first.
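To make the earlier point about scale questions concrete: once answer choices are coded as numbers, you can summarize a question by its mean and variance instead of raw percentages, and compare questions that share the same scale. A small Python sketch using only the standard library (the scale coding and responses below are invented for illustration):

```python
# Illustration: numeric coding of a 5-point helpfulness scale lets you
# compare two questions by mean and variance, not just percentages.
from statistics import mean, pvariance

SCALE = {
    "Very unhelpful": 1,
    "Somewhat unhelpful": 2,
    "Neither helpful nor unhelpful": 3,
    "Somewhat helpful": 4,
    "Very helpful": 5,
}

q1 = ["Very helpful", "Somewhat helpful", "Very helpful",
      "Neither helpful nor unhelpful"]
q2 = ["Somewhat unhelpful", "Somewhat helpful",
      "Neither helpful nor unhelpful", "Somewhat helpful"]

for name, answers in [("Q1", q1), ("Q2", q2)]:
    coded = [SCALE[a] for a in answers]
    print(name, "mean:", mean(coded), "variance:", pvariance(coded))
```

Because both questions use the same five-point coding, their means are directly comparable, which is exactly the benefit the scale advice above points at.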
Open-ended question types
Text entry is used to gather open-ended feedback from respondents. These responses can be lengthy essays, standard form information such as name and email address, or anything in between.
Tip: Open text questions take longer to answer, so limit them to one or two, at or near the end of your survey.
3. Write the survey questions
There’s a simple approach to this – write the questions that you yourself would want to answer. Chances are, these questions will follow these guidelines:
- Keep them short and easy to understand. Use plain English for your question wording so that the respondent understands them on first reading. Avoid unnecessary adjectives and vague adverbs such as ‘frequently’ and ‘usually’ so there’s no chance that the question will mean different things to different people.
Tip: Write so that school students between 9th and 11th grade would understand the questions.
- Speak your respondents’ language. Asking about caloric content, bits, bytes, and other industry-specific jargon and acronyms is confusing. Make sure your audience understands your language level and terminology and, above all, that they understand what you are asking. The best move is to write to your least-informed respondent. If a respondent won’t understand an acronym, either define it or don’t use it.
- Explain unexpected questions. For instance, if it’s important for you to ask toy store customers what time of the day they usually visit the store, briefly explain why that’s relevant.
- Don’t use leading questions. These introduce bias into your survey that influences respondents’ selections. Bias skews useful data. Compare:
Did you enjoy our delicious new sandwich range? (biased)
What did you think of our new sandwich range? (unbiased)
- Avoid double-barreled questions. These ask two questions at once, and are impossible to answer usefully, e.g.:
How satisfied or dissatisfied were you with your hotel suite and breakfast on your recent stay with us?
Better to ask: a. How satisfied or dissatisfied were you with your hotel suite…? b. How satisfied or dissatisfied were you with your breakfast…?
- Avoid prestige bias. This is when you associate your question with a person or group that enjoys prestige, and it nudges the respondent to agree, e.g.:
Most doctors say we should include more whole grains in our diet. Do you agree?
It’s hard to disagree with a doctor.
- Don’t use absolutes such as ‘always’, ‘every’ and ‘all’. These force your respondent to either agree or disagree, with no room for nuanced, informative answers. For example:
Do you always eat breakfast cereal? a. Yes b. No
Someone who eats breakfast cereal six mornings out of seven would be forced to answer ‘no’. Better data comes from:
How many times a week do you eat breakfast cereal? a. None b. 1–2 times c. 3–4 times d. 5–6 times e. Every day
- Don’t mix your question types. Look at this example:
During your most recent visit to our store, how satisfied were you with the cleanliness of our restrooms? a. Strongly agree b. Agree c. Neutral d. Disagree e. Strongly disagree
Awkward, isn’t it? If it says ‘satisfied’ in the question, you’ll need to have ‘satisfied’ in the answer.
- Justify requests for sensitive information. This is particularly true with any information about ethnicity, family status, income, education, etc. You can defuse respondents’ concerns by telling them how the information is going to be used. For instance, you can explain that purchasing habits will only be analyzed in aggregate for benchmarking purposes or that survey results will not be shared outside your organization.
Tip: Don’t forget to include a PNA (‘Prefer not to answer’) option.
Learn more about writing survey questions in our ultimate guide.
4. Decide your question sequence
Make your survey easier for respondents by keeping questions in their logical order and avoiding changing topics unnecessarily. The funnel approach is a survey flow technique that makes the respondent’s job easier. This approach suggests that you:
- Follow a logical order
- Start with broad and general questions that qualify the respondent and introduce the topic
- Move into more specific questions
- Finish with general, easy-to-answer questions (like demographics)
This approach allows respondents to warm up with broad and general questions, work into more specific and in-depth questions, and cool down at the end. This turns the survey into a smooth road for respondents, which decreases drop-out rates and may even increase the quality of answers you receive.
Read on about perfecting your survey question sequence here.
5. Design the look of your survey
Use images where appropriate.
Humans usually respond more favorably to images than to text, and often understand concepts more quickly with an image or an icon. Breaking up long blocks of text with images, header bars, and icons makes reading your survey more pleasant. Your software platform should offer question types that include graphics such as smile/frown faces, graphic sliders, and the capability to upload your own images or video. Images should add meaningful context to a question but should not bias responses by making one option seem more attractive than another.
Include your branding
The more you include your branding in all consumer interactions, the more consumers will trust your brand. This extends to surveys. They are more impactful when they contain your logo, color palette, custom theme and brand-associative images. Whether customers or employees, your respondents will feel more connection with a branded survey, and be more inclined to answer the questions thoughtfully.
Tip: Other ways you can extend your branding with a survey include:
- Adding a custom ‘thank you’ to the end of your survey that pops up after completion
- Adding a custom url for your survey link
- Redirecting the respondents to your website when they’ve completed the survey
6. Test your survey design
At its simplest, ask 5 people who match your sample to take your survey, then ask them:
- How long did it take?
- Were any questions confusing?
- Were there any other problems while taking the survey?
This allows you to quickly correct lingering problems before distribution.
We recommend running your surveys through a series of tests to check for potential problems and to make sure you get the data you want. Different strategies include: respondent debriefing, cognitive interviewing, expert evaluation, focus groups, experiments and pilot studies.
Find out about 6 survey pretest methods to consider.
28 Questionnaire Examples, Questions, & Templates to Survey Your Clients
Published: May 23, 2022
The adage "the customer is always right" has received some pushback in recent years, but when it comes to conducting surveys , the phrase is worth a deeper look. In the past, representatives were tasked with solving client problems as they happened. Now, they have to be proactive by solving problems before they come up.
Salesforce found that 63% of customers expect companies to anticipate their needs before they ask for help. But how can a customer service team recognize these customer needs in advance and effectively solve them on a day-to-day basis?
A customer questionnaire is a tried-and-true method for collecting survey data to inform your customer service strategy. By hearing directly from the customer, you'll capture first-hand data about how well your service team meets their needs. In this article, you'll get free questionnaire templates and best practices on how to administer them for the most honest responses.
Table of Contents:
- Survey vs. Questionnaire
- Questionnaire Templates
- Examples of Good Survey Questions
- Questionnaire Examples
- How to Make a Questionnaire
A questionnaire is a research tool used to conduct surveys. It includes specific questions with the goal of understanding a topic from the respondents' point of view. Questionnaires typically have closed-ended, open-ended, short-form, and long-form questions.
The questions should always remain as unbiased as possible. For instance, it's unwise to ask for feedback on a specific product or service that’s still in the ideation phase. To complete the questionnaire, the customer would have to imagine how they might experience the product or service rather than sharing their opinion about their actual experience with it.
Ask broad questions about the kinds of qualities and features your customers enjoy in your products or services and incorporate that feedback into new offerings your team is developing.
What makes a good questionnaire?
A good questionnaire seeks to determine what you need to know versus what you merely want to know. It should be valuable and written from the respondent’s point of view. It should also be specific to the topic and use open-ended, long-form, or short-form questions as appropriate. Questionnaires should be concise and simple while still capturing the respondent’s experience with your business.
In-Depth Interviews vs. Questionnaires
Questionnaires can be a more feasible and efficient research method than in-depth interviews. They are a lot cheaper to conduct. That’s because in-depth interviews can require you to compensate the interviewees for their time and provide accommodations and travel reimbursement.
Questionnaires also save time for both parties. Customers can quickly complete them on their own time, and employees of your company don't have to spend time conducting the interviews. They can capture a larger audience than in-depth interviews, making them much more cost-effective.
It would be impossible for a large company to interview tens of thousands of customers in person. The same company could potentially get feedback from their entire customer base using an online questionnaire.
When considering your current products and services (as well as ideas for new products and services), it's essential to get the feedback of the existing and potential customers. They are the ones who have a say in purchasing decisions.
A questionnaire is a tool that’s used to conduct a survey. A survey is the process of gathering, sampling, analyzing, and interpreting data from a group of people.
The confusion between these terms most likely stems from the fact that questionnaires and data analysis were treated as very separate processes before the internet became popular. Questionnaires used to be completed on paper, and data analysis occurred later as a separate process. Nowadays, these processes are typically combined since online survey tools allow questionnaire responses to be analyzed and aggregated all in one step.
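As a rough sketch of what that single combined step does behind the scenes, here’s a short Python example that tallies closed-ended answers into counts and percentages (the response data is made up for illustration):

```python
# Minimal illustration of aggregating closed-ended questionnaire answers,
# the kind of tally online survey tools produce automatically.
from collections import Counter

answers = ["Yes", "No", "Yes", "Yes", "No", "Yes"]  # invented sample data
counts = Counter(answers)
total = len(answers)

for choice, n in counts.most_common():
    print(f"{choice}: {n} ({100 * n / total:.0f}%)")
# Yes: 4 (67%)
# No: 2 (33%)
```

Real survey tools layer filtering, cross-tabulation, and charting on top, but the core step is this same count-and-percentage aggregation.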
However, questionnaires can still be used for reasons other than data analysis. Job applications and medical history forms are examples of questionnaires that have no intention of being statistically analyzed. The key difference between questionnaires and surveys is that a questionnaire can exist with or without a survey: it may feed a broader survey process, or stand alone.
Below are some of the best free questionnaire templates you can download to gather data that informs your next product or service offering.
What makes a good survey question?
To make a good survey question, you have to choose the right type of questions to use. Include concise, clear, and appropriate questions with answer choices that won’t confuse the respondent and will clearly provide data on their experience.
A good survey is built on good questions that give a business good data to examine. A good survey has:
- A goal in mind
- Clear and distinct answers and questions
- Separate questions
1. A Goal in Mind
To make a good survey, consider what you are trying to learn from it. Understanding why you need to do a survey will help formulate clear and concise questions that need to be asked to complete your goal. The more your questions focus on one or two objectives, the better your data will be.
2. Clear and Distinct Answers and Questions
You have a goal in mind for your survey. Now you have to write the questions and answers depending on the form you’re using.
For instance, if you’re using ranks or multiple choice options in your survey, be clear. Here’s an example of a poor and a good multiple choice question:

Poor: California:
- Contains the tallest mountain in the United States.
- Has an eagle on its state flag.
- Is the second largest state in terms of area.
- Was the location of the Gold Rush of 1849.

Good: What is the main reason so many people moved to California in 1849?
- California land was fertile, plentiful, and inexpensive.
- Gold was discovered in central California.
- The east was preparing for a civil war.
- They wanted to establish religious settlements.
In the poor example, the respondent would be confused about what is being asked. The survey didn’t fully explain the question, and the options are also confusing. Whereas in the good example, the question doesn’t confuse the respondent, and they know how to answer. Always make sure answers and questions are clear and distinct to give the respondent the best outcome when completing the survey.
3. Separate questions
A good survey asks one question at a time. For example, a bad survey question would read, “What is your favorite sneaker and clothing apparel brand?” This is bad because you’re asking two questions at once, not separately. Each question should focus on getting specific pieces of information.
By asking two questions simultaneously, you may confuse your respondents and get unclear answers. Instead, ask, “What is your favorite sneaker brand?” then, “What is your favorite clothing apparel brand?” By separating the questions, you allow your respondents to give separate and precise answers.
1. Free HubSpot Questionnaire Template
HubSpot offers a variety of free customer surveys and questionnaire templates to analyze and measure customer experience. Choose from five templates: net promoter score, customer satisfaction, customer effort, open-ended questions, and long-form customer surveys.
2. Client Questionnaire Template
It's a good idea to gauge your clients' experiences with your business to uncover opportunities to improve your offerings. That will, in turn, better suit their lifestyles. You don't have to wait for an entire year to pass before polling your customer base about their experience either. A simple client questionnaire, like the one below, can be administered as a micro survey several times throughout the year. These types of quick survey questions work well to retarget your existing customers through social media polls and paid interactive ads.
1. How much time do you spend using [product or service]?
- Less than a minute
- About 1 - 2 minutes
- Between 2 and 5 minutes
- More than 5 minutes
2. In the last month, what has been your biggest pain point?
- Finding enough time for important tasks
- Delegating work
- Having enough to do
3. What's your biggest priority right now?
- Finding a faster way to work
- Staff development
3. Website Questionnaire Template
Whether you just launched a brand new website or you're gathering data points to inform a redesign, you'll find customer feedback to be essential in both processes. A website questionnaire template will come in handy to collect this information using an unbiased method.
1. How many times have you visited [website] in the past month?
- More than once
2. What is the primary reason for your visit to [website]?
- To make a purchase
- To find more information before making a purchase in-store
- To contact customer service
3. Are you able to find what you're looking for on the website homepage?
4. Customer Satisfaction Questionnaire Template
If you've never surveyed your customers and are looking for a template to get started, this one includes some basic customer satisfaction questions. These will apply to just about any customer your business serves.
1. How likely are you to recommend us to family, friends, or colleagues?
- Extremely unlikely
- Somewhat unlikely
- Somewhat likely
- Extremely likely
2. How satisfied were you with your experience?
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
3. Rank the following items in terms of their priority to your purchasing process.
- Helpful staff
- Quality of product
- Price of product
- Ease of purchase
- Proximity of store
- Online accessibility
- Current need
- Appearance of product
- Family member
- On behalf of a business
5. Please rate our staff on the following terms:
- Friendly __ __ __ __ __ Hostile
- Helpful __ __ __ __ __ Useless
- Knowledgeable __ __ __ __ __ Inexperienced
- Professional __ __ __ __ __ Inappropriate
6. Would you purchase from our company again?
7. How can we improve your experience for the future?
5. Customer Effort Score Questionnaire Template
The following template gives an example of a brief customer effort score (CES) questionnaire. This free template works well for new customers to measure their initial reaction to your business.
1. What was the ease of your experience with our company?
- Extremely difficult
- Somewhat difficult
- Somewhat easy
- Extremely easy
2. The company did everything they could to make my process as easy as possible.
- Strongly disagree
- Somewhat disagree
- Somewhat agree
- Strongly agree
3. On a scale of 1 to 10 (1 being "extremely quickly" and 10 being "extremely slowly"), how fast were you able to solve your problem?
4. How much effort did you have to put forth while working with our company?
- Much more than expected
- Somewhat more than expected
- As much as expected
- Somewhat less than expected
- Much less than expected
6. Demographic Questionnaire Template
Here's a template for surveying customers to learn more about their demographic background. You could substantiate the analysis of this questionnaire by corroborating the data with other information from your web analytics, internal customer data, and industry data.
1. How would you describe your employment status?
2. How many employees work at your company?
3. How would you classify your role?
4. How would you classify your industry?
Good Survey Questions
- What is your favorite product?
- Why did you purchase this product?
- How satisfied are you with [product]?
- Would you recommend [product] to a friend?
- Would you recommend [company name] to a friend?
- If you could change one thing about [product], what would it be?
- Which other options were you considering before [product or company name]?
- Did [product] help you accomplish your goal?
- How would you feel if we did not offer this product, feature, or service?
- What would you miss the most if you couldn't use your favorite product from us?
- What is one word that best describes your experience using our product?
- What's the primary reason for canceling your account?
- How satisfied are you with our customer support?
- Did we answer all of your questions and concerns?
- How can we be more helpful?
- What additional features would you like to see in this product?
- Are we meeting your expectations?
- How satisfied are you with your experience?
1. "What is your favorite product?"
This question is a great starter for your survey. Most companies want to know what their most popular products are, and this question cuts right to the point.
It's important to note that this question provides you with the customer's perspective, not empirical evidence. You should compare the results to your inventory to see if your customers' answers match your actual sales. You may be surprised to find your customers' "favorite" product isn't the highest-selling one.
2. "Why did you purchase this product?"
Once you know their favorite product, you need to understand why they like it so much. The qualitative data will help your marketing and sales teams attract and engage customers. They'll know which features to advertise most and can seek out new leads similar to your existing customers.
3. "How satisfied are you with [product]?"
When you have a product that isn't selling, you can ask this question to see why customers are unhappy with it. If the reviews are poor, you'll know that the product needs reworking, and you can send it back to product management for improvement. Or, if these results are positive, they may have something to do with your marketing or sales techniques. You can then gather more info during the questionnaire and re-strategize your campaigns based on your findings.
4. "Would you recommend [product] to a friend?"
This is a classic survey question used with most NPS® surveys. It asks the customer if they would recommend your product to one of their peers. This is extremely important because most people trust customer referrals more than traditional advertising. So, if your customers are willing to recommend your products, you'll have an easier time acquiring new leads.
5. "Would you recommend [company name] to a friend?"
Similar to the question above, this one asks the customer to consider your business as a whole and not just your product. This gives you insight into your brand's reputation and shows how customers feel about your company's actions. Even if you have an excellent product, your brand's reputation may be the cause of customer churn. Your marketing team should pay close attention to this question to see how they can improve the customer experience.
6. "If you could change one thing about [product], what would it be?"
This is a good question to ask your most loyal customers or ones who have recently churned. For loyal customers, you want to keep adding value to their experience. Asking how your product can improve helps your development team identify flaws and increases your chances of retaining a valuable customer segment.
For customers who have recently churned, this question provides insight into how you can retain future users who are unhappy with your product or service. By giving these customers a space to voice their criticisms, you can either reach out and provide solutions or relay feedback for consideration.
7. "Which other options were you considering before [product or company name]?"
If you're operating in a competitive industry, customers will have more than one option when considering your brand. Additionally, if you sell variations of your product or produce new models periodically, customers may prefer one version over another.
For this question, you should provide answers to choose from in a multiple-selection format. This will limit the types of responses you'll receive and help you obtain the exact information you need.
8. "Did [product] help you accomplish your goal?"
The purpose of any product or service is to help customers accomplish a goal. Therefore, you should be direct and ask them if your company steered them toward success. After all, customer success is an excellent retention tool. If customers are succeeding with your product, they're more likely to remain loyal to your brand.
9. "How would you feel if we did not offer this product, feature, or service?"
Thinking about discontinuing a product? This question can help you decide whether a specific product, service, or feature would be missed if you were to remove it.
Even if you know that a product or service isn't worth offering, it's important to ask this question anyway because there may be a certain aspect of the product that your customers like. They'll be delighted if you can integrate that feature into a new product or service.
10. "If you couldn't use your favorite product from us, what would you miss the most about it?"
This question pairs well with the one above because it frames the customer's favorite product from a different point of view. Instead of describing why they love a particular product, the customer can explain what they'd be missing if they didn't have it at all. This type of question uncovers "fear of loss," which can be a very different motivating factor than "hope for gain."
11. "What word best describes your experience using our product?"
Your marketing team will love this question. A single word or a short phrase can easily sum up your customers’ emotions when they experience your company, product, or brand. Those emotions can be translated into relatable marketing campaigns that use your customers’ exact language.
If the responses reveal negative emotions, it's likely that your entire customer service team can relate to that pain point. Rather than calling it "a bug in the system," you can describe the problem as a "frustrating roadblock" to keep their experience at the forefront of the solution.
12. "What's the primary reason for canceling your account?"
Finding out why customers are unhappy with your product or service is key to decreasing your churn rate. If you don't understand why people leave your brand, it's hard to make effective changes to prevent future turnover. Or worse, you might alter your product or service in a way that increases your churn rate, causing you to lose customers who were once loyal supporters.
13. "How satisfied are you with our customer support?"
It's worth asking customers how happy they are with your support or service team. After all, an excellent product doesn't always guarantee that customers will remain loyal to your brand. Research shows that one in three customers will leave a brand they love after just one poor service experience.
14. "Did we answer all of your questions and concerns?"
This is a good question to ask after a service experience. It shows how thorough your support team is and whether or not they're prioritizing speed too much over quality. If customers still have questions and concerns after a service interaction, your support team is focusing too much on closing tickets and not enough on meeting customer needs.
15. "How can we be more helpful?"
Sometimes it's easier to be direct and simply ask customers what else you can do to help them. This shows a genuine interest in your buyers' goals, which helps your brand foster meaningful relationships with its customer base. The more you can show that you sincerely care about your customers' problems, the more they'll open up to you and be honest about how you can help them.
16. "What additional features would you like to see in this product?"
With this question, your team can get inspiration for the company's next product launch. Think of the responses as a wish list from your customers. You can discover what features are most valuable to them and whether they already exist within a competitor's product.
Incorporating every feature suggestion is nearly impossible, but it's a convenient way to build a backlog of ideas that can inspire future product releases.
17. "Are we meeting your expectations?"
This is an important question to ask because customers won't always tell you when they're unhappy with your service. Rather than asking to speak with a manager, most will quietly move on to a competitor instead of broadcasting their unhappiness to your company. To prevent this type of customer churn, you need to be proactive and ask customers whether your brand is meeting their expectations.
18. "How satisfied are you with your experience?"
This question asks the customer to summarize their experience with your business. It gives you a snapshot of how the customer is feeling in that moment and their perception of your brand. Asking this question at the right stage in the customer's journey can tell you a lot about what your company is doing well and where you can stand to improve.
Here are a few more types of questions you can use in your questionnaire to collect different types of data.
1. Multiple Choice
Multiple-choice questions offer respondents several options of answers to choose from. This is a popular choice of questionnaire formats since it's simple for people to fill out and for companies to analyze. Multiple-choice questions can be in single-answer form (respondents can only select one response) or multiple-answer form (respondents can select as many responses as necessary).
2. Rating Scale
Rating scale questions offer a scale of numbers (typically one to 10) and ask respondents to rate various items based on the sentiments assigned to that scale. This is effective when assessing customer satisfaction.
3. Likert Scale
Likert scale questions assess whether or not a respondent agrees with the statement, as well as the extent to which they agree or disagree. These questions typically offer five or seven responses, with sentiments ranging from items such as "strongly disagree" to "strongly agree."
4. Open-Ended
Open-ended questions ask a broader question or possibly elaboration on a particular response to one of the closed-ended questions above. They are accompanied by a text box that leaves room for respondents to write freely. This is particularly important when asking customers to expand on an experience or recommendation.
3. Keep it brief, when possible.
Most questionnaires don't need to be longer than a page. For routine customer satisfaction surveys, it's unnecessary to ask 50 slightly varied questions about a customer's experience when those questions could be combined into 10 solid questions.
The shorter your questionnaire is, the more likely a customer will complete it. In addition, a shorter questionnaire means less data for your team to collect and analyze. Based on the feedback, it will be a lot easier for you to get the information you need to make the necessary changes in your organization and products.
4. Choose a simple visual design.
There's no need to make your questionnaire a stunning work of art. As long as it's clear and concise, it will be attractive to customers. When asking questions that are important to furthering your company, it's best to keep things simple. Select a font that’s common and easy to read, like Helvetica or Arial. Use a text size that customers of all abilities can navigate.
A questionnaire is most effective when all the questions are visible on a single screen. The layout is important. If a questionnaire is even remotely difficult to navigate, your response rate could suffer. Ensure that buttons and checkboxes are easy to click and that questions are visible on both computer and mobile screens.
5. Use a clear research process.
Before planning questions for your questionnaire, you'll need to have a definite direction for it. A questionnaire is only effective if the results answer an overarching research question. After all, the research process is an important part of the survey, and a questionnaire is a tool that's used within the process.
In your research process, you should first come up with a research question. What are you trying to find out? What's the point of this questionnaire? Keep this in mind throughout the process.
After coming up with a research question, it's a good idea to have a hypothesis. What do you predict the results will be for your questionnaire? This can be structured in a simple "If … then …" format. A structured experiment — yes, your questionnaire is a type of experiment — will ensure that you're only collecting and analyzing data necessary to answer your research question. Then, you can move forward with your survey.
6. Create questions with straightforward, unbiased language.
When crafting your questions, it's important to structure them to get the point across. You don't want any confusion for your customers because this may influence their answers. Instead, use clear language. Don't use unnecessary jargon, and use simple terms in favor of longer-winded ones.
You may risk the reliability of your data if you try to combine two questions. Rather than asking, "How was your experience shopping with us, and would you recommend us to others?" split it into two separate questions. Customers will be clear on your question and choose a response most appropriate for each one.
Additionally, you should always keep the language in your questions unbiased. You never want to sway customers one way or another because this will cause your data to be skewed. Instead of asking, "Some might say that we create the best software products in the world. Would you agree or disagree?" it may be better to ask, "How would you rate our software products on a scale of 1 to 10?" This removes any bias and ensures that all of the responses are valid.
7. Ask only the most important questions.
When creating your questionnaire, keep in mind that time is one of the most valuable commodities for customers. Most aren't going to sit through a 50-question survey, especially when they're being asked about products or services they didn't use. Even if they do complete it, most of these will be half-hearted responses from fatigued customers who simply want to be finished with it.
Whether your questionnaire has five questions or 55, make sure each has a specific purpose. Individually, they should be aimed at collecting certain pieces of information that reveal new insights into different aspects of your business. If your questions are irrelevant or seem out of place, your customers will be easily derailed by the survey. And, once the customer has lost interest, it'll be difficult to regain their focus.
8. Ask one question at a time.
Since every question has a purpose, ask them one at a time. This lets the customer focus and encourages them to provide a thoughtful response. This is particularly important for open-ended questions where customers need to describe an experience or opinion.
By grouping questions together, you risk overwhelming busy customers who don't have time for a long survey. They may think you're asking them too much, or they might see your questionnaire as a daunting task. You want your survey to appear as painless as possible. Keeping your questions separated will make it more user-friendly.
9. Order your questions logically.
A good questionnaire is like a good book. The beginning questions should lay the framework, the middle ones should cut to the core issues, and the final questions should tie up all loose ends. This flow keeps customers engaged throughout the entire survey.
When creating your questionnaire, start with the most basic questions about demographics. You can use this information to segment your customer base and create different buyer personas.
Next, add in your product and services questions. These are the ones that provide insights into common customer roadblocks and where you can improve your business's offerings. Questions like these guide your product development and marketing teams as they look for new ways to enhance the customer experience.
Finally, you should conclude your questionnaire with open-ended questions to understand the customer journey. These questions let customers voice their opinions and point out specific experiences they've had with your brand.
10. Consider your target audience.
Whenever you collect customer feedback, you need to keep in mind the goals and needs of your target audience. After all, the participants in this questionnaire are your active customers. Your questions should be geared towards the interests and experiences they've already had with your company.
You can even create multiple surveys that target different buyer personas. For example, if you have a subscription-based pricing model, you can personalize your questionnaire for each type of subscription your company offers.
11. Test your questionnaire.
Once your questionnaire is complete, it's important to test it. If you don't, you may end up asking the wrong questions and collecting irrelevant or inaccurate information. Start by giving your employees the questionnaire to test, then send it to small groups of customers and analyze the results. If you're gathering the data you're looking for, then you should release the questionnaire to all of your customers.
How Questionnaires Can Benefit Your Customer Service Strategy
Whether you have one customer or 1000 customers, their opinions matter when it comes to the success of your business. Their satisfaction with your offerings can reveal how well or how poorly your customer service strategy and business are meeting their needs. A questionnaire is one of the most powerful, cost-effective tools to uncover what your customers think about your business. When analyzed properly, it can inform your product and service launches.
Use the free questionnaire templates, examples, and best practices in this guide to conduct your next customer feedback survey.
Now that you know the slight difference between a survey and a questionnaire, it's time to put it into practice with your products or services. Remember, a good survey and questionnaire always start with a purpose. A great one yields data you can actually use to improve how customers respond to your products or services.
Net Promoter, Net Promoter System, Net Promoter Score, NPS, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Fred Reichheld, and Satmetrix Systems, Inc.
Editor's note: This post was originally published in July 2018 and has been updated for comprehensiveness.
Chapter 9: Survey Research
Constructing Survey Questionnaires
- Describe the cognitive processes involved in responding to a survey item.
- Explain what a context effect is and give some examples.
- Create a simple survey questionnaire based on principles of effective item writing and organization.
The heart of any survey research project is the survey questionnaire itself. Although it is easy to think of interesting questions to ask people, constructing a good survey questionnaire is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. In this section, therefore, we consider some principles for constructing survey questionnaires to minimize these unintended effects and thereby maximize the reliability and validity of respondents’ answers.
Survey Responding as a Psychological Process
Before looking at specific principles of survey questionnaire construction, it will help to consider survey responding as a psychological process.
A Cognitive Model
Figure 9.1 presents a model of the cognitive processes that people engage in when responding to a survey item (Sudman, Bradburn, & Schwarz, 1996). Respondents must interpret the question, retrieve relevant information from memory, form a tentative judgment, convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale), and finally edit their response as necessary.
Consider, for example, the following questionnaire item:
How many alcoholic drinks do you consume in a typical day?
- _____ a lot more than average
- _____ somewhat more than average
- _____ average
- _____ somewhat fewer than average
- _____ a lot fewer than average
Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether "alcoholic drinks" include beer and wine (as opposed to just hard liquor) and whether a "typical day" is a typical weekday, typical weekend day, or both. Chang and Krosnick (2003) found that asking about "typical" behaviour is more valid than asking about "past" behaviour, but their study compared a "typical week" to the "past week," and the pattern may differ when considering typical weekdays versus weekend days. Once they have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., "I am not much of a drinker"). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this mental calculation might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does "average" mean, and what would count as "somewhat more" than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink much more than average, they might not want to report the higher number for fear of looking bad in the eyes of the researcher.
From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.
Context Effects on Questionnaire Responses
Again, this complexity can lead to unintended influences on respondents' answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990). For example, there is an item-order effect when the order in which the items are presented affects people's responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988). When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory so that they were more likely to base their life satisfaction rating on it.
The response options provided can also have unintended effects on people's responses (Schwarz, 1999). For example, when people are asked how often they are "really irritated" and given response options ranging from "less than once a year" to "more than once a month," they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from "less than once a day" to "several times a month," they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centred on a middle option of 4 hours than when centred on a middle option of 2 hours. To mitigate order effects, rotate questions and response items when there is no natural order. Counterbalancing is a good practice for survey questions and can reduce response-order effects; studies of such effects show that among undecided voters, the first candidate listed on a ballot receives a boost of about 2.5% simply by virtue of being listed first!
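To illustrate, here is a minimal Python sketch of that rotation idea. The question wording is hypothetical, and the coin-flip reversal of the scale is just one simple way to counterbalance scale direction, not a prescribed procedure:

```python
import random

# Hypothetical survey items with no natural order.
questions = [
    "How satisfied are you with our support?",
    "How satisfied are you with our pricing?",
    "How satisfied are you with our documentation?",
]
options = ["Very satisfied", "Satisfied", "Neutral",
           "Dissatisfied", "Very dissatisfied"]

def build_questionnaire(seed):
    """Return a per-respondent questionnaire with rotated question order.

    The rating-scale options keep their ordered structure but are
    randomly reversed, a simple form of counterbalancing.
    """
    rng = random.Random(seed)  # one seed per respondent, for reproducibility
    qs = questions[:]
    rng.shuffle(qs)            # rotate question order across respondents
    opts = options if rng.random() < 0.5 else options[::-1]
    return [(q, opts) for q in qs]

# Each respondent sees the same content, in a different order.
for question, opts in build_questionnaire(seed=42):
    print(question, opts)
```

Averaged over many respondents, any boost an item or option gets from being listed first is spread evenly rather than concentrated on one choice.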
Writing Survey Questionnaire Items
Types of Items
Questionnaire items can be either open-ended or closed-ended. Open-ended items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.
- “What is the most important thing to teach children to prepare them for life?”
- “Please describe a time when you were discriminated against because of your age.”
- “Is there anything else you would like to tell us about?”
Open-ended items are useful when researchers do not know how participants might respond or want to avoid influencing their responses. They tend to be used when researchers have more vaguely defined research questions—often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis. The advantage of open-ended items is that they are unbiased and do not provide respondents with expectations of what the researcher might be looking for. Open-ended items are also more valid and more reliable. The disadvantage is that respondents are more likely to skip open-ended items because they take longer to answer. It is best to use open-ended questions when the possible answers are unknown and for quantities that can easily be converted to categories later in the analysis.
Closed-ended items ask a question and provide a set of response options for participants to choose from. The alcohol item just mentioned is an example, as are the following:
How old are you?
- _____ Under 18
- _____ 18 to 34
- _____ 35 to 49
- _____ 50 to 70
- _____ Over 70
On a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how much pain are you in right now?
Have you ever in your adult life been depressed for a period of 2 weeks or more?
Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behaviour. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.
All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) that they belong to. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from. Figure 9.2 shows several examples. The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask "Do you generally like or dislike ice cream?" Once the respondent chooses like or dislike, refine it by offering them the corresponding choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and overly long or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics. The last rating scale shown in Figure 9.2 is a visual-analog scale, on which participants make a mark somewhere along the horizontal line to indicate the magnitude of their response.
What is a Likert Scale?
In reading about psychological research, you are likely to encounter the term Likert scale . Although this term is sometimes used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning.
In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people's attitudes (Likert, 1932). It involves presenting people with several statements—including both favourable and unfavourable statements—about some person, group, or idea. Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree. Numbers are assigned to each response (with reverse coding as necessary) and then summed across all items to produce a score representing the attitude toward the person, group, or idea. The entire set of items came to be called a Likert scale.
Thus unless you are measuring people’s attitude toward something by assessing their level of agreement with several statements about it, it is best to avoid calling it a Likert scale. You are probably just using a “rating scale.”
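The scoring procedure just described (assign numbers, reverse-code unfavourable statements, then sum across items) can be sketched in a few lines of Python. The item set and responses here are hypothetical:

```python
# Map the five verbal response labels to the numeric codes 1-5.
SCALE = {"Strongly Disagree": 1, "Disagree": 2,
         "Neither Agree nor Disagree": 3, "Agree": 4, "Strongly Agree": 5}

def likert_score(responses, reverse_items):
    """Sum numeric codes across items, reverse-coding unfavourable ones.

    responses     -- dict mapping item id to a verbal response
    reverse_items -- set of item ids that are unfavourably worded
    """
    total = 0
    for item, answer in responses.items():
        value = SCALE[answer]
        if item in reverse_items:
            value = 6 - value  # 1 <-> 5, 2 <-> 4, 3 stays 3
        total += value
    return total

# Hypothetical 3-item attitude scale; item "q2" is unfavourably worded,
# so its "Disagree" actually reflects a favourable attitude.
responses = {"q1": "Agree", "q2": "Disagree", "q3": "Strongly Agree"}
print(likert_score(responses, reverse_items={"q2"}))  # 4 + 4 + 5 = 13
```

Reverse coding ensures that a high total always means a more favourable attitude, regardless of how each individual statement was worded.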
Writing Effective Items
We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants' responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000). An acronym, BRUSO stands for "brief," "relevant," "unambiguous," "specific," and "objective." Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent's sexual orientation, marital status, or income is not relevant, then items about them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even "nosy" questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes "an alcoholic drink" or "a typical day." Effective questionnaire items are also specific, so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are "double barrelled." They ask about two conceptually separate issues but allow only one response. For example, "Please rate the extent to which you have been feeling anxious and depressed." This item should probably be split into two separate items—one about anxiety and one about depression. Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher's own opinions or lead participants to answer in a particular way.
Table 9.2 shows some examples of poor and effective questionnaire items based on the BRUSO criteria. The best way to know how people interpret the wording of the question is to conduct pre-tests and ask a few people to explain how they interpreted the question.
For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are. Exhaustive categories cover all possible responses.
Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an Other category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.
For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:
Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely
A balanced version might look like this:
Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely
Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. On bipolar dimensions, though, including a middle alternative is useful because it allows people who genuinely fall at neither pole to say so.
Formatting the Questionnaire
Writing effective items is only one part of constructing a survey questionnaire. For one thing, every survey questionnaire should have a written or spoken introduction that serves two basic functions (Peterson, 2000). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.
The second function of the introduction is to establish informed consent. Remember that this aim means describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not typically used in survey research, so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.
The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.
- Responding to a survey item is itself a complex cognitive process that involves interpreting the question, retrieving information, making a tentative judgment, putting that judgment into the required response format, and editing the response.
- Survey questionnaire responses are subject to numerous context effects due to question wording, item order, response options, and other factors. Researchers should be sensitive to such effects when constructing surveys and interpreting survey results.
- Survey questionnaire items are either open-ended or closed-ended. Open-ended items simply ask a question and allow respondents to answer in whatever way they want. Closed-ended items ask a question and provide several response options that respondents must choose from.
- Use verbal labels rather than numerical labels, although responses can be converted to numerical data for analysis.
- According to the BRUSO model, questionnaire items should be brief, relevant, unambiguous, specific, and objective.
- Discussion: Write a survey item and then write a short description of how someone might respond to that item based on the cognitive model of survey responding (or choose any item on the Rosenberg Self-Esteem Scale).
- How much does the respondent use Facebook?
- How much exercise does the respondent get?
- How likely does the respondent think it is that the incumbent will be re-elected in the next presidential election?
- To what extent does the respondent experience “road rage”?
Figure 9.1 long description: Flowchart modelling the cognitive processes involved in responding to a survey item. In order, these processes are:
- Question Interpretation
- Information Retrieval
- Judgment Formation
- Response Formatting
- Response Editing
Figure 9.2 long description: Three different rating scales for survey questions. The first scale provides a choice between “strongly agree,” “agree,” “neither agree nor disagree,” “disagree,” and “strongly disagree.” The second is a scale from 1 to 7, with 1 being “extremely unlikely” and 7 being “extremely likely.” The third is a sliding scale, with one end marked “extremely unfriendly” and the other “extremely friendly.”
Figure 9.3 long description: A note reads, “Dear Isaac. Do you like me?” with two check boxes reading “yes” or “no.” Someone has added a third check box, which they’ve checked, that reads, “There is as yet insufficient data for a meaningful answer.”
- “Study” by XKCD, CC BY-NC (Attribution NonCommercial)
- Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
- Chang, L., & Krosnick, J. A. (2003). Measuring the frequency of regular behaviors: Comparing the ‘typical week’ to the ‘past week’. Sociological Methodology, 33, 55–80.
- Schwarz, N., & Strack, F. (1990). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe & M. Hewstone (Eds.), European review of social psychology (Vol. 2, pp. 31–50). Chichester, UK: Wiley.
- Strack, F., Martin, L. L., & Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction. European Journal of Social Psychology, 18, 429–442.
- Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93–105.
- Miller, J. M., & Krosnick, J. A. (1998). The impact of candidate name order on election outcomes. Public Opinion Quarterly, 62(3), 291–330.
- Krosnick, J. A., & Berent, M. K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 27(3), 941–964.
- Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1–55.
- Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.
Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions.
The order in which the items are presented affects people’s responses.
A questionnaire item that allows participants to answer in whatever way they choose.
A questionnaire item that asks a question and provides a set of response options for participants to choose from.
An ordered set of responses that participants must choose from.
A guideline for questionnaire items that suggests they should be brief, relevant, unambiguous, specific, and objective.
Research Methods in Psychology - 2nd Canadian Edition by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
The Essential Guide to Writing Effective Survey Questions
User surveys are popping up on websites and mobile apps everywhere. Well-designed ones yield helpful data. Poorly executed ones are a waste of time for users and researchers. Crafting a good questionnaire is a science, but luckily, substantial published research is available on the topic. It’s easy to write good questions when “survey best practices” are just a Google search away…
Indeed. There are about 68,300,000 results. All that information can be overwhelming to those new to survey writing. UX Booth is here to help.
In this article, I will focus solely on writing effective questions and answers—the critically important and challenging part between a proper introduction and conclusion. Skipping the basics of question types , I’ll dive into proven techniques researchers rely on to craft effective inquiries and response options to gather useful data and results.
(If you need help with survey planning, defining goals, or understanding the basics of structure and question types, check out How to Create an Effective Survey .)
Question Wording and Structure
The creation of effective survey questions is essential to accurately measure the opinions of the participants. If the questions are poorly worded, unclear or biased, the responses will be useless. A well-written question will mean the same thing to all respondents. It will communicate the desired information so that all participants interpret it the same way and understand the expected type of response.
Use these guidelines for writing survey questions to yield informative and accurate information.
Be clear, specific, and direct
Failure to clearly explain the intent of the question can lead to confusion and misinterpretation. Be very specific and avoid imprecise or vague words. Present the topic and define the behaviors, events, or timeframe. This will help ensure every participant is providing the same type of response.
Vague: What is your income?
For what time period? For just the respondent or the entire household? Before or after taxes?
Specific: What was your household’s yearly income before taxes in 2016?
Use the participants’ vocabulary
Consider the education level of the survey audience, and use words that will be easily understood. Avoid jargon, complex terms, and undefined abbreviations and acronyms. Use simple language and never assume knowledge; always provide the necessary information for the respondent to understand what is being asked. Define any concepts or terms that the respondent needs to understand in order to answer. If referencing something participants might not be familiar with, be sure to add details to help explain it.
Unclear: How likely would you be to subscribe to UIE’s library?
Whose library? The International Union for Electricity? What kind of library: documentation, podcasts, ebooks?
Clear: User Interface Engineering’s library offers online seminars by experts in UX design. You can access the recordings anytime for only $25 a month. How likely would you be to subscribe?
Tip: If the question requires a lengthy explanation, consider separating it from the question itself to help make the information easier to digest.
Talk like a real person and treat the questions like a conversation
Group similar topics together and order the questions in a logical way to create a natural flow, as if having a conversation. The voice and tone of the survey should match who it is from and who it is designed for. The writing can be friendly and familiar, but don’t sacrifice clarity for cuteness. Consider the MailChimp writing tone guideline: “It’s always more important to be clear than entertaining.”
Formal: Would you be willing to participate in our 10-question feedback survey? Your responses to this questionnaire will help us improve your experience with Corporation ABC’s website.
Informal: Hi! Are you willing to answer a few quick questions? It won’t take more than five minutes. (And there’s a really good prize!)
Tip: Although I’m focusing on question writing rather than introductions, it’s worth noting that being up front about the time investment and offering incentives can also help with response rates.
Ask only one question at a time
Each question should focus on a single item or concept. This generally means that questions should have one subject and verb. Double-barrel questions ask a respondent to evaluate more than one thing in a question yet only allow for a single response.
Double-barrel: Was the content marketing seminar worthwhile and entertaining?
What if the seminar was educational but the presenter was a dreadful bore, and the response options are Yes or No? A double-barrel question is also known as a compound question. This is a common mistake, which can be corrected by breaking questions into two. Let’s look at an example with how to correct it:
Double-barrel: How satisfied are you with your work environment and compensation?
Single and separate:
- How satisfied are you with your work environment?
- How satisfied are you with your compensation?
By breaking the double-barrel question into two questions, one about satisfaction with the work environment and another question about pay, the participant is now able to provide a response to both inquiries separately.
Practice good grammar
Keep the questions simple and grammatically correct. Maintaining a parallel structure and consistently using the same words and phrases improves respondents’ comprehension. Avoid two-part or complex questions which can be hard to interpret, as can double negatives .
Double Negative: Do you agree or disagree that user interface designers should never not know how to code?
Better: User interface designers should know how to code.
An agreement scale goes well with this reworked question—more on that later.
Avoid bias and loaded words
A biased question will lead participants in the direction of a particular answer. Some phrases, particularly adjectives and adverbs, may add bias to questions. Depending on how a question is presented, people can react in different ways (for example, asking a question using the word “loss” versus “gain”). The use of emotional, politically-charged, or sensitive words can also trigger a different response. Remain neutral regardless of topic and watch for wording that skews positive or negative.
Biased: We think this UX Booth article on Survey Question Writing is very helpful. How helpful do you think this article is?
Unbiased: What do you think of this UX Booth article on Survey Question Writing?
Start with broad, general questions and progress to specific and harder ones
Beginning with basic, easier questions can encourage a respondent to continue. When possible, try to balance simple and complex inquiries. Interspersing easier questions among more challenging ones can make it seem less burdensome and help reduce abandonment. And remember to save sensitive questions like income for the end and make them optional.
Keep the survey short and don’t be greedy!
Don’t waste people’s time: only ask for what you really need. (Requiring answers to questions will slow people down, won’t necessarily get you what you want, and will increase drop-off rates.) If there aren’t too many questions, and respondents can immediately understand what is being asked, they are more likely to be willing and able to provide useful information. If the answers are also well-crafted…
Answer Wording and Structure
Since this article is concentrated on writing, I’ll focus on answers to closed questions , where responses need to be crafted by the survey designer. When providing responses to closed-ended questions, how each answer is described, the number of options, and the order can all influence how people respond. Whether presented as multiple choice, checklists, or in scales, just like when writing questions, the answers should use precise, clear wording. Here’s how to make that happen.
Present all the possibilities
The number of answers should be kept relatively small but include all the possible choices. Answers need to be balanced both ways (e.g. positive to negative, high to low frequency).
All respondents need to be able to find an answer that fits their situation—including opting out. If there could be a situation where none of the answers apply, provide the option to select “don’t know,” “not applicable” or “prefer not to answer” for sensitive questions. Including an “Other,” with a free-form text field to provide a custom answer, is a great way to learn about alternative responses not provided in the defined answer set.
Incomplete and Unbalanced:
- Very Important
- Moderately important
- Slightly important
What if it is not important at all? Or not even applicable to the participant?
Complete and Balanced:
- Extremely important
- Very important
- Moderately important
- Slightly important
- Not at all important
- Not applicable
Say “no” only when necessary
Dichotomous questions present only two clearly distinct options. These answers, like yes/no and true/false, can produce less helpful data because they don’t provide context or specificity. (Though when using skip logic, these responses can often be appropriate.) Formatting responses to use scales that measure things like attitudes or frequency yields more information-rich results. These answers can make a single question work harder.
Yes/No: Do you use the mobile app?
Frequency: How often do you use the mobile app?
Tip: These answers also follow the first guideline to cover all the possibilities in a balanced way, ranging from high to low or not at all. An even stronger set of choices would include references for the time period to clearly define what “sometimes” is versus “rarely.” Check out the UX Booth blog example below.
Keep answers mutually exclusive
If a participant can only select one response, then each answer should be distinct and not overlap. For example, options might be 0-5 or 6-10 rather than 0-5 or 5-10. Having the “5” in both answers makes them not mutually exclusive:
Not Distinct: Where is your work location?
- In an office building.
- From my home.
- In the city.
The person could work in an office building in the city or from their home in the city.
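For numeric answer ranges, one way to see why the overlap matters is to treat each option as a half-open interval, which guarantees a boundary value can match only one option. A short Python sketch (the helper functions here are hypothetical, for illustration only):

```python
# Hypothetical helpers showing why half-open ranges keep answer options
# mutually exclusive: each bucket includes its lower edge and excludes
# its upper edge, so a boundary value like 5 matches exactly one option.
def make_buckets(edges):
    """Turn sorted edges into consecutive (low, high) half-open ranges."""
    return list(zip(edges[:-1], edges[1:]))

def assign(value, buckets):
    """Return the index of the single bucket containing value."""
    for i, (lo, hi) in enumerate(buckets):
        if lo <= value < hi:
            return i
    raise ValueError(f"{value} fits no bucket: the set is not exhaustive")

buckets = make_buckets([0, 6, 11])     # the "0-5" and "6-10" options above
print(assign(5, buckets))  # 0 -- 5 belongs only to the first option
print(assign(6, buckets))  # 1 -- 6 belongs only to the second
```

Had the options been 0-5 and 5-10, a respondent answering “5” would fit both, and two people in identical situations could end up in different categories.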
Remove universal statements
Words like “never, none, always, all” are extreme choices that respondents might be hesitant to commit to. Rather than absolute words, provide answers with specific references for behaviors or timeframes.
- I always read UX Booth’s blog.
- I never read UX Booth’s blog.
Referenced Alternatives: I read UX Booth’s blog:
- Once a week
- 2-3 times a week
- 4 or more times a week
- I do not read UX Booth’s blog.
Use ratings and scales
The Likert Scale, where respondents indicate their level of agreement or disagreement, is the most commonly used approach to scaling options when measuring attitudes or behaviors. Likert scales should be symmetrical and balanced: they should contain equal numbers of positive and negative responses, with the distance between each item being the same.
Experts’ debates about scales—the number of levels (5, 7, 10) and the inclusion of a neutral midpoint (neither agree nor disagree)—are too overwhelming to tackle in this article. Consult the many resources for Likert Scale best practices. SurveyMonkey suggests five scale points for unipolar scales and seven for bipolar. (My personal opinion is that five to seven is best; the higher the number, the harder it is for people to gauge.) Always include word labels, not just numbers, to identify what each point on the scale means.
- Agreement: Disagree to Agree
- Familiarity: Not Familiar to Very Familiar
- Frequency: Never to Always
- Importance: Not Important to Extremely Important
- Likelihood: Not Likely to Extremely Likely
- Quality: Poor to Excellent
- More Examples (Iowa State PDF)
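Because respondents see only the word labels, the numeric coding happens after collection, at analysis time. A minimal Python sketch, assuming a standard 5-point agreement scale (the label-to-number mapping and responses are invented for illustration):

```python
# Assumed 5-point agreement scale: verbal Likert labels are recoded
# to numbers only when the responses are analyzed, not in the survey.
AGREEMENT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [AGREEMENT[r] for r in responses]
print(sum(scores) / len(scores))  # 4.0 -- average agreement for this item
```

The same mapping approach works for any of the label sets listed above (frequency, importance, likelihood, quality), as long as the numeric spacing stays even.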
Use the expected, “natural” order for answer scales because it is easier for people to respond. For ranges (e.g., excellent to poor), it’s okay to reverse the order, such as starting with the favorable answer and ending with the unfavorable one, but be aware that answer order can also influence choices.
Tip: Read “ There is a Right Way and Wrong Way to Number Rating Scales .”
Good survey design leads to good data.
The unfortunate result of bad survey design is bad data. Asking good questions and providing solid answers is not easy. Take advantage of what other researchers and academics have done and use starter templates when appropriate. It is the survey designer’s responsibility to be clear and unbiased. Make sure the questions will be informative, the answers accurate, and that the insight you gain will lead to actionable results.
How to conduct your own market research survey
After watching a few of those sketches, you can imagine why real-life focus groups tend to be pretty small. Even without any over-the-top personalities involved, it's easy for these groups to go off the rails.
So what happens when you want to collect market research at a larger scale? That's where the market research survey comes in. Market surveys allow you to get just as much valuable information as an in-person interview, without the burden of herding hundreds of rowdy Eagles fans through a product test.
What is a market survey?
A market research survey is a questionnaire designed to collect key information about a company's target market and audience that will help guide business decisions about products and services, branding angles, and advertising campaigns.
Market surveys are what's known as "primary research"—that is, information that the researching company gathers firsthand. Secondary research consists of data that another organization gathered and published, which other researchers can then use for their own reports. Primary research is more expensive and time-intensive than secondary research, which is why you should only use market research surveys to obtain information that you can't get anywhere else.
A market research survey can collect information on your target customers':
Preferences, desires, and needs
Values and motivations
The types of information that can usually be found in a secondary source, and therefore aren't good candidates for a market survey, include your target customers':
Consumer spending data
Lots of this secondary information can be found in a public database like those maintained by the Census Bureau and Bureau of Labor Statistics . There are also a few free market research tools that you can use to access more detailed data, like Think with Google , Data USA , and Statista . Or, if you're looking to learn about your existing customer base, you can also use a CRM to automatically record key information about your customers each time they make a purchase.
If you've exhausted your secondary research options and still have unanswered questions, it's time to start thinking about conducting a market research survey.
How to design a market research survey
The first thing to figure out is what you're trying to learn, and from whom. Are you beta testing a new product or feature with existing users? Or are you looking to identify new customer personas for your marketers to target? There are a number of different ways to use a marketing research survey, and your choice will impact how you set up the questionnaire.
6 types of market research survey
1. Buyer persona research
A buyer persona or customer profile is a simple sketch of the types of people that you should be targeting as potential customers.
A buyer persona research survey will help you learn more about things like demographics, household makeup, income and education levels, and lifestyle markers. The more you learn about your existing customers, the more specific you can get in targeting potential customers. You may find that there are more buyer personas within your user base than the ones that you've been targeting.
2. Sales funnel research
The sales funnel is the path that potential customers take to eventually become buyers. It starts with the target's awareness of your product, then moves through stages of increasing interest until they ultimately make a purchase.
With a sales funnel research survey, you can learn about potential customers' main drivers at different stages of the sales funnel. You can also get feedback on how effective different sales strategies are. Use this survey to find out:
How close potential buyers are to making a purchase
What tools and experiences have been most effective in moving prospective customers closer to conversion
What types of lead magnets are most attractive to your target audience
3. Customer loyalty research
Whenever you take a customer experience survey after you make a purchase, you'll usually see a few questions about whether you would recommend the company or a particular product to a friend. After you've identified your biggest brand advocates , you can look for persona patterns to determine what other customers are most likely to be similarly enthusiastic about your products. Use these surveys to learn:
The demographics of your most loyal customers
What tools are most effective in turning customers into advocates
What you can do to encourage more brand loyalty
4. Branding and marketing research
The Charmin focus group featured in that SNL sketch is an example of branding and marketing research, in which a company looks for feedback on a particular advertising angle to get a sense of whether it will be effective before the company spends money on running the ad at scale. Use this type of survey to find out:
Whether a new advertising angle will do well with existing customers
Whether a campaign will do well with a new customer segment you haven't targeted yet
What types of campaign angles do well with a particular demographic
5. New products or features research
Whereas the Charmin sketch features a marketing focus group, this one features new product research for a variety of new Hidden Valley Ranch flavors. Though you can't get hands-on feedback on new products when you're conducting a survey instead of an in-person meeting, you can survey your customers to find out:
What features they wish your product currently had
What other similar or related products they shop for
What they think of a particular product or feature idea
Running a survey before investing resources into developing a new offering will save you and the company a lot of time, money, and energy.
6. Competitor research
You can get a lot of information about your own customers and users via automatic data collection , but your competitors' customer base may not be made up of the same buyer personas that yours is. Survey your competitors' users to find out:
Your competitors' customers' demographics, habits, and behaviors
Whether your competitors have found success with a buyer persona you're not targeting
Information about buyers for a product that's similar to one you're thinking about launching
Feedback on what features your competitors' customers wish their version of a product had
How to write and conduct a market research survey
Once you've narrowed down your survey's objectives, you can move forward with designing and running your survey.
Step 1: Write your survey questions
A poorly-worded survey, or a survey that uses the wrong question format, can render all of your data moot. If you write a question that results in most respondents answering "none of the above," you haven't learned much.
You'll find dozens of question types and even pre-written questions in most survey apps . Here are a few common question types that work well for market surveys:
If you're looking for a simple count, like "35% of people said ABC" or "20% of managers and 24% of employees," then there's a variety of question types you can use: Yes/No, checkbox, or multiple choice question type. These types of questions are called categorical or "nominal" questions.
Analysis of categorical-level questions can include counts and percentages—"22 respondents" or "18% of customers," for example—and they work great for bar graphs and pie charts. You cannot take averages or test correlations with nominal-level data.
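As an illustration, analyzing nominal-level responses is plain tallying. A short Python sketch using invented Yes/No data:

```python
from collections import Counter

# Made-up nominal-level responses; for categorical data, counts and
# percentages are the meaningful summaries (an "average" of these
# labels would be nonsense).
responses = ["Yes", "No", "Yes", "Yes", "No", "Yes", "Yes", "No"]

counts = Counter(responses)
total = len(responses)
for answer, n in counts.most_common():
    print(f"{answer}: {n} respondents ({n / total:.1%})")
```

The resulting counts drop straight into a bar graph or pie chart, which is exactly the kind of display nominal questions support.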
The simplest survey question—and the only question you'll usually use in a poll —is a Yes/No question. Your survey app likely offers a Yes/No question; otherwise, use the multiple choice question and add Yes and No answers yourself.
Example : Are you a vegetarian? Yes/No
Need more nuance than a Yes/No answer gives? Multiple choice is what you need. You can add as many answers as you want, and your respondents can pick only one answer to the question.
Example : What's your favorite food? Pizza/Pasta/Salad/Steak/Soup/Other
Checkbox questions add the flexibility to select all of the answers that apply. Add as many answers as you want, and respondents aren't limited to just one.
Example : Which types of meat do you like? Beef/Pork/Chicken/Fish/Duck/Other
When question responses have a clear order (like "Income of $0-$25K, $26K-40K, $41K+"), we call them "ordinal" questions. Analysis for ordinal questions is similar to analysis for nominal questions: you can get counts and percentages. You cannot find averages or test correlations with ordinal-level data.
Dropdown questions work much like a multiple choice question—you'll have several different possible answers, and respondents can only choose one option. But you'll need to list the answers in order—perhaps largest to smallest—for ordinal data. You could also use this question to gather demographic data like their country or state of residence.
Example: What's your household income? $0-10k/$11-35k/$36-60k/$61k+
A more unique survey question type that you won't find in every survey app, ranking questions let you list a number of answers, and respondents can rearrange them all into the order they want. That way, they can give feedback on every answer you offer. It's a great way to see which items people like most and least at the same time.
Example: What's your favorite beverage? Rank in order of preference. Milk/Water/Juice/Coffee/Soda/Wine/Beer
For the most precise data and thorough analysis , use the interval or ratio question type. These questions allow you to conduct advanced analysis, like finding averages, testing correlations, and running regression models. You'll use ranking scale, matrix, or text fields in your survey app to ask these types of questions.
Interval questions are often asked on a scale of 1-5 or 1-7, like from "Strongly disagree" to "Strongly agree" or from "Never" to "Always." Ratio questions have a true zero and often ask people to input an actual number into the survey field (like "How many cups of coffee do you drink per day? ____"). You don't really have to worry about the differences between the two types.
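To make the "advanced analysis" point concrete, here is a small Python sketch with invented numbers: averages and correlations are meaningful for interval and ratio data in a way they are not for nominal or ordinal answers.

```python
# Hypothetical paired answers from six respondents:
# a 1-5 agreement score (interval) and cups of coffee per day (ratio)
agreement = [4, 5, 3, 2, 5, 4]
cups = [2, 3, 1, 0, 4, 2]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

avg_agreement = sum(agreement) / len(agreement)  # a meaningful average
r = pearson(agreement, cups)                     # a meaningful correlation
print(f"mean agreement: {avg_agreement:.2f}, correlation: {r:.2f}")
```

In practice your survey tool or a stats library will compute these for you; the sketch just shows what becomes possible once answers are true numbers.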
The default choice for interval questions, rating scale questions look like a multiple choice question with the answers in a horizontal line instead of a list. There will likely be 3 to 10 answers, either on a number scale, a like/love scale, a never/always scale, or another interval scale. It's a great way to get a more precise measure of people's thoughts than a Yes/No question could give.
Example: On a scale of 1-5, how would you rate our store cleanliness? 1/2/3/4/5
Have a lot of interval questions to ask? Use a matrix if your survey app includes it. You can group several questions together and use the same scale for all of them, which simplifies gathering data about many similar items at once.
Example : How much do you like the following: oranges, apples, grapes? Hate/Dislike/Ok/Like/Love
For ratio questions—or direct feedback, or personal data like names—you'll need the textbox question. There's usually a small and large textbox option, so choose the size that's appropriate for the data you're collecting. You'll add the question, and then there will be a blank where your respondent can enter their answer on their own.
Example: How many apps are installed on your phone? Enter a number: __
Step 2: Choose a survey platform
There are a lot of survey platforms to choose from, and they all offer different and unique features. Check out our list of the best online survey apps to help you decide.
Most survey apps today look great on mobile, but be sure to preview your survey on your phone and computer, at least, to make sure it'll look good for all of your users.
If you have the budget, you can also purchase survey services from a larger research agency.
Step 3: Run a test survey
Before you run your full survey, conduct a smaller test on 5-10% of your target respondent pool size. This will allow you to work out any confusing wording or questions that result in unhelpful responses without spending the full cost of the survey. Look out for:
Survey rejection from the platform for prohibited topics
Joke or nonsense textbox answers that indicate the respondent didn't answer the survey in earnest
Multiple choice questions with an outsized percentage of "none of the above" or "N/A" responses
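If your survey platform can export per-question tallies, the last check is easy to automate. The sketch below uses made-up pilot data and an arbitrary 25% cutoff, so adjust both to your situation:

```python
# Hypothetical pilot tallies, keyed by question
pilot = {
    "Q1: Which feature do you use most?": {
        "Editor": 30, "Reports": 15, "None of the above": 5},
    "Q2: Which pricing plan fits you best?": {
        "Basic": 8, "Pro": 6, "N/A": 36},
}

def flag_confusing(results, threshold=0.25):
    """Flag questions where opt-out answers exceed the threshold."""
    flagged = []
    for question, tallies in results.items():
        total = sum(tallies.values())
        opt_out = sum(count for answer, count in tallies.items()
                      if answer.lower() in ("n/a", "none of the above"))
        if opt_out / total > threshold:
            flagged.append(question)
    return flagged

print(flag_confusing(pilot))  # only Q2 should stand out
```

A flagged question usually means the answer options are incomplete or the wording is unclear, so reword it before the full launch.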
Step 4: Launch your survey
If your test survey comes back looking good, you're ready to launch the full thing! Make sure that you leave ample time for the survey to run—you'd be surprised at how long it takes to get a few thousand respondents.
Even if you've run similar surveys in the past, leave more time than you need. Some surveys take longer than others for no clear reason, and you also want to build in time to conduct a comprehensive data analysis.
Step 5: Organize and interpret the data
Unless you're a trained data analyst, you should avoid crunching all but the simplest survey data by hand. Most survey platforms include some form of reporting dashboard that will handle things like population weighting for you, but you can also connect your survey platform to other apps that make it easy to keep track of your results and turn them into actionable insights.
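For the curious, the population weighting mentioned above is conceptually simple, even though dashboards hide it. Here is an illustrative Python sketch of the idea with invented numbers (a real workflow would weight on several demographics at once):

```python
# Suppose 70 of 100 respondents were women, but the target
# population is an even 50/50 split: weight each group by
# (population share) / (sample share).
population_share = {"women": 0.50, "men": 0.50}
sample_counts = {"women": 70, "men": 30}
n = sum(sample_counts.values())

weights = {group: population_share[group] / (sample_counts[group] / n)
           for group in sample_counts}

# Weighted "yes" rate for a question answered yes by 40 women and 24 men
yes_counts = {"women": 40, "men": 24}
raw_yes = sum(yes_counts.values()) / n
weighted_yes = sum(yes_counts[g] * weights[g] for g in yes_counts) / n
print(f"raw yes rate: {raw_yes:.2f}, weighted yes rate: {weighted_yes:.2f}")
```

Here the under-sampled men count for more and the over-sampled women for less, pulling the weighted estimate toward what a balanced sample would have shown.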
Before you get started on your next market survey, check out our more in-depth guides:
5 advanced ways to automate your forms and surveys
The best online survey apps
The best free form builders and survey tools
How to get people to take a survey
This article was originally published in June 2015 by Stephanie Briggs. The most recent update was in August 2022.
Amanda is a writer and content strategist who built her career writing on campaigns for brands like Nature Valley, Disney, and the NFL. When she's not knee-deep in research, you'll likely find her hiking with her dog or with her nose in a good book.
Survey questions 101: over 70 survey question examples + types of surveys and FAQs
How well do you understand your prospects and customers? Do you know who they are, what keeps them awake at night, and what brought them to your business in search of a solution?
Understanding customers is the key to improving and growing your business—but you won’t be able to understand your customers unless you learn more about them. One way to do this is by asking the right survey questions at the right point in their journey.
This piece will give you a thorough overview of the different types of survey questions you can use, how to word them for maximum effect, and when and why to use them, with over 70 examples of effective survey questions for ecommerce, software as a service (SaaS), and publishing companies.
Plus, you'll get access to our pre-built survey templates.
What is a good survey question?
Types of survey questions
70+ survey question examples you can use
Dos and don'ts of writing survey questions
10 survey use cases: what you can do with good survey questions
Before diving into a list of questions (though you can skip right to it if you prefer), let’s cover a few survey question basics:
A good survey question is one that helps you get clear insights and business-critical information about your customers, including:
Who your target market is
How you should price your products
What is stopping people from buying from you
Why visitors leave your website
With this information, you can tailor your website, products, landing pages, and/or messaging to improve the user experience and (ultimately) maximize conversions .
Why is it important to ask good survey questions?
A good survey question is asked in a precise way at the right stage in the buyer’s journey to give you solid data about your customers’ needs and drives. The format you choose for your survey—in-person, email, on-page, etc.—is important, but if the questions themselves are poorly worded, you could waste hours trying to fix minimal problems while ignoring major ones a different question could have uncovered. We'll explore the dos and don'ts of good question writing towards the end of this article.
How to run your surveys
The format you pick for your survey depends on what you want to achieve, and also on how much budget/resources you have. You can:
Use a feedback tool and set up a website survey that pops up whenever people visit a specific page → useful when you want to investigate website- and product-specific topics quickly, relatively inexpensive
Use a survey builder and create a survey that people can access in their own time → useful when you want to reach out to your mailing list and/or a wider audience (you just need to share the URL the survey lives at), has more space for people to elaborate on their answers, relatively inexpensive
Place survey kiosks in a physical location where people can give their feedback by pressing a button → useful for quick feedback on specific aspects of a customer's experience (there’s usually plenty of these in airports and waiting rooms), relatively expensive to maintain
Run in-person surveys with your existing or prospective customers → in-person questionnaires help you dig deep into your interviewees’ answers, relatively cheap if you do it over the phone but more expensive time-wise if done in a physical location you need to travel to/from, and costly if done in a lab
To run both on-site surveys (that appear on a website page) and online surveys (that exist on a separate URL), you will need dedicated survey-building software like Hotjar. After choosing the type of survey to run, you can build and include as many questions as you want, in the exact order you need them.
6 main types of survey questions
Before we dive into our list of 70+ question examples, here is a quick overview of the six survey question types they belong to:
Open-ended questions
Closed-ended questions
Nominal questions
Likert scale questions
Rating scale (or ordinal) questions
'Yes' or 'no' questions
1. Open-ended survey questions
Open-ended questions give your respondents the freedom to answer in their own words, instead of limiting their response to a set of pre-selected choices (such as multiple-choice answers, yes/no answers, 0-10 ratings, etc.).
Examples of open-ended questions:
What other products would you like to see us offer?
If you could change just one thing about our product, what would it be?
When to use open-ended questions in a survey
The majority of example questions included in this post are open-ended, and there are some good reasons for that:
Open-ended questions help you learn about customer needs you didn’t know existed, and they shine a light on areas for improvement that you may not have considered before. If you limit your respondents’ answers, you can cut yourself off from key insights.
Open-ended questions are very useful when you first begin surveying your customers and collecting their feedback. If you don't yet have a good amount of insight, answers to open-ended questions will go a long way towards educating you about who your customers are and what they are looking for.
There are, however, a few downsides to open-ended questions:
First, people tend to be less likely to respond to open-ended questions in general because they take comparatively more effort to answer than, say, a yes/no one.
Second, but connected: if you ask multiple open-ended questions in a row during your survey, people will get tired of answering them, and their answers might become less and less helpful the more you ask.
Finally, the data you receive from open-ended questions will take longer to analyze compared to easy 1-to-5 or Yes/No answers—but don’t let that stop you: there are plenty of shortcuts that make the analysis easier than it looks, and we explain them in our post on the topic. There’s even a free analysis template you can pick up directly on that page.
2. Closed-ended survey questions
Closed-ended questions limit a user’s response options to a set of pre-selected choices. This broad category of questions includes:
Nominal questions
Likert scale questions
Rating scale questions
‘Yes’ or ‘no’ questions
I’ll describe each in greater detail below.
When to use closed-ended questions
Closed-ended questions work very well in two scenarios:
To open a survey, because they require little time and effort and therefore are easy for people to answer. This is called the foot-in-the-door principle: once someone commits to answering the first question, they may be more likely to answer the open-ended questions that follow.
When you need to create graphs and trends based on people’s answers. Responses to closed-ended questions are easy to tabulate and use as benchmarks; rating scale questions in particular (e.g., where you get people to rate customer service on a scale of 1-10—more on this below) allow you to gather customer sentiment and compare your progress over time.
3. Nominal questions
A nominal question is a type of survey question that presents people with multiple answer choices; the answers are non-numerical in nature and don't overlap (unless you include an ‘all of the above’ option).
Example of a nominal question:
What are you using [product name] for?
1. Business use
2. Personal use
3. Both business and personal use
When to use nominal questions
Nominal questions work well when there is a limited number of categories for a given question (see the example above). They’re easy for people to answer and easy for you to turn into graphs and trends, but the downside is that you may not offer enough categories for people to choose from.
For example, if you are asking people what type of browser they are using and only give them 3 options to choose from, you may have effectively alienated everybody who uses a fourth type and cannot tell you about it.
🔥 Pro tip: you can add an open-ended component to a nominal question with an expandable ’other’ category, where respondents can add in an answer that isn’t on the list. When you do that, you’re essentially asking an open-ended question, because you aren’t limiting them to the options you’ve picked.
Which browser are you using?
Chrome
Firefox
Safari
Other (allows open-ended response)
4. Likert scale questions
The Likert scale is typically a 5- or 7-point scale that evaluates a respondent’s level of agreement with a statement or the intensity of their reaction towards something.
The scale develops symmetrically: the median number (e.g., a ‘3’ on a 5-point scale) indicates a point of neutrality, the lowest number (always a ‘1’) indicates an extreme view, and the highest number (e.g., a ’5’ on a 5-point scale) indicates the opposite extreme view.
Examples of Likert-type questions:
How strongly do you agree with the following statement: [company’s] payment process is simple and painless.
1 - Strongly disagree
2 - Somewhat disagree
3 - Neither agree nor disagree
4 - Somewhat agree
5 - Strongly agree
How satisfied were you with your customer service experience?
1 - Very dissatisfied
2 - Somewhat dissatisfied
3 - Slightly dissatisfied
4 - Neither satisfied nor dissatisfied
5 - Slightly satisfied
6 - Somewhat satisfied
7 - Very satisfied
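When it comes time to analyze Likert responses, each label is typically coded as its position on the scale. A brief Python sketch (the answers are hypothetical); note that because Likert data is ordinal, the median is a safer summary than the mean:

```python
from statistics import median

# Code the 5-point agreement labels above as 1-5
LIKERT_5 = {
    "Strongly disagree": 1,
    "Somewhat disagree": 2,
    "Neither agree nor disagree": 3,
    "Somewhat agree": 4,
    "Strongly agree": 5,
}

answers = ["Somewhat agree", "Strongly agree", "Neither agree nor disagree"]
codes = [LIKERT_5[a] for a in answers]
print(median(codes))  # prints 4
```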
When to use Likert scale questions
Likert-type questions are also known as ordinal questions because the answers are presented in a specific order. Like other multiple-choice questions, Likert scale questions come in handy when you already have some sense of what your customers are thinking . For example, if your open-ended questions uncover a complaint about a recent change to your ordering process, you could use a Likert scale question to determine how the average user felt about the change.
5. Rating scale questions
Rating scale questions are questions where the answers map onto a numeric scale (such as rating customer support on a scale of 1-5, or likelihood to recommend a product from 0 to 10).
Examples of rating questions:
How likely are you to recommend us to a friend or colleague on a scale of 0-10?
How would you rate our customer service on a scale of 1-5?
When to use rating questions
Whenever you want to assign a numerical value to your survey and/or visualize and compare trends, a rating question is the way to go.
A typical rating question is used to determine Net Promoter Score® (NPS): the question asks customers to rate their likelihood of recommending products or services to their friends or colleagues, and allows you to look at the results historically and see if you're improving or getting worse. Rating questions are also used for customer satisfaction surveys and product reviews (such as Amazon’s five-star product ratings).
Tip: when you use a rating question in a survey, be sure to explain what the scale means (e.g., ‘1’ for ‘Poor’, ‘5’ for ‘Amazing’).
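The arithmetic behind NPS is standard and worth knowing: respondents scoring 9-10 are promoters, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A quick Python sketch with invented scores:

```python
# Hypothetical answers to "How likely are you to recommend us?" (0-10)
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:.0f}")  # ranges from -100 to +100
```

Scores of 7-8 ("passives") count toward the total but toward neither group, which is why NPS can swing sharply as respondents move across the 8/9 boundary.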
6. ‘Yes’ or ‘no’ questions
These questions are super-straightforward: they require a simple ‘yes’ or ‘no’ reply .
Examples of Yes/No questions:
Was this article useful? (Yes/No)
Did you find what you were looking for today? (Yes/No)
When to use ‘yes’ or ‘no’ questions
‘Yes’ and ‘no’ questions are a good way to quickly segment your respondents. For example, say you are trying to understand what obstacles or objections are stopping people from trying your product. You can place a survey on your pricing page, ask people if something is stopping them, and follow up with the segment who replied ‘yes’ by asking them to elaborate further.
These questions are also great for getting your foot in the door. When you ask a ‘yes’ or ‘no’ question, it requires very little effort to answer. Once a user commits to answering the first question, they tend to become more willing to answer the questions that follow.
“Understand your customers’ pain points and then establish processes to reduce customer effort. The reason I am saying this is that customer pain points differ from one industry to another and even from one company to another. Check the consumer complaints section of your website and identify the most common problems faced by customers. Read customer reviews on your website or on third-party websites to understand what customers say about your brand. Conduct a survey of your customers to identify the key aspects that you can improve to provide a better customer experience.”
70+ survey question examples
Below we collected a list of good survey questions that you can ask, and categorized them across ecommerce, software as a service (SaaS), and publishing.
You don't have to use them word-for-word, but hopefully seeing this list will spark some extra good ideas for the surveys you're going to run right after reading this piece 😉
9 basic demographic survey questions
You ask these questions when you want to get some context about your respondents (and also so you can segment them later). A tip from us: don't ask these questions for the sake of it—if you're not going to use some of the data points (e.g. if gender is irrelevant to the result of your survey), move on to the ones that are really useful for you, business-wise.
What is your name?
What is your age?
What is your gender?
What company do you work for?
What vertical/industry best describes your company?
Ecommerce - Retail
Ecommerce - Travel
Ecommerce - Other
Other (Please specify)
What best describes your role?
Specialist / Team Member
C-Level Executive (CEO, CMO, etc)
In which department do you work?
What is the total number of employees in your company (including all locations where your employer operates)?
What is your company's annual revenue?
Gather more info about your users with our Product-Market Fit survey template.
20+ effective customer questions
Particularly recommended for ecommerce companies:
What information is missing or would make your decision to buy easier?
What is your biggest fear or concern about purchasing this item?
Were you able to complete the purpose of your visit today?
If you did not make a purchase today, what stopped you?
Was there anything about this checkout process we could improve?
What was your biggest fear or concern about purchasing from us?
What persuaded you to complete the purchase of the item(s) in your cart today?
If you could no longer use [product name], what’s the one thing you would miss the most?
What’s the one thing that nearly stopped you from buying from us?
Editor's tip: check out our quick guide about setting up an ecommerce post-purchase survey in 7 steps.
Other useful customer questions
Do you have any questions before you complete your purchase?
What other information would you like to see on this page?
What were the three main things that persuaded you to create an account today?
What nearly stopped you from creating an account today?
Which other options did you consider before choosing [product name]?
What would persuade you to use us more often?
What was your biggest challenge, frustration or problem in finding the right [product type] online?
Please list the top three things that persuaded you to use us rather than a competitor.
Were you able to find the information you were looking for?
How satisfied are you with our support?
How would you rate our service on a scale of 0-10? (0=terrible, 10=stellar)
How would you rate our support on a scale of 0-10?
How likely are you to recommend us to a friend or colleague? (This is an NPS question)
Is there anything preventing you from purchasing at this point?
Learn how satisfied customers are with this expert-built Customer Satisfaction / NPS survey template.
30+ product survey questions
Particularly recommended for SaaS (Software-as-a-Service) companies:
Questions for new or trial users
What nearly stopped you from signing up today?
How likely are you to recommend us to a friend or colleague on a scale of 0-10? (NPS question)
Is our pricing clear? If not—what would you change?
Questions for paying customers
What convinced you to pay for this service?
What’s the one thing we are missing in [product type]?
What's one feature we can add that would make our product indispensable for you?
If you could no longer use [name of product], what’s the one thing you would miss the most?
Are you making the most of your pricing plan? Find out what buyers think with this Pricing Plan Feedback survey template.
Questions for former/churned customers
What is the main reason you're canceling your account? (please be blunt and direct)
If you could have changed one thing in [product name], what would it have been?
If you had a magic wand and could change anything in [product name], what would it be?
Find out why customers churn with this free-to-use Churn Analysis survey template.
Other useful product questions
What were the three main things that persuaded you to sign up today?
What was your biggest challenge, frustration, or problem in finding the right [product type] online?
Do you have any questions before starting a free trial?
What persuaded you to start a trial?
Was this help section useful?
Was this article useful?
How would you rate our service on a scale of 0-10? (0=terrible, 10=stellar)
How would you rate our support on a scale of 0-10?
Is there anything preventing you from upgrading at this point?
Is there anything on this page that doesn't work the way you expected it to?
What could we change to make you want to continue using us?
If you did not upgrade today, what stopped you?
What's the next thing you think we should build? (can be multiple choice)
How would you feel if we discontinued this feature?
What's the next feature or functionality we should build?
Gather feedback on your product with our free-to-use survey templates.
20 effective questions for publishers and bloggers
Questions to help improve content
If you could change just one thing in [publication name], what would it be?
What other content would you like to see us offer?
How would you rate this article on a scale of 1-10?
If you could change anything on this page, what would you have us do?
If you did not subscribe to [publication name] today, what was it that stopped you?
Does your copy and messaging resonate? Find ways to improve your website content with this survey template.
What convinced you to subscribe to [publication] today?
What almost stopped you from subscribing?
What were the three main things that persuaded you to join our list today?
What is the main reason you're unsubscribing? (please be specific)
Other useful content-related questions
What’s the one thing we are missing in [publication name]?
What would persuade you to visit us more often?
How likely are you to recommend us to someone with similar interests? (NPS question)
What’s missing on this page?
What topics would you like to see us write about next?
How useful was this article?
What could we do to make this page more useful?
Is there anything on this site that doesn't work the way you expected it to?
What's one thing we can add that would make [publication name] indispensable for you?
If you could no longer read [publication name], what’s the one thing you would miss the most?
🔥 Pro tip: whichever questions you use, the qualitative data you get from a survey will supplement the insight you can capture through other traditional analytics tools (think Google Analytics) and behavior analytics tools (think heatmaps and session recordings, which visualize user behavior on specific pages or across an entire website). While analytics tools will tell you what is happening on a page or website, replies to your survey questions will usually help you understand why it's happening. Combining the two gives you both the context you need to solve a problem or capitalize on an opportunity and plenty of inspiration about how to do it.
How to write good (and effective) survey questions: the DOs and DON’Ts
To help you understand the basics and avoid some rookie mistakes, we asked a few experts to give us their thoughts on what makes a good and effective survey question.
Survey question DOs
DO focus your questions on the customer
It may be tempting to focus on your company or products, but it is usually more effective to put the focus back on the customer. Get to know their needs, drives, pain points, and barriers to purchase by asking about their experience. That’s what you’re after: you want to know what it’s like inside their heads and how they feel when they use your website and products.
“Rather than asking: ‘Why did you buy our product?’ ask ‘What was happening in your life that led you to search for this solution?’ Instead of asking: ‘What's the one feature you love about [product],’ I ask: ‘If our company were to close tomorrow, what would be the one thing you’d miss the most?’ These types of surveys have helped me double and triple my clients.”
DO be polite and concise (without skimping on micro-copy)
Put time into your micro-copy—those tiny bits of written content that go into surveys. Explain why you’re asking the questions, and when people reach the end of the survey, remember to thank them for their time. After all, they’re giving you free labor!
“You are asking your audience to take time out of their day to do free work for you, so you need to be warm, personable, and even a little charming to get them to want to help you.”
DO consider the foot-in-the-door principle
One way to increase your response rate is to ask an easy question upfront, such as a ‘yes’ or ‘no’ question, because once people commit to taking a survey, they’re more likely to finish it.
“The foot-in-the-door principle helps you create a first point of contact with a person, laying the groundwork for the rest of your survey. Start with a small question, and build up from there. But be respectful: don’t use this principle to manipulate your users into doing something they didn’t want to do; and once they commit to helping you, don’t take advantage of their time.”
DO consider asking your questions from the first-person perspective
Okay, so we don’t do this here at Hotjar. You’ll notice all our sample questions are listed in second-person (i.e., ‘you’ format), but it’s worth testing to determine which approach gives you better answers. Some experts prefer the first-person approach (i.e., ‘I’ format) because they believe it encourages users to talk about themselves—but only you can decide which approach works best for your business.
“I strongly recommend that the questions be worded in the first person. This helps create a more visceral reaction from people and encourages them to tell stories from their actual experiences, rather than making up hypothetical scenarios. For example, here’s a similar question, asked two ways: Version 1: ‘What do you think is the hardest thing about creating a UX portfolio?’ Version 2: ‘My biggest problem with creating my UX portfolio is …’ The second version helps get people thinking about their experiences. The best survey responses come from respondents who provide personal accounts of past events that give us specific and real insight into their lives."
DO alternate your questions often
Shake up the questions you ask on a regular basis. Asking a wide variety of questions will help you and your team get a complete view of what your customers are thinking.
“Having run thousands of user research projects, I have found that the key is to alternate questions often. You want your team to be reading a wide variety of answers so they can truly empathize with their users.”
DO test your surveys before sending them out
Hotjar recently created a survey that we sent to 2,000 CX professionals via email. Before officially sending it out, we wanted to make sure the questions really worked.
We decided to test them out on internal staff and external people by sending out three rounds of test surveys to 100 respondents each time. Their feedback helped us perfect the questions and clear up any confusing language.
Survey question DON’Ts
DON’T ask closed-ended questions if you’ve never done research before
If you’ve just begun asking questions, make them open-ended questions since you have no idea what your customers think about you at this stage. When you limit their answers, you just reinforce your own assumptions.
There are two exceptions to this rule:
1) using a closed-ended question to get your foot in the door at the beginning of a survey, and
2) using rating scale questions to gather customer sentiment (like an NPS survey).
DON’T ask a lot of questions if you’re just getting started
Having to answer too many questions can overwhelm your users. You really have to make peace with the fact you can’t ask as many questions as you’d like, so stick with the most important things and discard the rest.
Try starting off with a single question to see how your audience responds, then move on to two questions once you feel like you know what you’re doing.
How many questions should you ask? There’s really no perfect answer, but we recommend asking as few as you need to ask in order to get the information you want. In the beginning, focus on the big things:
Who are your users?
What do potential customers want?
How are they using your product?
What would win their loyalty?
DON’T just ask a question when you can combine it with other tools
Don’t just use surveys to answer questions that other tools (such as analytics) can also help you answer. If you want to learn about whether people find a new website feature helpful, you can also observe how they’re using it through analytics, session recordings , and other user testing tools for a more complete picture.
“Don’t use surveys to ask people questions that other tools are better equipped to answer. I’m thinking of questions like ‘What do you think of the search feature?’ with pre-set answer options like ‘Very easy to use,’ ‘Easy to use,’ etc. That’s not a good question to ask. Why should you care about what people ‘think’ about the search feature? You should find out whether it helps people find what they need and whether it helps drive conversions for you. Analytics, user session recordings, and user testing can tell you whether it does that or not."
DON’T ask leading questions
A leading question is one that prompts a specific answer, and you want to avoid those because they’ll give you bad data. For example, asking ‘What makes our product better than our competitors’ products?’ might boost your self-esteem, but it won’t get you good information because you’re planting the idea that your own product is the best on the market.
DON’T ask loaded questions
A loaded question is similar to a leading question, but it does more than just push a bias—it phrases the question such that it’s impossible to answer without confirming an underlying assumption.
A common (and subtle) form of loaded survey question would be, ‘What do you find useful about this article?’ If we haven’t first asked you whether you found the article useful at all, then we’re asking a loaded question.
10 survey use cases: what you can do with good survey questions
Effective survey questions can help improve your business in many different ways. We’ve written in detail about most of these ideas in other blog posts, and I’ve included links for each of them below.
Use case #1: to create user personas
A user persona is a semi-fictional character based on the people who currently use your website or product. A persona combines psychographics and demographics and reflects who they are, what they need, and what may stop them from getting it.
Examples of questions to ask:
Describe yourself in one sentence, e.g. ‘I am a 30-year old marketer based in Dublin who enjoys writing articles about user personas.’
What is your main goal for using this website/product?
What, if anything, is preventing you from doing it?
📚 Read more → our post about creating simple and effective user personas in 4 steps highlights some good survey questions to ask when creating a user persona.
Use case #2: to understand why your product is not selling
Few things are more frightening than stagnant sales. When the pressure is mounting, you’ve got to get to the bottom of it, and good survey questions can help you do just that.
What made you buy the product? What challenges are you trying to solve?
What did you like most about the product? What did you dislike the most?
What nearly stopped you from buying?
📚 Read more → here’s a detailed piece about the best survey questions to ask your customers when your product isn’t selling, and why they work so well.
Use case #3: to understand why people leave your website
If you want to figure out why people are leaving your website, you’ll have to ask questions.
A good format for that is an exit-intent pop-up survey, which appears when a user clicks to leave the page.
Another way is to focus on the people who did convert, but just barely: something Hotjar CEO David Darmanin considers essential for taking conversions to the next level. By focusing on customers who bought your product (but almost didn’t), you can learn how to win over another set of users who are similar to them: those who almost bought your products, but backed out in the end.
Examples of questions to ask:
Not for you? Tell us why. (exit-intent pop-up—ask when a user leaves without buying)
What almost stopped you from buying? (ask post-conversion)
Get started with our Exit Intent survey template.
📚 Read more → HubSpot Academy increased its conversion rate by adding an exit-intent survey that asked one simple question when users left their website: “Not for you? Tell us why.”
“I spent the better half of my career focusing on the 95% who don’t convert, but it’s better to focus on the 5% who do. Get to know them really well, deliver value to them, and really wow them. That’s how you’re going to take that 5% to 10%.”
Use case #4: to understand your customers’ fears and concerns
Buying a new product can be scary: nobody wants to make a bad purchase. Your job is to address your prospective customers’ concerns, counter their objections, and calm their fears, which should lead to more conversions.
📚 Read more → take a look at our no-nonsense guide to increasing conversions for a comprehensive write-up about how you can discover the drivers, barriers, and hooks that lead people to convert on your website.
“Overall, if you want to deliver an AMAZING customer experience, the SINGLE MOST IMPORTANT thing you can do is LEARN more about your customers so you can custom tailor that experience to them. It's not magic. It's not science. It is simply building a tighter relationship with your customer.”
Use case #5: to drive your pricing strategy
Are your products overpriced and scaring away potential buyers? Are you underpricing and leaving money on the table?
Asking the right questions will help you come up with a pricing structure that maximizes profit, but you have to be delicate about how you ask the questions. Don’t ask directly about price; otherwise, you’ll seem like you’re unsure of the value you offer. Instead, ask questions that uncover how your products serve your customers and what would inspire them to buy more.
How do you use our product/service?
What would persuade you to use our product more often?
What’s the one thing our product is missing?
📚 Read more → we wrote a series of blog posts about managing the early stage of a SaaS startup, which included a post about developing the right pricing strategy —something businesses in all sectors could benefit from.
Use case #6: to measure and understand product/market fit
Product/market fit is about understanding demand and creating a product that your customers want, need, and will actually pay money for. A combination of online survey questions and one-on-one interviews can help you figure this out.
What's one thing we can add that would make [product name] indispensable for you?
If you could change just one thing in [product name], what would it be?
📚 Read more → in our series of blog posts about managing the early stage of a SaaS startup, we covered a section on product/market fit , which has relevant information for all industries.
Use case #7: to choose effective testimonials
Human beings are social creatures. We’re influenced by people who are similar to us, and testimonials that explain how your product solved a problem are the ultimate form of social proof. The following survey questions can help you get some great testimonials.
What changed for you after you got our product?
How does our product help you get your job done?
How would you feel if you couldn’t use it anymore?
📚 Read more → in our post about positioning and branding your products , we cover the type of questions that help you get effective testimonials.
Use case #8: to measure customer satisfaction
It’s important to continually track your overall customer satisfaction so you can address any issues before they start to impact your brand’s reputation. You can do this with rating scale questions.
For example, at Hotjar, we ask for feedback after each customer support interaction (one important measure of customer satisfaction). We begin with a simple, foot-in-the-door question to encourage a response, then use the information to improve our customer support, which is strongly tied to overall customer satisfaction.
How would you rate the support you received? (1-5 scale)
If 1-3: How could we improve?
If 4-5: What did you love about the experience?
📚 Read more → our beginner’s guide to website feedback goes into great detail about how to measure customer service, NPS, and other important success metrics.
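A common convention for rolling rating-scale answers up into a single number (one standard way to score it, not necessarily the exact formula Hotjar uses) is to report the percentage of respondents who picked the top two options on the 1-5 scale. A minimal Python sketch, with sample ratings invented for illustration:

```python
def csat(ratings, satisfied_threshold=4):
    """Percentage of respondents rating at or above the threshold (1-5 scale)."""
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return round(100 * satisfied / len(ratings), 1)

# 6 of 8 respondents chose 4 or 5 -> 75.0% satisfied
print(csat([5, 4, 4, 3, 5, 5, 2, 4]))  # 75.0
```

Tracking this number over time matters more than any single reading: a sudden dip tells you something in the support experience changed.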
Use case #9: to measure word-of-mouth recommendations
The Net Promoter Score (NPS) measures how likely your customers are to recommend your products or services to their friends or colleagues. NPS sets a higher bar than customer satisfaction because customers have to be really impressed with your product to recommend you.
Example of NPS questions (to be asked in the same survey):
How likely are you to recommend this company to a friend or colleague? (rate 0-10)
What’s the main reason for your score?
What should we do to WOW you? (optional)
Pro tip: you can use our NPS calculator to crunch the numbers.
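The arithmetic is also simple enough to do yourself: respondents scoring 9-10 count as promoters, 7-8 as passives, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal Python sketch (the function name and sample scores are our own, for illustration):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    Promoters (9-10) minus detractors (0-6), as a percentage;
    passives (7-8) count toward the total but toward neither bucket.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors out of 10 responses
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 5]))  # 10
```

Because detractors subtract from the score, NPS ranges from -100 (everyone is a detractor) to +100 (everyone is a promoter).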
📚 Read more → we created an NPS guide specifically for e-commerce companies, but it has plenty of information that will help companies in other industries as well.
Use case #10: to redefine your messaging
How effective is your messaging? Does it speak to your clients’ needs, drives, and fears? Does it highlight your strongest selling points?
Asking the right survey questions can help you figure out what marketing messages work best, so you can double down on them.
Questions to ask:
What attracted you to [brand or product name]?
Did you have any concerns before buying [product name]?
Since you purchased [product name], what has been the biggest benefit to you?
If you could describe [brand or product name] in one sentence, how would you do it?
What is your favorite thing about [brand or product name]?
How likely are you to recommend this product to a friend or colleague? (rate 0-10)
📚 Read more → we talk about positioning and branding your products in a post that’s part of a series written for SaaS startups, but even if you’re not in SaaS (or you’re not a startup), you’ll still find it helpful.
“The products that are most-liked aren’t necessarily the ones you sell most of. Just because a restaurant might sell a lot of lasagna doesn’t mean their lasagna is well-liked. In fact, it might be deterring customers from ever coming back. By knowing which of your products is most liked, you can design the most effective sales funnel, so your most-liked products aren’t hidden away, and improve your existing products to make purchasers more likely to buy from you again.”
Frequently Asked Questions
How many people should I survey? What should my sample size be?
A good rule of thumb is to aim for at least 100 replies that you can work with.
You can use our sample size calculator to get a more precise answer, but understand that collecting feedback is research, not experimentation. Unlike an experiment (such as an A/B test), all is not lost if you can’t reach a statistically significant sample size. In fact, as few as ten replies can give you actionable information about what your users want.
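If you’re curious what a sample size calculator does under the hood, most rely on the standard formula for estimating a proportion, n = z²·p·(1−p)/e². A Python sketch of that textbook formula (not necessarily the exact one our calculator implements):

```python
import math

def sample_size(confidence_z=1.96, margin_of_error=0.05, proportion=0.5):
    """Minimum responses needed to estimate a proportion.

    Standard formula n = z^2 * p * (1 - p) / e^2, using the worst-case
    proportion p = 0.5 unless you have a better prior estimate.
    z = 1.96 corresponds to 95% confidence.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence, +/-5% margin of error
print(sample_size())  # 385
```

Loosening the margin of error shrinks the requirement quickly: at ±10% you only need 97 responses, which is why small surveys can still be informative.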
How many questions should my survey have?
There’s no perfect answer to this question, but we recommend asking as few as you need to ask in order to get the information you want. Remember, you’re essentially asking someone to work for free, so be respectful of their time.
How do I analyze open-ended survey questions?
A big pile of qualitative data can seem intimidating, but there are some shortcuts that make it much easier to analyze. We put together a guide for analyzing open-ended questions in 5 simple steps , which should answer all your questions.
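One of those shortcuts is keyword tagging: map each reply to recurring themes and count the mentions, so the pile of free text becomes a ranked list of issues. A minimal Python sketch, with themes, keywords, and sample replies all invented for illustration (the linked guide describes the full process):

```python
from collections import Counter

# Hypothetical theme -> keyword mapping; build yours from a first read of the replies
THEMES = {
    "pricing": ["price", "expensive", "cost"],
    "usability": ["confusing", "hard to use", "unclear"],
    "performance": ["slow", "lag", "loading"],
}

def tag_replies(replies):
    """Count how many replies mention each theme's keywords (one count per reply per theme)."""
    counts = Counter()
    for reply in replies:
        text = reply.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1
    return counts

replies = [
    "Too expensive for what it does",
    "Checkout was confusing and slow",
    "Loading times are painful",
]
print(tag_replies(replies))  # pricing: 1, usability: 1, performance: 2
```

Simple keyword matching misses synonyms and sarcasm, so treat the counts as a starting point for reading the replies in each bucket, not a final answer.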
Will sending a survey annoy my customers?
Honestly, the real danger lies in not collecting feedback at all. Without knowing what users think about your page and why they do what they do, you’ll never create a user experience that maximizes conversions. The truth is, you’re probably already doing something that bugs them more than any survey or feedback button would.
If you’re worried that adding an on-page survey might hurt your conversion rate, start small and survey just 10% of your visitors. You can stop surveying once you have enough replies.
Our survey templates should give you a solid starting point on your journey toward really understanding your users. Once you get some initial feedback, you can craft questions that dive deeper into their heads to uncover their most fundamental drives.
Do you have some favorite survey questions of your own? Share them in the comments, and let us know what they taught you about your users.
Net Promoter, Net Promoter System, Net Promoter Score, NPS and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Fred Reichheld and Satmetrix Systems, Inc.