
Mar 24, 2010 | 9 minute read

Tips for Developing Customer Surveys

written by Linda Bustos

I recently caught up with Theresa Maddix, a Satisfaction Research Analyst with ForeSee Results to interview her on the subject of customer surveys. ForeSee Results is a market research consulting firm which also offers a customer survey tool for retailers.

Linda: Where do you start when planning a new survey?

Theresa: No matter what the purpose of your website survey, it is helpful to start with the end in mind. A series of four questions can help you do this:

1. What do you hope to accomplish with the survey?
2. What will your deliverables look like and what will you use as Key Performance Indicators?
3. How can you demonstrate that your survey data is credible, reliable and accurate?
4. What stories might your data tell?

After you have taken care of these questions, the mechanical aspects of survey development should fall into place smoothly.

Linda: Are there any best practices around when to invite a site visitor to complete the survey, or who to invite?

Theresa: The question of when to invite a site visitor to take a survey is dependent upon what the survey hopes to accomplish. For many retailers a primary goal is to move visitors further along the purchase funnel from awareness to consideration to purchase. If we know that the bounce rate on a site is very high, we might work on awareness and recommend inviting customers after a single page view to find out why they are leaving so fast. If we know from clickstream or prior survey data that visitors are getting stuck in the checkout process, we may either expand the page view requirements or only present the survey after the customer abandons from a checkout page.
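
To make that kind of trigger logic concrete, here is a minimal sketch in Python. The Visit fields, thresholds and function names are purely illustrative assumptions, not ForeSee's actual rules:

    from dataclasses import dataclass

    @dataclass
    class Visit:
        page_views: int          # hypothetical: pages seen this session
        abandoned_checkout: bool # hypothetical: left from a checkout page

    def should_invite(visit, min_page_views=1, checkout_survey=False):
        """Decide whether to show the survey invitation for this visit."""
        if checkout_survey:
            # Only invite visitors who left during checkout.
            return visit.abandoned_checkout
        # Otherwise invite once the visitor has viewed enough pages
        # (a single page view if the goal is to understand bounces).
        return visit.page_views >= min_page_views

    print(should_invite(Visit(page_views=1, abandoned_checkout=False)))   # True
    print(should_invite(Visit(page_views=3, abandoned_checkout=True),
                        checkout_survey=True))                            # True

The point of the sketch is simply that the invitation rule changes with the goal: a one-page-view threshold for bounce research, a checkout-abandonment condition for checkout research.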

The second part of this question is who to ask to take the survey. The benefit of survey data online is that with any website generating a decent amount of traffic we can easily receive enough respondents for statistical validity even after the data is segmented. We find that it is best to provide a random invitation without preconceptions of rates of first time to returning visitors, males or females, or rates of those visiting to purchase vs. browse. Each website has its own fascinating audience data to reveal in these areas. The time to segment the data is when we are looking for the actionable insights that we can derive from the surveys, not in the development stage.
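
A rough sketch of that approach, with an illustrative 5% invitation rate and hypothetical field names: invite at random up front, and segment only when analyzing the collected responses.

    import random
    from collections import defaultdict

    INVITE_RATE = 0.05  # illustrative sampling rate, not a recommendation

    def randomly_invite():
        """Invite visitors at random, with no targeting by visitor type."""
        return random.random() < INVITE_RATE

    def segment(responses, key):
        """Group completed responses by a respondent attribute at analysis time."""
        groups = defaultdict(list)
        for response in responses:
            groups[response[key]].append(response)
        return groups

    responses = [{"visit_purpose": "purchase", "satisfaction": 8},
                 {"visit_purpose": "browse", "satisfaction": 6}]
    print(list(segment(responses, "visit_purpose")))  # ['purchase', 'browse']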

Linda: What is a reasonable "participation rate" to expect? How do you know if your engagement rate is lower than average/could be improved?

Theresa: One of our underlying principles at ForeSee Results is Continuous Measurement. The site audience for each site we measure is distinct and the best yardstick to measure against in most cases is the site itself. When developing a survey for a site it is helpful to first review what the participation rate has been for prior surveys and adjust a new survey accordingly if it has declined.

Even with sites where there is no previous measure, we do not like to rely on a hard and fast percentage, such as a 12% completion rate. Our best practice is to determine independently whether the survey is too long: does it take more than five minutes to complete? How many questions are being asked? Are the questions too involved?

The participation rate is only one area of concern with surveys that are too long. Another often-overlooked issue is respondent fatigue: when a survey imposes too heavy a cognitive load, respondents stop considering each answer and simply start filling in the survey blindly.

Linda: Like a checkout process, a customer survey can be "abandoned." Do you have any tips on reducing "survey abandonment"?

Theresa: Survey abandonment goes hand in hand with the participation question above. In addition to what we have mentioned for time to completion, sheer number of questions and complexity of questions, we would add the format of the survey as one way to reduce survey abandonment.

ForeSee Results uses a "scrollable" survey design—with the survey all on one page instead of many screens—as a result of research findings from Mick Couper of the Institute for Social Research (ISR) at the University of Michigan, the premier survey research organization in the world.

Couper wrote a comprehensive report that appeared in the Public Opinion Quarterly addressing many different web survey design issues. One of his findings was that scrollable survey design increased respondent efficiency in answering questions.

Linda: What are the best practices for survey design and usability? Is there a proven length or number of questions that can be used as a guideline?

Theresa: Internally at ForeSee Results, we attempt to provide surveys that take no more than 3-5 minutes to complete with a maximum of 46 total questions (not all questions are asked of all respondents because of skip patterns). We also work to ask questions that customers know how to answer almost intuitively. Our best refining tool for questions is really the respondents themselves. Almost every survey has questions with “Other, please specify” responses that shed light on areas where better clarity is needed or where respondents are confused about a question.
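
For readers unfamiliar with skip patterns, here is a small illustrative sketch of the idea; the question texts and branching logic are hypothetical, not ForeSee's survey:

    # A follow-up question is only shown when an earlier answer makes it relevant.
    questions = [
        {"id": "made_purchase", "text": "Did you make a purchase today?"},
        {"id": "checkout_ease", "text": "How easy was checkout to complete?",
         "ask_if": lambda answers: answers.get("made_purchase") == "yes"},
    ]

    def applicable_questions(answers):
        """Return only the questions that apply, given the answers so far."""
        return [q for q in questions
                if "ask_if" not in q or q["ask_if"](answers)]

    print(len(applicable_questions({})))                        # 1 (follow-up skipped)
    print(len(applicable_questions({"made_purchase": "yes"})))  # 2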

Linda: Can surveys be A/B split tested for usability and clarity?

Theresa: A/B and multivariate tests are great tools to improve site performance and customer conversion. One area where these tests can yield fantastic results is in page design.

For survey design, however, where the best results come from data collected with a tried and true format and methodology, we do not consider A/B or multivariate testing to be a best practice. The one area where multivariate testing might apply is question language. We recommend an iterative development approach for question language, where changes are made rapidly to surveys based on customer feedback and responses. We also recommend using a "question bank" of tried and true questions and designs as a starting point.

Linda: Is there ever a case for running concurrent surveys for different purposes (e.g. satisfaction with website vs. consumer sentiment around value proposition, pricing, branding, assortment or particular products or satisfaction with tool like site search or product finder)?

Theresa: There is absolutely a case to be made for running concurrent surveys. We have clients who run as many as eight different surveys from the same and different areas of their website. Surveys that might run on the same site include browse surveys, search surveys, fulfillment surveys, multichannel surveys, social media surveys, mobile surveys, competitor surveys and feedback surveys. Each survey serves its own very distinct purpose with very specific goals and KPIs in mind. To run multiple, concurrent surveys, however, the site must have enough traffic volume to be able to support the number of invitations required.

Linda: Are there any universal questions that every online retailer should ask (or avoid asking)?

Theresa: Each site is unique, but because we rely on the American Customer Satisfaction Index and feel customer satisfaction is central to business success, we always ask three satisfaction questions. We also have future behavior questions that, while not universal, are asked on most surveys. Some examples are likelihood to return and likelihood to recommend.

We have developed standard model questions around site elements that are common to most sites, in areas such as Look and Feel, Navigation and Site Performance. These are not universals, but since they are part of our model we can not only provide trend lines over time for individual sites with these questions but also benchmark across all of our sites and within many industries and verticals.

Earlier, we also alluded to our "question bank," where we have questions that, while not universal, are top contenders for many sites. These questions have to do with purchase intent, acquisition source, frequency of visit, ability to find and ability to accomplish tasks.

Linda: Because surveys can provide quantitative and qualitative data, is there a minimum number of surveys a company should collect to be "statistically valid," or can you get reliable data from only a handful of surveys?

Theresa: There absolutely is a minimum number of surveys needed for statistical validity. At ForeSee Results, we do not begin to report out our data until we have at least 300 respondents. At this level we feel comfortable with the confidence intervals around our reporting. Of course, as readers who are old hats at trending data will know, when you get down to 50 respondents or fewer the trend lines can go haywire and it is very difficult to have any confidence in making recommendations.
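
The standard margin-of-error formula for a proportion shows why those two sample sizes behave so differently. This quick Python sketch uses textbook statistics, not ForeSee's specific methodology:

    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate 95% margin of error for a proportion p from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    print(round(margin_of_error(0.5, 300), 3))  # ~0.057 -> roughly +/- 6 points
    print(round(margin_of_error(0.5, 50), 3))   # ~0.139 -> roughly +/- 14 points

At 300 respondents the estimate is pinned down within a few points; at 50 it can swing by roughly twice as much, which is exactly why small-sample trend lines look erratic.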

Linda: What type of personal information is important to collect in every survey?

Theresa: On the contrary, it is usually much more effective not to gather personal information, so that respondents feel secure in the confidentiality of what they share. Personal identifiers are a quick way to skew survey results, because respondents answer based on how they think the company will react. Instead, we place unique identifiers on each survey, much as the clickstream world assigns individual session IDs. When personal information is gathered, it must be voluntary; in some cases we do ask respondents who are willing or interested to provide a minimal amount of personal information for future contact.
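
A minimal sketch of that idea, with hypothetical field names: each response is stored under an anonymous identifier, optionally linked to a clickstream session ID rather than to a person.

    import uuid

    def new_response(answers, session_id=None):
        """Record answers under an anonymous respondent ID instead of personal details."""
        return {
            "respondent_id": uuid.uuid4().hex,  # anonymous, like a session ID
            "session_id": session_id,           # optional link to clickstream data
            "answers": answers,
        }

    print(new_response({"satisfaction": 9}))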

Linda: Is it ever okay to follow up with a survey participant who has provided personal information for any reason?

Theresa: Yes, there are some instances where it is necessary to follow up with respondents who have voluntarily shared their contact information. In fact, some retailers will follow up directly with visitors who report a negative experience and request a follow-up. More commonly, respondents who have stated their willingness are asked to provide information so they can be part of future research. One example is a multichannel survey that is provided after a site visit and a visit to a bricks-and-mortar store.

Linda: There's data collection, and then there's gleaning insights from data. Any tips on how to make intelligent decisions based on survey responses?

Theresa: The best analysts are like Sherlock Holmes in the recent movie with Robert Downey, Jr. They always start their survey analysis with the facts and refuse to overlook apparently minor details just because those details don't fit a simpler face-value explanation. It is often in these details, and in a marriage of the quantitative and qualitative data, that great recommendations are born. Any strong analyst can easily tell multiple stories of "aha" moments they've had reading clustered visitor comments, seeing a trend line shift almost imperceptibly, or noticing something new in a regular pattern of survey findings.

Linda: Any anecdotes or case studies you'd like to share from your experiences using customer satisfaction surveys?

Theresa: We have a library of case studies where our customers have experienced incredible gains as a result of our products, and we would be happy to share them with any reader who asks for more information.

Here is just one recent example: Scholastic, a longtime bookseller to both students and teachers, faced below-average conversion rates in spite of their site being recognized for its best-practices design.

ForeSee data helped determine what customers thought and where satisfaction was lagging. On the Store site, consumers were looking for products advertised with promotional pricing in Book Club flyers, but Scholastic is barred from offering these products on its website due to special promotional contracts that enable it to sell discounted products through the schools. Scholastic underwent a major push to migrate as much Book Club product to the website as possible, given the regulations. Conversion rates increased as a result.

Big thank you to Theresa for sharing her wisdom on this subject!