Surveys
A Classic Research Methodology
Surveys are the most used and, unfortunately, the most abused research methodology. It is the least common denominator of research methods, in that everyone from fifth graders to college professors can use it, but it takes a significant amount of thought to structure a valid, scalable survey.
There was a time in the lifespan of this classic technique when participants felt "special" when asked to complete a survey, as though their opinion mattered. JD Power and Nielsen were masters of this, and Nielsen became famous for its technique of enclosing a crisp dollar bill in every mailed survey to create a sense of obligation on the part of the recipient.
With the advent of the internet, online surveys have become incredibly pervasive in our daily lives. To understand just how pervasive surveying is, I decided to count how many surveys I was presented with while going about my usual business on a Saturday. The final count? Fifteen, including three on shopping receipts. (I was fortunate in that CVS was an early errand that day, so I had an ample 24" receipt/scroll upon which to record the day's findings.)
Consider also that this survey overload is a reinforcing cycle: researchers receive fewer responses to surveys, so they require a higher number of survey sends (impressions) to reach the level of validity they require. Now multiply this escalation in survey sends by thousands upon thousands of companies, researchers, and media outlets, all of which are in a similar situation. In essence, we have our own miniature Tragedy of the Commons in survey research, in that it is a race to deplete what is an openly accessible, but finite, resource: attention.
From USA Today, January 7, 2012, "For some consumers, surveys breed feedback fatigue":
"Survey fatigue" has long been a concern among pollsters. Some social scientists fear a pushback on feedback could hamper important government data-gathering, as for the census or unemployment statistics.
If more people say no to those, "the data, possibly, become less trustworthy," said Judith Tanur, a retired Stony Brook University sociology professor specializing in survey methodology.
Response rates have been sinking fast in traditional public-opinion phone polls, including political ones, said Scott Keeter, the Pew Research Center's survey director and the president of the American Association for Public Opinion Research. Pew's response rates have fallen from about 36 percent in 1997 to 11 percent last year, he said. The rate includes households that weren't reachable, as well as those that said no.
The Associated Press conducts regular public opinion polling around the world and has seen similar trends in response rates. There's little consensus among researchers on whether lower response rates, in themselves, make results less reliable.
Keeter attributes the decline more to privacy concerns and an ever-busier population than to survey fatigue. But the flurry of customer-feedback requests "undoubtedly contributes to people putting up their guard," he said.
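To make the arithmetic behind that decline concrete, here is a minimal sketch (Python, illustrative numbers only; the 400-complete target is hypothetical) of how many invitations it takes to reach a fixed number of completed surveys at the 1997 and recent Pew response rates quoted above:

```python
# Illustrative arithmetic only: how many invitations are needed to reach
# a fixed number of completed surveys at a given response rate.
# The 400-complete target is a hypothetical example.
import math

target_completes = 400

for response_rate in (0.36, 0.11):  # Pew's 1997 and recent rates, per the article
    invitations = math.ceil(target_completes / response_rate)
    print(f"At {response_rate:.0%}: {invitations:,} invitations needed")

# At 36%: 1,112 invitations needed
# At 11%: 3,637 invitations needed
```

Roughly three times the invitations for the same sample, multiplied across every organization running surveys, is what keeps the cycle turning.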
This has become a very real problem in consumer research, so we must get creative in how we first contact participants, how we continue the research relationship, and how we survey in the first place. We will discuss this a bit more in a moment, but in some cases, it can be helpful to take non-traditional approaches to finding, approaching, and continuing relationships with your participant pool.
Duke University Initiative on Survey Methodology
In regard to pure survey design, Duke University has an excellent set of condensed Tipsheets to help you create well-structured surveys. They have examples throughout to help explain not only the problem and solution, but also what the question looks like in practice.
These Tipsheets cover the major points you need to pay attention to in any survey design, and are a great resource to bookmark. If you happen to have staff helping to write or deploy surveys, it can be useful to point them to the Duke Tipsheets, not only to help them do a better job of adding questions to the survey, but also of reviewing the other questions.
Eleven Common Mistakes in Survey Design
Beyond the prescriptions of the Duke Tipsheets, there are quite a few common survey mistakes that happen in the field, and any of them can have significantly negative outcomes for survey results and completion rates. These mistakes are not limited to just the less experienced; they happen all the time to experienced research designers.
What can make mistakes especially damaging is if they happen early in a series of surveys. So, for example, let's imagine you seek to create a "survey narrative" over time, building a robust set of baseline data on a core set of questions. If you commit an especially egregious mistake, such as omitting what would be a very common response to one of the questions, it not only distorts the results of that survey, but of every survey that contains that question. You may have built three years of baseline data, but the results may be in question because of one miscue carried over from the first survey.
Mistake #1: Survey Too Long
In a short-attention, low-engagement context, lengthy surveys are a liability. Participants will start to offer superficial, single-word, or repetitive answers, or might abandon the survey altogether. Especially in the case of research pools you intend to survey over time, abandons reduce the chance that your participants will opt in to the next round of surveys. Furthermore, long surveys can skew results invisibly through attrition of participants, as we likely do not want responses only from those who can devote 50 uninterrupted minutes in the middle of a workday.
Solutions: If you have a survey over 10 minutes, strongly consider carrying content over to another survey or otherwise splitting the survey. Regardless of the survey's length, one of the first things to do before the participant begins is to be clear about average completion time (e.g., "Average time required: 7 minutes"). This creates not only an expectation on the part of the participant, but a type of implied contract. In the end, it is also common courtesy and good research practice. If you have stimuli integrated into the survey, such as animatics, videos, or prototypes, you can stretch the 10-minute boundary, but you still need to be conscious of the (actual) time to complete.
Quite a few of the online survey tools have a feature you can enable to chart progress (as a percentage or bar chart) at the top of the page. This allows someone to know where they are in the survey, generally allows better pacing, and can reduce opt-outs.
You may also have success gaining participants by using especially short survey length as a selling point, for example, a "one-minute survey," or even a "one-question survey." There are a few mechanisms for continuance with a willing participant you can then use, such as an opt-in for future surveys.
Mistake #2: Questions Too Long
Multiple-clause questions with multiple modifiers are confusing to participants, and can lead to dirty data without you ever knowing it. There is no mechanism for you to know your results were skewed.
Solutions: Even in highly educated participant pools, simplicity and clarity are paramount. Chances are, multiple-clause questions can be worded more simply, and if they can't be, you will need to split the question. Save the lawyerly questions for the courtroom, counselor.
It can also be a great practice for larger surveys to include a simple mechanism that allows the participant to "flag" questions. You can add this as a cell at the bottom of the survey page, and some survey software allows it as a pop-in tab from the side of the survey page. Either way, the goal is for a participant to let you know they found a question confusing, and perhaps add a sentence explaining why.
Mistake #3: Survey Hard to Internalize for Participants
This is a classic issue that often goes unnoticed or unchallenged. These are surveys wherein each question is voiced in the third person, using impersonal language and "one" or "it" constructions. For example: "It has been argued that one could tie a shoe with one hand. Agree or disagree." These types of constructions add a layer of interpretation for the participant, and may move their answers from their personal thoughts and feelings into the hypothetical. In the shoe-tying example, am I being asked if I can tie a shoe with one hand, if someone else conceivably could, or if I am aware of the argument?
Solutions: Be direct and personal. Your participants' responses are only valid in reference to themselves, not the thoughts or feelings of others or hypothetical "ones" hovering somewhere in the ether. Replace "one" and "it" with "you." If the more personal approach to these questions seems casual, it's because it is. If you prefer your surveys to sound clinical, yet be unclear and ineffective, that is entirely your prerogative.
Mistake #4: Survey Poorly Vetted
This tends to be more of an overall issue, and can range from typos and awkward questions to a lack of logical flow through the survey.
Solutions: Use a group of peers, or "pre-deploy" your survey to a small group of live participants and contact them immediately afterward. This is especially easy to do on the web, as you can watch survey completions come in as they happen and call the participant to ask whether all of the questions made sense and whether they have any suggestions. The goal here is to take just a handful of live, unprompted survey participants and intercept them immediately after they take the survey.
Mistake #5: Choppy or No Flow
Surveys should feel like a good interview for participants, having a logical flow and working seamlessly from one question to the next. Choppy surveys tend to feel like being interviewed by a 4th grader: the questions in isolation may be valid, but it feels like you are being barraged with unlinked questions flowing from a stream of consciousness. This may seem a purely stylistic concern, but it can indeed confuse and detach participants from the survey.
Solutions: Don't be afraid to use "chapters" or breaker pages to allow your participants a bit of a break and to shift gears. So, if I were transitioning from a series of questions about demographics into asking about experiences with the product category, I would insert a blank breaker page and note something like, "Your Experiences With [Category]: In the following section of this survey, we would like to understand your thoughts, feelings, and experiences with [category]. By [category], we are referring to products that [definition of the category to make sure all are on the same page]. It can be helpful to take a moment to think of specific times you have used [category] in the past to help you remember. Please restrict your answers to your experiences, and not the experiences of others."
Generally, the goal is to start with the simpler and more straightforward background questions, and build on them until the most complex or emotionally charged are at about the 80% completion mark. The last 20% of the survey tends to act as a "cool-down" and reflection, capturing any closing remarks, feedback, or narrative responses. As a rule of thumb, if there is a chronological flow to the actual experience being examined (e.g., first impressions, use, disposal), the survey should mirror it.
Mistake #6: Incomplete Answer Choices
There are few faster ways to lose a well-intentioned participant than to run them through two or three questions that do not allow them to answer as they intend. Essentially, the participant realizes that they will not be able to express their thoughts, and that there is therefore no reason to complete the survey, so they abandon it.
Solutions: The way to resolve this problem is simple: always include an open-ended "Other" as a selection. Not only will it allow you to capture the responses, but "Other" responses can be a hotbed for new thinking and unexpected answers.
Having an internal review and a limited "pre-deploy" of the survey will also help you avoid many incomplete answers.
Mistake #7: Limited Variation in Questions
Thirty Agree/Disagree questions in a row do not make for a terribly engaging survey for participants.
Solutions: Before writing any questions, list the topics you seek to address in your survey. It may help to arrange them in outline format to create chapters, but the overall goal is to avoid general line-listing of topics that breed uninspired questions. While you do not need to balance the types of questions in your survey, it can be helpful to give it a read-through with an eye toward the question type to make sure you have some variation.
Mistake #8: Swapping Axes or Scales
Having four of five questions with the scale arranged left to right, and the fifth with the scale reversed, can lead to unintended answers and the dirty data that comes with that confusion or omission.
Solutions: While it can occasionally be useful to add a "check question" to make sure that a participant is not running through the survey and clicking in the same place every time, varying question types serves the same function. Having thirty Agree/Disagree questions in a row necessitates a check question or an axis reversal, but you shouldn't have thirty of the same question type in a row to begin with. One way or another, you shouldn't need to do axis reversals.
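If you do inherit a long run of same-scale questions, a quick post-hoc check for "straight-lining" (participants clicking the same point every time) can at least flag suspect responses. A minimal sketch, assuming the responses sit in a pandas DataFrame with one column per scale item (the column names and data here are made up):

```python
import pandas as pd

# Hypothetical export: one row per participant, one column per scale item.
responses = pd.DataFrame({
    "q1_agree": [5, 3, 4, 5],
    "q2_agree": [5, 4, 2, 5],
    "q3_agree": [5, 2, 4, 5],
})
scale_cols = ["q1_agree", "q2_agree", "q3_agree"]

# A participant who gave an identical answer to every scale item is a
# candidate straight-liner; review or follow up rather than auto-discard.
straight_liners = responses[responses[scale_cols].nunique(axis=1) == 1]
print(straight_liners.index.tolist())  # [0, 3] in this toy data
```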
Mistake #9: No Open-Ended Components
Too often, expediency in calculation overrules the thoroughness of results. People tend to lean toward questions that can be calculated and tallied for this reason, or because they do not know how to treat or score open-ended questions.
Solutions: Although there are a wide variety of ways you can compile and summarize open-ended responses, remember that just because you capture data does not mean that you have to instantly undertake calculations and manipulations. So, for example, if you have twenty questions ready to go and five open-ended questions you aren't sure how you're going to score, deploy the survey. As long as you are able to capture the open-ended responses, that information does not have a shelf life.
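When you do come back to those open-ended responses, even a rough term tally can surface recurring themes before you commit to a formal coding scheme. A minimal sketch (illustrative only; the responses and stopword list are made up):

```python
from collections import Counter

# Made-up open-ended responses captured by the survey.
answers = [
    "The setup instructions were confusing",
    "Setup took too long and the instructions were unclear",
    "Loved it once setup was done",
]

stopwords = {"the", "and", "was", "were", "too", "it", "a", "once"}
words = [
    word
    for answer in answers
    for word in answer.lower().split()
    if word not in stopwords
]

# Frequent terms are a starting point for themes, not a scoring scheme.
print(Counter(words).most_common(5))
# [('setup', 3), ('instructions', 2), ('confusing', 1), ('took', 1), ('long', 1)]
```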
Mistake #10: No Filters or Branches
The researcher does not include filter questions and decision points in the survey (i.e., asking groups different questions based on their responses to an earlier question). As a result, the questions are either inaccurate for a large proportion of participants, or the logic and structure of the overly broad questions are so convoluted that the survey becomes difficult to read. Either way, the end result is not good.
Solutions: I would argue that one of the most useful functional benefits of online survey tools is the ability to introduce filter questions to create a branched survey. Use it. Not having branches and filter questions when they are needed is usually a sign of a poorly executed survey, an inattentive researcher, or generally sloppy research that assumes you can send everyone every question and they will answer them all.
If you want to check for the need for filters and branches in your survey, try taking the survey while acting as a member of each of the different groups of participants. If the survey is all about understanding thoughts about a product trial, and there is the potential that a handful of people receiving the survey have not yet used the product, include a filter question to separate that group and ask them questions specific to their experience, why they haven't used the product yet, and so on.
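Most online survey tools let you configure this branching visually; conceptually, it is nothing more than routing participants based on the filter question's answer. A minimal sketch of that routing (the section names are hypothetical):

```python
# Conceptual sketch only: a filter question routes participants to
# different question blocks; real survey tools configure this visually.
def next_section(has_used_product: bool) -> str:
    if has_used_product:
        return "trial_experience_questions"  # first impressions, usage, issues
    return "non_user_questions"              # awareness, barriers, intent to try

print(next_section(True))   # trial_experience_questions
print(next_section(False))  # non_user_questions
```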
Mistake #11: Timing Mistakes
Consider surveys related to an experience the participant had with a prototype, for example, where there is a two-month lag because the researcher waited until everyone in the beta group had received their prototype. The participant has had the prototype on the shelf for one and a half of those two months and has forgotten all of their first impressions and reactions.
Solutions: Lay out the project timing before you begin. If there are going to be any significant lag times between the experience you seek to understand and survey deployment, either split the beta groups and survey them separately, or simply include the survey with the prototype itself. I tend to be a heavy proponent of including the survey right along with the product/offering being tested, as it tends to make participants more attentive to the experience and reminds them to record their thoughts and feelings.
It is also possible to send a survey too soon in regard to understanding experiences, as you can send the survey before the participant has even received the product.
Selected Tools for Survey Research
While the best practices for survey design and question structure are the same regardless of how you deploy the survey, there are a few different approaches we may use to fit our research needs. Some offer almost instantaneous results by drawing on massive pools of participants, while others give you the flexibility to deploy surveys quickly wherever you need them. For the sake of this discussion, I am going to assume we are already aware of our ability to survey via hard copy, phone, and other conventional means, especially for "nearfield" groups like customers.
In testing spaces and potential offerings, these tools can be used for anything from deploying surveys to a closed beta group of customers to conducting wide-open public polling to gauge how many people are familiar with a topic.
For example, I work with a charity benefiting children with Neurofibromatosis Type 1 (NF1), which has about as many sufferers worldwide as multiple sclerosis (about 3MM). I wanted a quick gauge of awareness to test a hypothesis as I was setting up a messaging and branding platform, so I did a quick online awareness test with 500 American adults. The findings were that MS awareness was around 94% among American adults, while NF1 awareness was less than 3%. Had I needed ironclad results, I could have taken further steps, but a sample of 500 was ample for my purposes. Best of all, I had the responses within 30 minutes.
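For a rough sense of why 500 respondents was ample for a directional read, the standard margin-of-error formula for a sample proportion is enough; here is a quick check (standard 95% convention, z = 1.96):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 500
print(f"Worst case (p = 0.50): +/- {margin_of_error(0.50, n):.1%}")  # ~4.4%
print(f"Observed  (p = 0.03): +/- {margin_of_error(0.03, n):.1%}")   # ~1.5%
# A 94% vs. <3% awareness gap dwarfs either margin, so 500 responses are
# plenty for a directional read, if not for publication-grade precision.
```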
Here are a few tools and where they tend to fall on the survey spectrum:
- Survey Monkey, Question Pro, and other online survey suites: If you've never used any of these, they're usually chock-full of features, inexpensive, well developed, and can do an acceptable job of performing basic summary calculations. You basically build the survey and deploy a link to each participant that allows them to log in to the stand-alone survey pages.
- Wufoo: This is a relatively new tool, but it allows you to take similar functionality and embed it into existing websites and pages. This can be useful if you seek to understand an online experience while the participant is on that page, for example.
- Amazon Mechanical Turk: Still in beta, Amazon Mechanical Turk (or mTurk) is a massive clearinghouse for what they call "human intelligence tasks," or "HITs." In essence, it is a marketplace for people to do very small tasks, such as completing surveys. This audience is worldwide, but because there are so many people participating already, you can use the included screening tools to find the type of audience and demographics you are looking for. The jobs are priced just as in any free market, in that you could post a ten-question survey job for $.05, and if it is perceived as low pay for the amount of work, people just won't complete the job. I tend to use mTurk for very early, public opinion-type testing when I need a same-day read on something, especially if I would like an international component. (A minimal posting sketch follows this list.)
- PickFu, Survata, and others: These are more conventional online panels where you can set the demographics, psychographics, and other criteria for those you would like included in your survey. In most cases, these people have already registered and been confirmed with the online panel service, so you can feel more comfortable in the knowledge that validity is usually quite good.
- Google Consumer Surveys: While many might defer to the massive pool of potential participants and the Google name, I have found that this service has one major flaw: it uses something akin to Google ad-serving logic to embed your survey as an interrupter in articles, YouTube videos, etc. For readers or viewers to access the media, they have to answer your question. Needless to say, you're getting a lot of low-interest and, frankly, angry people. Many times, people will either answer out of spite ("I just want to read this article!"), enter a string of random letters, or click whatever they need to get to the media. It's a great interface and a great idea, but the deployment mechanism has been a significant issue in regard to response quality.
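If you prefer to post mTurk work programmatically rather than through the web interface, here is a minimal sketch using boto3, the current AWS Python SDK (which post-dates mTurk's beta era); the survey URL, reward, counts, and endpoint shown are placeholder assumptions, not recommendations:

```python
import boto3  # AWS SDK for Python; one way to post HITs programmatically

# Sandbox endpoint lets you test without paying workers; swap it for the
# production endpoint when you are ready to field the survey for real.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# ExternalQuestion wraps an externally hosted survey (placeholder URL).
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/ten-question-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Ten-question opinion survey (about 3 minutes)",
    Description="Answer ten short questions about product awareness.",
    Keywords="survey, opinion, quick",
    Reward="0.05",                    # priced by the market, as noted above
    MaxAssignments=500,               # completed responses wanted
    LifetimeInSeconds=24 * 60 * 60,   # keep the HIT open for one day
    AssignmentDurationInSeconds=15 * 60,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```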
A Goal for any Research: Creating a Narrative Over Time
Especially in the case of surveys, we do not want to think of "a point in time" or a "snapshot"; we want to think of a continuous line of research that needs to tell a story. If we want to understand shifts in perception over time in a closed group, for example, we need to pay attention to details like asking the same questions in the same order to help make sure our narrative is not skewed or interrupted.
The "snapshot" frame for research tends to lead to fractured efforts without an overarching structure or goal, and online surveys especially can worsen this condition with their instant feedback.
It may seem a bit esoteric at this point, but we will work on it a bit in the Case this week, as it is important to be able to build the entire narrative and understanding of the offering or topic. Isolated, unlinked efforts can be more confusing or distracting than they are worth.