Like many social scientists, much of my work revolves around surveys and questionnaires. Sometimes, senior management prepares and executes these investigations very well, yet other times folks seem not to know the first thing about survey design. I believe the problem lies in the apparent simplicity of a survey. We just put a few questions on paper and distribute it, right? Isn’t recruitment much harder than design?
If only surveys were as simple as my sample image below:
Today, I describe the most common survey issues that pop up in my work. We’ll start with the simplest problems and work our way up to the tough ones!
I want to share one piece of general advice before discussing the most common issues. When you design a survey, always ensure each survey question maps back to a research topic for the survey. If a question fails to gather information about any of your research topics, remove it immediately!
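This mapping can even be kept as a simple table and checked programmatically before the survey ships. Here is a minimal sketch of that idea; the question and topic names are invented for illustration.

```python
# Sketch: map each survey question to the research topic it serves.
# Question and topic names here are made up for illustration only.
question_topic_map = {
    "Do you use your air conditioner between 5-8pm in summer?": "energy usage habits",
    "Which of the following best describes your age?": "demographics",
    "What is your favorite color?": None,  # no topic -> candidate for removal
}

def questions_to_remove(mapping):
    """Return questions that do not map back to any research topic."""
    return [question for question, topic in mapping.items() if not topic]

print(questions_to_remove(question_topic_map))
# -> ['What is your favorite color?']
```

Any question the check flags either needs a topic assigned or should be cut from the instrument.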
Non-exclusive categories
Non-exclusive categories tend to appear in the demographic section of a survey (typically near the end). Resolving this problem is simple, so let’s review an example.
Original question: “Which of the following best describes your age?”
- 18-25
- 25-35
- 35-45
- 45-55
- 55 or older
Issue: If you’re 25 years old, how can you answer this question? The categories overlap! The categories should ALWAYS be mutually exclusive.
Proposed fix: “Which of the following best describes your age?”
- 18-24
- 25-34
- 35-44
- 45-54
- 55 or older
Now it’s easy to tell which group you should choose!
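A bracket list like this can also be sanity-checked mechanically: each category should neither overlap its neighbor nor leave a gap. Below is a small illustrative sketch, assuming integer-valued brackets that are inclusive on both ends, with `None` marking the open-ended top category.

```python
# Sketch: verify that ordered numeric answer brackets are mutually
# exclusive and gap-free. Each bracket is a (low, high) pair, inclusive
# on both ends; None marks the open upper bound of the last bracket.
def check_brackets(brackets):
    """Return a list of problems found between adjacent brackets."""
    problems = []
    for (lo1, hi1), (lo2, _hi2) in zip(brackets, brackets[1:]):
        if hi1 is None or lo2 <= hi1:
            problems.append(f"overlap: {lo2} appears in two brackets")
        elif lo2 > hi1 + 1:
            problems.append(f"gap between {hi1} and {lo2}")
    return problems

overlapping = [(18, 25), (25, 35), (35, 45), (45, 55), (55, None)]
fixed = [(18, 24), (25, 34), (35, 44), (45, 54), (55, None)]

print(check_brackets(overlapping))  # reports four overlaps
print(check_brackets(fixed))        # -> []
```

The overlapping list fails at every boundary; the corrected list passes cleanly.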
Biased or leading questions
Biasing occurs when the question wording makes certain responses more or less likely, or influences how one answers questions further along in the survey.
Original question: “Do you waste energy by turning your air conditioner on during summer peak hours?”
Issue: If my research question is “do you use your air conditioner during peak hours (5-8pm in the summer),” why add the biased language? If the intent is to inform people that using energy at this time is wasteful, that message should be delivered separately, not embedded in the question itself!
Proposed fix: “Do you use your air conditioner between the hours of 5-8pm during the summer?” Now my research question aligns with the survey question!
Double-barreled questions
Double-barreled questions can sneak into our surveys when we fail to pay close attention. In some cases a change in wording can resolve them, but not always. When a question touches on multiple subjects, avoid asking how respondents feel about BOTH subjects at once.
Original question: “Do you agree or disagree: My mechanic was knowledgeable and professional.” Scale used: agree, somewhat agree, somewhat disagree, disagree.
Issue: What if the mechanic was extremely knowledgeable about your car issues, but completely unprofessional? If someone answers with anything other than “agree,” can we understand why? Did they answer “somewhat agree” because the mechanic was not polite and timely enough, or because the mechanic couldn’t answer all their questions? We cannot act on this question, so let’s break it out into two!
Proposed fix: Question one: “Do you agree or disagree: My mechanic was knowledgeable.” Question two: “Do you agree or disagree: My mechanic was professional.” This way, it’s simple to tell the items apart!
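When reviewing a long questionnaire, a crude automated pass can surface likely double-barreled items before the human review. The sketch below simply flags agreement statements that join two qualities with “and” or “or” — a heuristic only, with false positives possible, so it supplements rather than replaces a careful read.

```python
import re

# Sketch: a crude lint that flags agreement items likely to be
# double-barreled because the statement joins two qualities with
# "and" / "or". Heuristic only -- a human still reviews each flag.
def flag_double_barreled(statement):
    """Return True if the statement looks like it asks about two things."""
    return bool(re.search(r"\b(and|or)\b", statement, flags=re.IGNORECASE))

print(flag_double_barreled("My mechanic was knowledgeable and professional."))  # True
print(flag_double_barreled("My mechanic was knowledgeable."))                   # False
```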
Hypothetical questions
Avoid hypothetical questions whenever possible. Sometimes they are critical to your research, but you need to be able to identify when to use them. To be clear, a hypothetical question is a “what if” question. For example: “If you wanted to upgrade your phone, what features would be most important to you?” Hypotheticals are acceptable when a research question can only be asked in a hypothetical way. Product testing tends to be full of them because the product doesn’t exist yet, and the researchers are trying to see which features are most desirable. Otherwise, ask directly.
Original question: “If you were going to go to a family restaurant tonight, where would you go?”
Issue: My first question when reviewing a survey is, “What research question is this answering?” In this case, we are trying to find out someone’s favorite family restaurant. Why ask in a hypothetical manner at all? We can just ask directly!
Proposed fix: “What is your favorite family restaurant in your area?” – done and done.
Validity
Investigations must always produce valid results. For my work, a valid tool simply means “this tool is measuring what we intend to measure.” It sounds quite simple, I know, but sometimes things get lost in translation. For example, let’s look at a recent question I reviewed that asked about experience with a financing program.
Original question: “If you had to create a budget to buy a car within the next month, how much would you be ready to spend?”
Issue: In this case, the researchers are asking participants to predict how much money they would be comfortable spending in the future. In one of my older posts, the (not so) rational man model, I discussed why people fail to make rational choices when weighing future decisions against current ones.
Proposed fix: A different research approach is needed to evaluate purchasing behavior. In most cases, researchers can review sales that actually occurred (point-of-sale data) to see how much the typical consumer spends, then segment respondents by demographics afterward. That actually measures what you intend to measure: consumers in each category tend to spend X dollars. Some folks think you can measure anything with a survey, but sometimes we need to acknowledge its limitations and pursue other methods.
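To make the point-of-sale idea concrete, here is a minimal sketch of that segmentation step: group actual transactions by a demographic field and summarize spend per group. The records and field names are invented for illustration.

```python
from collections import defaultdict

# Sketch: summarize actual point-of-sale spending by demographic
# category instead of asking buyers to predict a future budget.
# These records and field names are invented for illustration.
sales = [
    {"age_group": "25-34", "spend": 21500},
    {"age_group": "25-34", "spend": 18500},
    {"age_group": "35-44", "spend": 30000},
]

def average_spend_by_group(records):
    """Return mean spend per demographic group from transaction records."""
    by_group = defaultdict(list)
    for record in records:
        by_group[record["age_group"]].append(record["spend"])
    return {group: sum(amounts) / len(amounts) for group, amounts in by_group.items()}

print(average_spend_by_group(sales))
# -> {'25-34': 20000.0, '35-44': 30000.0}
```

The same grouping pattern extends to medians or distributions when averages would hide skew in the spending data.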
Additional issue: I put myself and my client at risk when I gather invalid data. If I fail to catch the error prior to making my grand conclusion, the client may go out and make business decisions based on invalid information. That might result in major losses on their part, and a very upset customer that I will probably never see again. Invalidity is very costly, and very dangerous!
When it comes to surveying, this is just the tip of the iceberg. The biggest issues that cross my desk come from folks outside of the social sciences. In a previous job, an engineer co-worker put a survey together expecting that the entire survey could be ready to send to respondents the same day. After reviewing the tool, we had to go back and clean up most of the questions. Afterwards, we programmed the survey in our web tool and elicited client feedback on the changes. Taking a survey from paper to final web tool (if web surveys are your final product) can take between 8 and 24 work hours, and even more for complex designs, because of the quality-assurance reviews and skip-pattern creation that must occur. Honestly though, most errors occur because people underestimate the time and precision needed to use a survey appropriately.
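The skip-pattern work mentioned above amounts to routing: given the current question and the respondent’s answer, decide which question comes next. A minimal sketch of that logic, with invented question ids and answers, looks like this.

```python
# Sketch: skip patterns as a routing table from (question, answer) to
# the next question. Question ids and answers are invented examples.
SKIP_RULES = {
    ("Q1", "No"): "Q5",  # e.g. non-AC owners skip the AC usage questions
}
DEFAULT_ORDER = ["Q1", "Q2", "Q3", "Q4", "Q5"]

def next_question(current, answer):
    """Return the next question id, honoring any skip rule; None at the end."""
    if (current, answer) in SKIP_RULES:
        return SKIP_RULES[(current, answer)]
    idx = DEFAULT_ORDER.index(current)
    return DEFAULT_ORDER[idx + 1] if idx + 1 < len(DEFAULT_ORDER) else None

print(next_question("Q1", "No"))   # -> Q5 (skip applied)
print(next_question("Q1", "Yes"))  # -> Q2 (default order)
```

Quality assurance then means walking every answer path through this routing and confirming no respondent is sent to a question that doesn’t apply to them.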
That’s all for survey design for now! If you have any questions about surveys, or experiences to share, please leave a comment!
I will focus my next post back on research more so than research practices. Until then, thanks for reading!