The Use and Misuse of Surveys

A Guide to Painless Questionnaire Research

Karl Albrecht

The fictional detective Sherlock Holmes remarked, in one of his adventures, "It is a capital mistake to theorise in advance of the facts." This is why surveys, of many kinds, can be so important in making critical business decisions.

However, the customer survey is one of the most misunderstood and misused of all tools for gathering business information. The main reason for failing to get useful information from surveys, or, worse yet, getting misleading information, is naively believing that one understands how to do surveys. To quote another philosopher, one who actually lived, the German intellectual Goethe: "There is nothing more frightening than ignorance in action."

After all, it seems like such a straightforward thing to do. Just write up a questionnaire, send it out to your customers (or employees or students or voters or whomever), get the answers back and calculate the results, right? Not necessarily.

Explaining just the most common survey blunders would take more space than is available here, but we can target a few of the most important ones to build a perspective on the overall process. Here are four of the most basic blunders committed by would-be survey takers.

Mistake #1. Doing a Survey

In some cases a survey is one of the least effective ways of learning what you need to know. For example, if you are a service firm doing business regularly with a limited set of customers, say 30 or 40 of them, using a questionnaire is usually overkill. It makes much more sense to go to your clients individually and explore their perceptions in depth. Sending all of them the same survey every six months will quickly create survey burnout: they get tired of being bombarded with questionnaires.

Surveys are also likely to be ineffective when the measurement model is not clear. If you're not sure what you're trying to find out, it makes little sense to use a survey. The first order of business is to use a skillful discovery process such as focus group research or open-ended interviews to find out what you should be measuring. There is little point in just throwing questions at people in hopes of getting useful information. To paraphrase the old saying about data processing: gibberish in, gibberish out.

Have you ever been asked to fill out a survey while riding on an airline flight? If so, have you noticed how well-behaved the cabin crew tends to be for at least an hour or so before they hand out the questionnaire? They may be bored, slack, dull, and uninterested most of the time, but on the survey flight you'd think they were all fresh from the customer service training camp at Disneyland. Having the flight crew administer the survey is a sure way to guarantee results that are worse than useless: they are certain to be misleading.

Mistake #2. Confusing Measurement With Research

Opinion measurement and research are two different processes. In most cases a survey is a measurement tool, not a research tool, for one very important reason: it has little or no chance of discovering anything unexpected about the customer's thinking process beyond the topics it asks about.

Very commonly a firm's marketing people will dream up a set of questions, put them into a customer survey, and send it out in hopes of getting some information that might suggest a strategy for competitive advantage. The problem is that their own thinking process contaminates the information coming from the customer, who typically only responds to the specific questions offered.

More than once our firm has been asked to review "customer research" data generated this way that turned out to be virtually worthless. It is often easier to start again in these cases than to try to extract some meager sense of meaning from a misguided survey.

Mistake #3. Asking Useless Questions

Hotels very commonly use survey cards in guest rooms that are nearly useless for gathering any worthwhile customer information. They typically have very impoverished research models, i.e. the factors or attributes presented to the customer for evaluation are not well developed. They usually include a few simple items like staff courtesy, food quality in the restaurant, and the condition of the guest room. These surveys provide so little information that most hotels don't even use the results.

To compound the problem, customers typically realize the cards are worthless and unlikely to have any real impact. This probably explains why most hotel surveys get a return rate no higher than one to three percent. Customers may be thinking about many quality factors other than the standard items on the survey, but they get no chance to respond with what they really want to say.

As a frequent business traveler, I depend on hotels to get messages and faxes to me reliably, accurately, and quickly. I've never seen a factor on a hotel survey dealing with message performance, yet it's more important to me than the food in the restaurant.

Actually, the most important question of all for the hotel survey is one very few of them ever think to ask, which is "Would you be inclined to stay here again if the occasion arose?" Repurchase intention is one of the most valuable and yet most overlooked of all survey variables. Why is it so often overlooked? Because the marketing people are still product oriented in their thinking, not customer oriented.

The purpose of the survey is to understand the customer, not the product.

Mistake #4. Ineffective Questionnaire Design

People who should know better often do a remarkably shoddy job of putting together questionnaires. When United Airlines was launching its new shuttle service, "Shuttle by United," I happened to be in one of the company's local sales offices. On the counter I saw a customer survey dealing with the shuttle service, which the agents were trying to get customers to fill out.

The first question caught my eye. It said "How frequently do you or the people in your company fly in each of the following markets?" Below the question there appeared about 30 choices of what airline people call "city pairs," i.e. pairs of three-letter airport codes representing various flights you could take. This one item managed to demonstrate almost all of the key blunders one can make in writing questions.

First, it used industry trade jargon ("markets") rather than customer language, i.e. "flights" or "trips." Very few air travelers would refer to the trip between Los Angeles and San Diego as a "market." Second, very few people other than travel agents would be likely to understand all of the city-pair abbreviations used. How many people know that "ORD-LGA" stands for a flight between Chicago's O'Hare Field and New York's LaGuardia airport? Yet these were the descriptors presented to the customers on the survey.

Worse, it's a compound question. The phrase "you or the people in your company" gives the respondent a dilemma. If I fly in one of these "markets" frequently but nobody else in my company does, how shall I answer? If I don't but one other person does, how shall I answer? How about if a lot of us do? When they get the results of this question back, they won't be able to make much sense out of them.

Worse still, the question and the list of city pairs didn't cue the respondent about what kind of answer to provide. Was I supposed to put a check mark beside the markets I fly in? What about the ones I don't fly in? Leave them blank? How do I answer for the city pairs I don't understand?

The questionnaire had several other, equally incomprehensible questions like the first one. One was multiple-choice, while another was a free-form narrative. There didn't seem to be any consistent format from one question to another, which meant the results couldn't easily be tabulated or organized statistically. Each question would have to be interpreted in its own way by someone paging through all the surveys, interpreting them, and writing up a report on that item. This makes the results difficult to interpret and to explain to managers and others who will read the report.

As mentioned, these are just some of the most common research blunders. These don't even include serious errors like surveying the wrong people, not getting a statistically valid response, and not knowing whether the people who answer the survey are a representative cross-section of the target population.

Other mistakes include putting too many questions on one survey, which actually means trying to do two or more research projects at once, and using 10-point multiple-choice scales for opinion surveys, when respondents can't distinguish that many differences on the scale.

Surveying is rapidly gaining in popularity as a management tool. It's time we brought some skill and discipline to the process, stopped wasting time and money, and stopped confusing ourselves and other people. No one should be allowed to send out surveys to customers, clients, prospects, employees, or anyone else whose views are important to the success of the organization, without having at least some basic training in survey methods. It doesn't take a statistical genius, but it does take some practical knowledge and common sense.

Surveys can be tremendously valuable, but only if they measure something worth measuring. Here are some practical guidelines for designing questionnaires that will get you the data you need and help you achieve your business objectives.

Rule #1: Keep the end user of the data in mind when designing your questionnaire.

The person answering the survey sees it only once, but the people who read the results will constantly benefit from (or be victimized by) the design thinking that goes into the questionnaire. If it is unnecessarily long, repetitive, or poorly sequenced, or if its questions and multiple-choice scales are poorly worded, those defects will distract your clients as they try to interpret the reports based on the results. Design the questionnaire for the end user of the data, while keeping the respondent in mind.

Rule #2: Keep your survey questionnaire as short as possible.

Some surveys should be no longer than five or ten questions, especially when the respondents are busy, in a hurry, or otherwise not highly motivated to answer, such as in an airport or shopping mall. For other surveys, 20-25 questions may be the limit. Very seldom should you use more than 50-60 questions, unless you are confident the respondents will take the time to answer truthfully and thoughtfully, and it is rarely advisable to use more than 100. If your questionnaire gets very lengthy, you may be trying to accomplish too much with one survey project; you may actually be trying to do two projects, or even more, in one survey.

Rule #3: Don't let a committee write the questions.

If you're working with a group or task force, have one literate person compose a draft and let the others comment. Know when to stop nit-picking and get on with it. Get agreement on the objective of each question, but not necessarily on the exact wording. When writing the actual questions, phrase them skillfully to make sure you get reliable answers: if your respondents misunderstand your questions, you will misunderstand their answers. Here are some guidelines to keep in mind as you compose your survey questions.

  • Keep each question as short as possible.
  • Use simple, concrete terminology. Avoid terms the reader might not know. You'll be amazed at the number of ways people can misread or misinterpret questions.
  • Ask only one thing with each question. Avoid "compound" questions, such as: "How do you like the quality and selection of our merchandise?" The customer may think your quality is high but your selection is poor, or vice versa; he or she won't know how to respond. In some cases you may want to combine two closely related factors, but be careful not to combine unrelated factors that will confuse the respondent.
  • Use a simple and consistent pattern of presentation in the wording of your questions.
  • Use "you" when possible; make it personal. You want the respondent to answer from his or her own point of view. Or, you might want to phrase your questions in the form of "I-statements," e.g. "I have opportunities to get ahead in this organization."
  • Avoid "loaded" questions that imply certain positive or negative evaluations are appropriate. Don't "shop" for answers.
  • Minimize mental gymnastics in answering the questions; don't make the respondent do calculations or work out logical conclusions.

Rule #4: Decide in advance how you'll process the answer data.

Keep the data management process in mind at every step of the project. You can hand-tally the results, but that's not recommended for large surveys; in most cases you'll need a computer. You can build your own spreadsheet, or you can use any of a number of commercial software applications, including some that are completely online.

If you don't think about the functions and limitations of the analysis software ahead of time, you might very well find yourself with a hodge-podge of question formats that can't be managed by the application you're using. Make sure your question formats are fully supported by the application. Also, examine some of the typical reports and formats provided by the application, and make sure they'll be adequate for your purposes in presenting the results.

To create your questions with a typical software application or online system, you usually have five preformatted choices: multiple-choice questions, numeric questions, list questions, ranked questions, or comment questions.

  • Use a multiple-choice question when you can offer the person a short list of pre-established answers that will tell you what you want to know, e.g. a range of opinions, male/female, education levels, or degrees of satisfaction.
  • Use numeric questions for continuous variables like age, number of years at current residence, or the number of people in the family. You can also use numbers as category identifiers if there are many different subgroups in your population.
  • Use a list question when you want the respondent to select from a group of choices, in the case where any or all of them might apply. For example, you might ask "Which of these best-selling books have you read in the past 12 months?"
  • Use a ranked question to ask respondents to evaluate a group of choices against one another, i.e. number them in order of value, importance, or priority.
  • Use a comment question when there is no way to predict the nature of the answer. Comment questions allow the person responding to express the answer in his or her own words.

For some question items, you need to think carefully about which format to use. With the question of age, for example, you can either ask for a specific number, or you can divide the range of expected ages into bands and assign each band to a multiple-choice option.

If you have any doubt about which format to use, think about how you will actually be using the information. Numeric questions allow you to make finer distinctions in the population, because they are continuous variables. Multiple-choice formats, however, offer simplicity and convenience in processing the data.
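
To make the trade-off concrete, here is a minimal sketch in Python (chosen only for illustration; the age bands and names are hypothetical, not taken from any standard). A numeric answer can always be collapsed into bands after the fact, but banded answers can never be un-collapsed.

    # Banding a numeric age answer into hypothetical multiple-choice options.
    AGE_BANDS = [
        (18, 24, "18-24"),
        (25, 34, "25-34"),
        (35, 49, "35-49"),
        (50, 64, "50-64"),
        (65, 120, "65 or older"),
    ]

    def band_age(age: int) -> str:
        """Map a numeric age answer onto a multiple-choice band."""
        for low, high, label in AGE_BANDS:
            if low <= age <= high:
                return label
        return "No answer"

    # Numeric data preserve fine distinctions (means, medians, custom bands);
    # the banded form is simpler to tabulate but discards that detail for good.
    ages = [22, 37, 37, 51, 68]
    print(sum(ages) / len(ages))        # 43.0 -- possible only with numeric data
    print([band_age(a) for a in ages])  # ['18-24', '35-49', '35-49', '50-64', '65 or older']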

Rule #5: Don't misuse multiple-choice questions.

For most opinion surveys, it is customary to use multiple-choice questions, or "scalar" questions, as the primary means for asking people about their opinions. The multiple-choice form is familiar, easy to read, easy to answer, and easy to analyze. But many survey builders misunderstand the principles of using scalar questions. It is important to use scalar questions consistently, to make the survey reports easy to read and interpret.

Decide on the number of options for your multiple-choice questions. The most common scale for multiple-choice questions in opinion surveys is the five-point Likert scale. Dr. Rensis Likert, of the University of Michigan, developed this scale many years ago for behavioral sciences research. It is widely accepted because it offers a convenient range of choices that meets the needs of most situations.

The great advantage of standardizing your multiple-choice scale is that it makes the survey report much easier to read. If you display the frequency data (how many people answered "1," how many answered "2," etc.) in column format, it's very hard for the end user to follow the meaning of questions that have varying scales. One question offers four options, one offers six options, others offer a five-point scale. Again, work backward from the end user's experience and you'll see how important readability is.
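
As a sketch of why a consistent scale pays off at report time, here is a small Python example that tabulates frequency counts in the column format described above; the question names and answer data are invented purely for illustration.

    from collections import Counter

    # Hypothetical coded answers (1-5) for three questions on the same scale.
    responses = {
        "Q1. Staff courtesy":   [5, 4, 4, 3, 5, 2, 4],
        "Q2. Room condition":   [3, 3, 4, 2, 5, 3, 4],
        "Q3. Would stay again": [5, 5, 4, 4, 5, 3, 5],
    }
    SCALE = [1, 2, 3, 4, 5]

    # One row per question, one column per scale point; the table is easy to
    # read precisely because the scale never changes from row to row.
    print("Question".ljust(24) + "".join(f"{p:>6}" for p in SCALE))
    for question, answers in responses.items():
        counts = Counter(answers)
        print(question.ljust(24) + "".join(f"{counts.get(p, 0):>6}" for p in SCALE))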

Some people like to use other kinds of multiple-choice scales, such as the seven-point scale or even a ten-point scale. Advocates of the seven-point scale claim it gives them the ability to make finer distinctions in the measurement of opinion. Advocates of the five-point scale counter that the finer distinctions are an illusion, and that the average of the scale values provides the same differentiation for measuring opinions, which are, after all, subjective variables at best.

In some cases, it is possible to "anchor" the various points on a rating scale to some verifiable criteria, such as monetary amounts, objective degree of illness, medical diagnosis, or observable behaviors. In those cases, a wider rating scale might be appropriate. Survey experts call these anchored rating scales.

Bear in mind, however, that long scales result in reports that are more difficult and tedious to read. When you are laying out your report in tabular format, i.e. with each question taking up one row and the various options arranged in columns, you may have to reduce the font size to make seven or ten columns fit across the page. Or, you might need to change the layout to "landscape" style to fit all the columns on the page. In addition, forcing the reader to study ten columns of frequency values can make it more difficult than necessary to interpret the "spread" of data, especially when the wider scale does not convey any additional precision.

Rule #6: Don't "Lead The Witness."

The Likert scale presents a person with five options, ranging from "least to most" or "most to least." There is some debate among the experts as to which sequence to present. Some people like to put the most "negative" or critical option first on the list and finish with the most positive option. Some people prefer to put the most positive option first and have the others progress "downhill." They may feel that one sequence or the other will create a mental bias in the respondent, causing him or her to select a more positive or more negative answer.

It is best to avoid such psychological hair-splitting. Most people are accustomed to associating high numerical ratings, such as 4 or 5, with positive evaluations. Using 1 for a high score and 5 for a low score can be confusing to them. Consider that the people reading your report will tend to think of a higher mean value as "better" than a lower one. When in doubt, stay with the old rule, "big number means good score."

Whichever form you choose, use it consistently throughout your questionnaire. Don't switch back and forth between using 1 as the highest and 5 as the lowest, and the reverse. That will confuse your respondents, your data entry person, and probably yourself. It will certainly confuse the people who read your survey report.

Many survey takers like to make the multiple-choice question a declarative statement with which the respondent can agree or disagree using a scale from 1 through 5. An example might be:

"Product X gives good value for the price."

  • 1 = strongly disagree
  • 2 = disagree
  • 3 = neutral; no strong feeling
  • 4 = agree
  • 5 = strongly agree
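
With a consistent 1-5 coding like the one above, every item on the survey can be scored the same way. Here is a small Python sketch of what that scoring looks like in practice; the answer data are invented for illustration.

    # Coded responses to "Product X gives good value for the price."
    LABELS = {
        1: "strongly disagree",
        2: "disagree",
        3: "neutral; no strong feeling",
        4: "agree",
        5: "strongly agree",
    }
    answers = [4, 5, 3, 4, 2, 4, 5]

    mean_score = sum(answers) / len(answers)
    print(f"mean agreement: {mean_score:.2f}")  # 3.86 on the 1-5 scale

    # Most common answer, reported in the respondent's own terms.
    modal = max(set(answers), key=answers.count)
    print(f"modal answer: {LABELS[modal]}")     # agree

Because "big number means good score," a higher mean consistently reads as a better result, which is exactly what the people reading your report will assume.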

Rule #7: Use sequence and idea flow to your advantage.

Think about the logical flow of ideas implied by your questions as the respondent reads them one at a time. Also, think about how the questions relate to overall themes as your clients read the report.

Consider placing the more personal questions, e.g. age, gender, and marital status, at the end of the survey. This has two advantages. First, people responding to the survey may feel more comfortable with these types of questions once they have provided answers to the main body of the questionnaire. Seeing these questions at the very top of the questionnaire can cause a person to feel his or her personal identity is being encroached upon.

Second, if the questionnaire has many questions, people may experience "survey fatigue," and feel less motivated to answer carefully and thoughtfully as they proceed. They may find these types of demographic questions easier to answer and more palatable at the end.

Rule #8: Test your questionnaire before you launch it.

Even though you think your questionnaire is properly designed — clear, easy to understand, and easy to fill out — there will always be people who have trouble with it. The more respondents who can't or don't answer it properly, the more your results will be contaminated, possibly in ways you can't even detect. It's amazing how many people can misunderstand or misinterpret a question that you consider perfectly clear and straightforward. Testing your questionnaire — or "piloting" it — on a small group of people who are similar to the survey population you want to reach can expose faulty assumptions, misleading or confusing language, and problems answering it that you might not have anticipated.

You can invite a small group of people, say five to seven, to a meeting, and ask them to read the questionnaire. Go through the questions one at a time and ask if they understand them and know how to answer them. Be alert for any language problems, and be prepared to fine-tune the wording based on what they tell you. Don't forget that some people will want to second-guess your thinking, and offer criticisms and "improvements" to show how smart they are. Just take the input that's relevant and useful.

If you're planning to mount the questionnaire on a website and gather the results online, consider measuring how long it takes to complete. If it takes too long, some people will drop out before they finish, and some may refuse to start because they don't want to invest that much time. Consider telling the respondent, with a message at the top of the online page, about how many minutes it will require.

Rule #9: Use the results wisely.

Decide in advance how you'll use the survey results. Who will see them, under what circumstances, and how will you help those readers understand them?

If it's an employee survey in an organization, keep in mind that the employees will naturally expect management to publish the results. Running a survey and then hiding the results can create fear, suspicion, animosity, and cynicism. To share the results of an employee survey effectively, you need to get all of the managers engaged, from top to bottom. The managers should fully understand the results and their implications to start with, and then they should make the results available to their staff members.

The first time an organization runs an employee survey, the employees may well react with suspicion: "What's this all about?" "Why are they doing this?" "Can my answers be used to identify me individually?" "What's going to happen after the results are in?" When the survey becomes an annual procedure, these natural suspicions tend to diminish.

If the end users of the survey report don't have much experience with surveys, consider getting together with them and discussing the process of interpreting the results. Advise them to expect to see a few extreme answers from disgruntled individuals; to see a few nasty and exaggerated comments; and to be prepared for possible surprises — certain questions with unusually high or low scores. Show break-outs of sub-groups, such as responses of males and females separately; ethnic categories; and management compared to non-management staff.
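
To make break-outs concrete, here is a minimal Python sketch using hypothetical employee-survey records; the attribute names and scores are invented for illustration.

    from collections import defaultdict

    # Each record carries a coded 1-5 answer plus grouping attributes.
    records = [
        {"score": 4, "gender": "female", "management": False},
        {"score": 2, "gender": "male",   "management": False},
        {"score": 5, "gender": "female", "management": True},
        {"score": 3, "gender": "male",   "management": True},
        {"score": 4, "gender": "female", "management": False},
    ]

    def break_out(records, attribute):
        """Mean score for each value of the given grouping attribute."""
        groups = defaultdict(list)
        for r in records:
            groups[r[attribute]].append(r["score"])
        return {value: sum(scores) / len(scores) for value, scores in groups.items()}

    print(break_out(records, "gender"))      # {'female': 4.33..., 'male': 2.5}
    print(break_out(records, "management"))  # {False: 3.33..., True: 4.0}

The same function serves any grouping attribute you collect, which is one more argument for settling the demographic questions, and the data format, before the survey goes out.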

Rule #10: Learn from every survey project.

Once your survey project is complete, review the whole process and look for lessons. How might we have done it differently, or better? Did we do the right things at each stage of the process? What have we learned about designing surveys? What have we learned about running surveys online? What have we learned about interpreting the results? With careful thought at each step along the way, you might make a few mistakes, but they'll probably be small and not "terminal." With a few survey projects under your belt, you'll quickly master both the art and science of sensing people.

 

About the Author:

Dr. Karl Albrecht is an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy.

He has pioneered a number of important new concepts in the business world. For example, he is widely regarded as the father of the American "customer revolution" and service management. His book Service America: Doing Business in the New Economy (co-authored with the late Ron Zemke) sold over 500,000 copies and has been translated into seven languages.

He is also a leading authority on cognitive styles and the development of advanced problem-solving skills. His recent books Social Intelligence: The New Science of Success and Practical Intelligence: The Art and Science of Common Sense, together with his Social Intelligence Profile and his Mindex Thinking Style Profile, are widely used in business and education. The Mensa society honored him with its lifetime achievement award for significant contributions by a member to the understanding of intelligence.

Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.