Survey research was discussed in Chapter 13, and Table 13.2 listed the major steps in survey research. In this section, we will focus on the fourth and fifth steps in that sequence: constructing the survey instrument and pretesting it to make sure that it works.
Survey instruments should be constructed with care in order to (1) obtain the information sought and (2) avoid bias and maximize the reliability and validity of the instrument. The process requires several drafts of the instrument, with editing, testing, and refinements to produce a final version.
Our first step is to decide what areas of information are to be surveyed and what population is to be sampled. Once these are determined, we can decide on the form of the survey and the nature and language level of the items. For example, a survey of college graduates on their understanding of contemporary economic issues will be very different in form and language from a survey of sixth graders on the same topic.
We must also be clear on how much emphasis is to be placed on obtaining descriptive information and how much on hypothesis testing. We make a distinction between status surveys and survey research. Status surveys focus on obtaining descriptive information on the participants. Survey research tests hypotheses about relationships among variables. For example, we might hypothesize that mothers are more willing than fathers to spank their misbehaving children. We might also hypothesize that spanking children is inversely related to socioeconomic status. To test these hypotheses, we ask a sample of mothers and fathers questions about their socioeconomic status and about their willingness to use spanking. Of course, what we obtain will be what they say they are willing to do regarding spanking and their own report of their socioeconomic status.
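To make the distinction concrete, the following brief Python sketch uses entirely hypothetical data, invented only for this illustration, to show the difference between describing responses and using the same responses to examine a hypothesized relationship, here the comparison of mothers' and fathers' reported willingness to spank.

```python
# A minimal sketch with hypothetical data (not from any actual study) showing
# how survey research goes beyond description to compare groups.
from statistics import mean

# Each record: (parent role, reported willingness to spank on a 1-5 scale).
responses = [
    ("mother", 4), ("mother", 3), ("mother", 5), ("mother", 4),
    ("father", 2), ("father", 3), ("father", 2), ("father", 4),
]

mothers = [score for role, score in responses if role == "mother"]
fathers = [score for role, score in responses if role == "father"]

# A status survey would simply report each group's figures; survey research
# compares the groups to test the hypothesized relationship.
print(f"Mean willingness, mothers: {mean(mothers):.2f}")
print(f"Mean willingness, fathers: {mean(fathers):.2f}")
```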
Survey items can be written in different formats. For example, we can ask open-ended questions, in which the nature of the answer is up to the participant. An example would be "What are your feelings about using spanking to discipline children?" In such a question, the participant can give a broad range of answers.
Such open-ended questions are useful for getting general information, but the answers are difficult to code for later systematic analysis because there is no guarantee that the participant will address the specific issues that you want to evaluate. For example, if people say that they think spanking is OK under some circumstances, it is not clear whether (1) they would use spanking themselves, (2) they approve of spanking as a routine policy, or (3) their feelings about spanking are based on its effectiveness or on some philosophical position. If those are topics that you want to learn about in your survey, you either need to make the open-ended questions more explicit or you should consider using a more structured survey. With structured survey items, participants are asked to select, from a finite list of choices, the one that best represents their opinion or situation.
In general, the more emphasis that a survey places on hypothesis testing, the more structured the survey items must be. Writing structured items is far more difficult than writing open-ended questions. Considerable thought must be given, not only to how the question is to be asked, but also to the range of responses. Have all important possible answers been included? We will discuss later the various ways in which structured items can be constructed.
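As a rough illustration, the sketch below (the item wording, response options, and responses are hypothetical, invented only for this example) shows why structured items are easier to code: because every participant chooses from the same finite list, the responses can be tallied directly, whereas open-ended answers would first have to be read and categorized by hand.

```python
# A minimal sketch of a structured item with a finite response set.
# All wording and responses are hypothetical illustrations.
from collections import Counter

item = {
    "text": "How acceptable is spanking as a way to discipline children?",
    "options": [
        "Never acceptable",
        "Acceptable only in rare circumstances",
        "Acceptable as a routine form of discipline",
    ],
}

# Each participant selected exactly one option, so the answers can be
# tallied directly, with no judgment calls about how to categorize them.
responses = [
    "Never acceptable",
    "Acceptable only in rare circumstances",
    "Never acceptable",
]

counts = Counter(responses)
for option in item["options"]:
    print(f"{option}: {counts.get(option, 0)}")
```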
Most survey instruments include two types of items: demographic items and substantive content items. Demographic items provide descriptive information about the respondents, such as their age, sex, occupation, education, marital status, and so on. Demographic items are essentially status survey items. They are usually presented at either the beginning or the end of a survey instrument, and they tend to be grouped together. The researcher must determine exactly what demographic information will be needed for the purposes of the study.

Substantive content items must be selected carefully to address the questions of interest to the researcher. A survey might address a content area that has several sub-areas. For example, survey research on voters' understanding of current political issues might have items on economics, national defense, education, and social services. These four sub-areas can be presented as separate sections, or the items might be mixed together.
Items should be written clearly and in language that is appropriate for the respondents' age and education level. They must also be constructed so that they are logically consistent. For example, you would never want to ask participants to choose a single response when more than one response could logically apply. The following is an example of such a situation.
Please indicate your marital status:
___ single
___ married
___ divorced
___ widowed
___ never married
The problem with this item is that many possible situations cannot be accurately reflected by the choices given. For example, how would people who are currently married but previously widowed answer? They are both married and widowed. Someone who is divorced and has not remarried is both single and divorced, and someone who has never married is both single and never married. To avoid these problems, the choices must be specified more clearly and be constructed so that they are mutually exclusive if you want participants to select only a single choice for classification. For example, the choices could describe current status only (currently married; widowed and not remarried; divorced and not remarried; never married), so that each respondent fits exactly one category.
In addition to being clear, readable, and logically correct, each item in the survey should be unbiased. Biased items can lead the respondent to some desired answer, thus skewing the survey results. This point is best illustrated with a pair of examples.
Suppose that a government agency is attempting to demonstrate support for a particular public works project, such as a bridge. In a telephone survey, they ask, "Do you support the proposal by the bridge authority to build a new bridge?" The question seems direct enough, but hidden in it is the fact that the agency is making a specific proposal for a specific bridge, and that fact is glossed over in the question. If there is near universal agreement in the community that a new bridge is needed, most people will say yes to this question. But the vagueness of the question may lead many people to interpret it as asking whether a new bridge is needed at all, rather than whether the specific bridge proposed by the agency is the one that should be built.
If, on the other hand, another survey asked people to choose among three options (do not replace the bridge, build the bridge proposed by the agency, or build a different bridge proposed by a citizens' group), the apparently overwhelming support for the agency's plan might disappear if a majority of those asked favored the alternative plan. The original question was biased, leading respondents to a particular response that did not accurately reflect their feelings.
Unfortunately, such deliberately biased surveys are a routine part of politics in many places, giving a rather bad name to survey research. It is important to realize that such deliberate biasing is not survey research at all, but rather politics. The true survey researcher does everything possible to avoid such biased questions.
The length of the survey will vary according to the needs of the study. As a general rule, the best surveys are concise and focused. Surveys that are too long may increase the rate at which respondents refuse to begin, or to continue, participating in the survey.
Written surveys must be clear, clean, and neat. The print size and style should be such that the document is easy to read. The survey should be well organized and uncluttered. Instructions should be clearly worded and unambiguous. It is also best not to mix response formats. It would be better to have the respondent answer each item in the same manner. Having items with yes/no answers, followed by others with multiple choice answers, and still others that are open-ended will add to the respondents' burden and decrease your survey's reliability.