Graziano & Raulin
Research Methods (9th edition)

Chapter 13 Summary
A Second Look at Field Research:
Field Experiments, Program Evaluation,
and Survey Research

Conducting Field Research

Field research includes low-constraint naturalistic research, which was discussed earlier in this text, and experiments and other high-constraint research conducted in natural settings, which are discussed here.

Reasons for Doing Field Research

There are three major reasons for conducting experiments in field settings: (1) to test the external validity of causal conclusions developed in the laboratory, (2) to determine the effects of events that occur in the field, and (3) to improve generalization across settings.

Difficulties in Field Research

We need to conduct experiments in field settings in order to draw causal inferences there, but doing so can be difficult. The essential question is: "How can we answer causal questions in natural settings when we cannot use many of the usual manipulations and controls of the laboratory?"

Flexibility in Research

Flexibility and creativity are important in research. Researchers attend to their hunches, flashes of insight, and flights of creativity, and they stay alert to interesting, unanticipated events. All of these can open new directions to explore, especially in field research. For the student, however, it is better to learn the fundamentals before taking such free-flight leaps.

Quasi-Experimental Designs

In some situations, we cannot meet all of the demands of an experiment but still want to draw causal inferences. In those situations, we can use quasi-experimental designs, which allow us to draw causal inferences, although with less confidence. They still give us more useful information than not experimenting at all. In quasi-experiments:

  1. We state causal hypotheses.
  2. We include at least two levels of the independent variable, but cannot always directly manipulate the independent variable.
  3. We usually cannot assign participants to groups, but must accept already existing groups.
  4. We include specific procedures for testing the hypotheses.
  5. We include some controls for threats to validity.

Nonequivalent Control-Group Design

Often in field research we must compare already existing groups. Because comparing existing groups does not allow random assignment or matching, the groups may not be equivalent at the start of the study. However, the groups might be similar on most characteristics, thus allowing us to draw causal conclusions, albeit with less confidence.

There are two major problems in using nonequivalent groups: (1) the groups may differ on the dependent measures at the start of the study, and (2) there may be other differences between the groups that have not been controlled by random assignment. To address the first issue, a basic strategy is to measure the dependent variable both before and after the manipulation; the difference between pre-manipulation and post-manipulation scores is then taken as an indication of the effect of the independent variable. To address the second issue, we carefully examine the procedure and try to rule out confounding factors.
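
As an illustration of the difference-score logic, a minimal Python sketch follows. The group labels and scores are hypothetical, and a real analysis would apply an appropriate statistical test rather than simply comparing mean changes.

    # Difference scores in a nonequivalent control-group design (hypothetical data).
    treatment_pre   = [12, 15, 11, 14, 13]   # pre-manipulation scores, treatment group
    treatment_post  = [18, 20, 16, 19, 17]   # post-manipulation scores, treatment group
    comparison_pre  = [13, 14, 12, 15, 13]   # pre-manipulation scores, comparison group
    comparison_post = [14, 15, 12, 16, 14]   # post-manipulation scores, comparison group

    def mean_change(pre, post):
        """Average post-minus-pre difference score for one group."""
        return sum(after - before for before, after in zip(pre, post)) / len(pre)

    # A larger mean change in the treatment group than in the comparison group
    # is taken, cautiously, as an indication of an effect of the independent variable.
    print(f"Mean change, treatment group:  {mean_change(treatment_pre, treatment_post):.2f}")
    print(f"Mean change, comparison group: {mean_change(comparison_pre, comparison_post):.2f}")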

Interrupted Time-Series Design

In interrupted time-series designs, a single group of participants is measured several times both before and after some event or manipulation. Time-series designs have two potential confounding factors: history and instrumentation. The interrupted time-series design can be improved by adding one or more comparison groups.
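
To make the logic concrete, here is a minimal Python sketch of a single-group interrupted time-series comparison; the observations and the intervention point are hypothetical.

    # Interrupted time-series design: repeated measures before and after an event
    # (hypothetical weekly observations).
    observations = [22, 24, 23, 25, 24, 23,   # measurements before the event
                    30, 31, 29, 32, 31, 30]   # measurements after the event
    intervention_index = 6                    # the event occurs before observation 7

    pre  = observations[:intervention_index]
    post = observations[intervention_index:]

    pre_mean  = sum(pre) / len(pre)
    post_mean = sum(post) / len(post)

    # An abrupt, sustained shift at the intervention point suggests an effect,
    # although history and instrumentation remain possible confounding factors.
    print(f"Mean level before the event: {pre_mean:.1f}")
    print(f"Mean level after the event:  {post_mean:.1f}")
    print(f"Shift at the intervention:   {post_mean - pre_mean:.1f}")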

Program Evaluation

The task in program evaluation is to evaluate how successfully a program is meeting its goals. This has obvious practical value in social services and health matters.

Practical Problems in Program Evaluation Research

There are many serious practical constraints on program evaluation. These include ethical as well as technical issues.

Issues of Control

  • Selecting Appropriate Dependent Measures. Several dependent measures are usually required in program evaluation.
  • Minimizing Bias in Dependent Measures. Minimizing bias in the dependent measures is important in program evaluation, where the possibility of bias is high.
  • Control Through Research Design in Program Evaluation. As with any research project, the major controls in program evaluation are incorporated into the research design.

Typical Program Evaluation Designs

  • Randomized Control-Group Design. This is an ideal design, but difficult to carry out in a field setting. For example, ethical considerations will often preclude random assignment.
  • Nonequivalent Control-Group Design. This is the best alternative if a randomized control-group design is not possible. We can often select a naturally occurring group as a control. Although this is not a true experiment, it does provide some control.
  • Single-Group, Time-Series Design. If a control group is not possible, the best alternative strategy is some form of time-series design.
  • Pretest-Posttest Design. This is a weak program evaluation design and is not recommended.

Surveys

Surveys gather information by asking participants about their experiences, attitudes, or knowledge. Survey research is not a single research design but, rather, uses several basic procedures to obtain information from people in their natural environments. 

Types of Surveys

Status surveys describe the current status of some population characteristics. An example is a status survey to determine what proportion of current voters are Republicans, Democrats, or Independents.

Survey research is more complex, seeking not only the status of characteristics but also relationships among variables.

Steps in Survey Research

Listed below are several steps in the survey research process:

  1. Determine what area of information is to be sought.

  2. Define the population to be studied.

  3. Decide how the survey is to be administered.

  4. Construct the first draft of the survey instrument; edit and refine the draft.

  5. Pretest survey with a subsample; refine it further.

  6. Develop a sampling frame and draw a representative sample.

  7. Administer the final form of the instrument to the sample.

  8. Analyze, interpret, and communicate the results.

Types of Survey Instruments

The instrument can be a questionnaire or an interview schedule; questionnaires may be mailed to respondents or administered to groups. Surveys begin by explaining their purpose; the questions then fall into two main categories: demographic items (e.g., the respondent’s age, gender, occupation, and education) and content items (the respondent’s opinions, attitudes, knowledge, and behavior).

Developing the Survey Instrument

The survey researcher should identify the general area to be surveyed, determine what questions are to be answered, construct the survey instrument, and determine the procedures for its administration. The items must cover the area of information being studied and be written in language appropriate for the intended respondents. Items may use open-ended, multiple-choice, or Likert-scale formats.

Sampling Participants

The population is the larger group about which we want to gain information. Obtaining an adequate sample from that population is one of the most important factors in conducting surveys.

  • Sampling considerations. The researcher must specify the sampling procedures. To generalize from the survey sample to the population of interest, we must draw a sample that adequately represents the population. 
  • Sampling procedures. Sampling procedures fall into two major categories: nonprobability sampling and probability sampling. An example of nonprobability sampling is interviewing the first 50 people met on the street; newspaper surveys are often carried out this way. The obvious weakness of nonprobability sampling is that the participants might not be representative of the population. In contrast, probability sampling produces more representative samples and may be carried out using either simple random sampling or stratified random sampling procedures (see the sketch after this list).
  • Sample size and confidence intervals. The size of the sample needed to adequately represent a population depends on the degree of homogeneity in the population. In general, the more heterogeneous the population, the larger the sample needed. 
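
The sketch below illustrates the difference between simple random sampling and stratified random sampling in Python; the sampling frame, strata, and sample size are hypothetical.

    import random

    random.seed(1)  # fixed seed so the illustration is reproducible

    # Hypothetical sampling frame: (person_id, stratum) pairs.
    frame = [(i, "urban" if i % 3 else "rural") for i in range(1, 1001)]

    # Simple random sampling: every member of the frame has an equal chance of selection.
    simple_sample = random.sample(frame, 100)

    # Stratified random sampling: sample within each stratum in proportion
    # to that stratum's share of the population.
    def stratified_sample(frame, total_n):
        strata = {}
        for person in frame:
            strata.setdefault(person[1], []).append(person)
        sample = []
        for members in strata.values():
            n = round(total_n * len(members) / len(frame))
            sample.extend(random.sample(members, n))
        return sample

    stratified = stratified_sample(frame, 100)
    print(len(simple_sample), len(stratified))  # both draw roughly 100 respondents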

Survey Research Design 

The researcher must also determine the research plan or design. Two basic designs are used in survey research.

  • Cross-sectional design. A cross-sectional design involves administering the survey at one time to a single sample and measuring the characteristics as they exist at that point in time (i.e., a cross section). 
  • Longitudinal design. The longitudinal or panel design is a within-subjects survey research design in which the same group or panel of participants is surveyed successively at different times.

Most of the research designs discussed in this text have been cross-sectional designs. Some of the designs discussed in Chapter 11 and many of the designs discussed in this chapter represent longitudinal research. In cross-sectional research, each participant is measured at only one point in time, while in longitudinal research, each participant is measured at more than one point in time.
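
A minimal sketch of how the resulting data differ in the two designs follows; the participant IDs, survey waves, and scores are hypothetical.

    # Cross-sectional data: each participant measured at a single point in time.
    cross_sectional = [
        {"id": 1, "score": 7},
        {"id": 2, "score": 5},
        {"id": 3, "score": 6},
    ]

    # Longitudinal (panel) data: the same participants measured at each survey wave.
    longitudinal = [
        {"id": 1, "wave": 1, "score": 7}, {"id": 1, "wave": 2, "score": 8},
        {"id": 2, "wave": 1, "score": 5}, {"id": 2, "wave": 2, "score": 6},
        {"id": 3, "wave": 1, "score": 6}, {"id": 3, "wave": 2, "score": 6},
    ]

    # Only the longitudinal layout lets us compute change within each participant.
    waves = {}
    for row in longitudinal:
        waves.setdefault(row["id"], {})[row["wave"]] = row["score"]
    change_by_participant = {pid: scores[2] - scores[1] for pid, scores in waves.items()}
    print(change_by_participant)  # {1: 1, 2: 1, 3: 0}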

Ethical Principles

One must be especially sensitive to ethical issues in program evaluation research. Programs are designed to meet people's needs, and it is critical that people in the programs not feel obligated to be involved in the evaluation. If they do, their participation is coerced.