Graziano & Raulin
Research Methods (9th edition)

Chapter 10 Summary
Single-Variable, Independent-Groups Designs

Experiments attempt to answer causal questions. Careful planning of the experimental design can build in the controls necessary for confidence in the conclusions that we draw from the results.

Variance

Variation is necessary in order to carry out experiments. However, we must be cautious about unwanted or extraneous variation because it can threaten the validity of an experiment. Experimental design answers questions by controlling many sources of extraneous variation.

The most basic measure of variation is the variance. Experimental design has two basic purposes: (1) to provide answers to questions by testing causal hypotheses and (2) to control variance. How these are accomplished is the focus of this chapter.
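To make this concrete, the sample variance is the sum of squared deviations from the mean, divided by n − 1. Here is a minimal Python sketch (the scores are hypothetical, not from the text):

```python
# Minimal sketch: computing the sample variance of a small set of scores,
# both from the definition and with Python's standard library.
from statistics import mean, variance

scores = [4, 7, 6, 3, 5]  # hypothetical scores on a dependent measure

m = mean(scores)                        # 5
ss = sum((x - m) ** 2 for x in scores)  # sum of squared deviations (SS)
var_manual = ss / (len(scores) - 1)     # sample variance: SS / (n - 1)

print(var_manual)        # 2.5
print(variance(scores))  # 2.5 -- the library function agrees
```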

Sources of Variance

  • Systematic Between-Groups Variance. In an experiment in which we are testing for differences between groups, we seek a high between-groups variance. If, following the manipulation of the independent variable, the groups are essentially the same on the dependent measure, then we have no evidence for an effect of the independent variable. A high between-groups variance is necessary in order to support the causal hypothesis. However, the between-groups variance is a function of both experimental effects and confounding variables. If there is any possibility that the group differences are due to extraneous variables, then we cannot draw a causal inference. It is for this reason that we must anticipate possible confounding factors and plan controls to reduce the systematic between-groups variance that is due to extraneous variables. Thus, between-groups variance has two sources: (1) the influence of the manipulated independent variables (experimental variance), and (2) the influence of extraneous, uncontrolled variables (extraneous variance). In experiments we should seek to maximize the experimental variance and control the extraneous variance.
  • Nonsystematic Within-Groups Variance. The term "error variance" denotes nonsystematic within-groups variability: variation among individual participants within a group that is due to chance factors. While systematic variance reflects influences on each group as a whole, error variance is due to random factors, such as individual differences between participants, experimenter errors, and equipment variations, that affect only some participants in a group. Thus, within-group random errors do not affect the mean of the group as a whole, but they do affect the variance.

In summary, the basic forms of variance are:

  1. Systematic between-groups variance, which includes:
    Experimental variance (due to independent variables),
    Extraneous variance (due to confounding variables); and
  2. Nonsystematic within-groups error variance (due to chance factors).

The relationship of the between- and the within-groups variances is very important in analyzing data.
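To make the distinction concrete, the sketch below (with hypothetical scores for three groups) partitions the total sum of squares into a between-groups component and a within-groups (error) component:

```python
# Hypothetical sketch: partitioning total variability into between-groups
# and within-groups (error) components for three independent groups.
from statistics import mean

groups = {
    "control":   [3, 4, 5, 4],
    "low dose":  [5, 6, 7, 6],
    "high dose": [7, 8, 9, 8],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = mean(all_scores)

# Between-groups SS: how far each group mean falls from the grand mean.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())

# Within-groups (error) SS: how far each score falls from its own group mean.
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)

print(ss_between, ss_within)  # 32 and 6; together they equal the total SS of 38
```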

Controlling Variance in Research

Each experimental study is designed to maximize experimental variance, control extraneous variance, and minimize error variance.

  • Maximizing Experimental Variance. It is necessary to design and carry out the experiment so that the levels of the independent variable are clearly different from each other. A manipulation check is often used to evaluate whether the manipulation actually had its intended effect on the participants.
  • Controlling Extraneous Variance. In general, the experimental and control groups must be as similar as possible prior to the manipulation and must be treated in exactly the same way except for the independent variable manipulation. Listed below are several procedures for controlling extraneous variance.
  1. Make sure the independent variable manipulation is the only difference between the conditions.

  2. The best general method for controlling extraneous variance is to randomly assign participants to conditions.

  3. A potentially confounding factor such as age or sex can be eliminated as a factor by selecting participants who are as homogeneous as possible on that variable.

  4. A potentially confounding variable can be controlled by building it into the experiment as an additional independent variable.

  5. Extraneous variance can be controlled by matching participants or by using a within-subjects design.

  • Minimizing Error Variance. Measurement error can be minimized by maintaining carefully controlled conditions of measurement and using measuring instruments that are reliable. To reduce error variance due to individual differences, a within-subjects design can be used.

Control in experimentation refers to control of variance. The general control procedures discussed in Chapter 9 dealt with the control of error variance; the other control procedures help to control both error variance and extraneous variance. The most powerful control for extraneous variance is a carefully developed experimental design in which participants are randomly assigned to conditions.
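As a minimal illustration (participant labels and group sizes hypothetical), random assignment can be carried out by shuffling the participant pool and dealing it into conditions:

```python
# Hypothetical sketch of random assignment: shuffle the participant pool,
# then deal participants into equal-sized conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
conditions = ["experimental", "control"]

random.shuffle(participants)  # chance alone now determines the ordering
group_size = len(participants) // len(conditions)

assignment = {
    cond: participants[i * group_size:(i + 1) * group_size]
    for i, cond in enumerate(conditions)
}

for cond, members in assignment.items():
    print(cond, sorted(members))
```

Because every participant has an equal chance of landing in either condition, preexisting differences tend to be distributed evenly across the groups.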

Nonexperimental Approaches

The following sections describe several nonexperimental designs.

Ex Post Facto Studies

"After the fact" studies are commonly used, but they are not experiments and have serious limitations. They involve observing the current situation and relating those observations to previous events. The major weakness of ex post facto procedures is that no independent variable is manipulated; thus, controls to guard against confounding cannot be applied and, therefore, the researcher cannot eliminate rival hypotheses. These studies can help generate hypotheses but cannot test causal hypotheses.

Single-Group, Posttest-Only Design

In the single-group, posttest-only design, a variable is manipulated with a single group of participants, and the dependent variable is then measured. This design is weak because it does not control for the confounding variables of history, maturation, or regression to the mean.

Single-Group, Pretest-Posttest Design

The single-group, pretest-posttest design is also a weak design. In this design, a single group of participants is observed, a manipulation is carried out, and the same group is observed again. The pre-post difference on the dependent measure is taken to indicate the effects of the independent variable manipulation. This design fails to control for maturation, history, and regression to the mean.

Pretest-Posttest, Natural Control-Group Design

A good control to add to the above designs is a no-treatment control group. In the natural control-group design, participants are not randomly assigned to conditions. Instead, naturally occurring groups of participants are used, one of which is designated as the experimental group. This controls for some confounding, but the lack of random assignment means that there is no assurance that the groups are equivalent at the beginning of the study. Thus, potential confounding exists in this design.

Experimental Designs

Two critical factors that distinguish experiments from other designs are the addition of control groups and random assignment to groups. Although many variations of experimental designs are possible, four basic designs are used to test a single independent variable using independent groups of participants. Three of the four are discussed in detail, and the fourth is introduced here.

Randomized, Posttest-Only, Control-Group Design

In the randomized, posttest-only, control-group design, participants are randomly assigned to experimental and control groups. The experimental group receives the independent variable manipulation, and the control group does not. The critical comparison is between the two levels of the independent variable on the dependent measure at the posttest. 

External validity is protected by random selection or careful ad hoc definition of participants. Threats to internal validity from regression to the mean and mortality are reduced by random assignment; threats from instrumentation, history, and maturation are reduced by inclusion of the no-treatment control group. Random assignment helps to ensure statistical equivalence of the groups at the start of the study.

Randomized, Pretest-Posttest, Control-Group Design

In the randomized, pretest-posttest, control-group design, participants are randomly assigned to experimental and control conditions and are pretested on the dependent variable. The experimental group is then administered the treatment, and both groups are posttested on the dependent variable. 

By adding the pretest, we provide further assurance that the two groups are equivalent at the beginning of the study. Random assignment also helps to assure the two groups' equivalence. While adding a pretest has advantages, it also has some disadvantages, which will be discussed shortly.

Multilevel, Completely Randomized, Between-Subjects Design

The multilevel, completely randomized, between-subjects design is a simple extension of the previously discussed designs. Participants are randomly assigned to three or more conditions. Pretests might or might not be included in this design depending on the question asked by the experimenter. Since this is an extension of earlier designs, it controls for the same confounding factors as the simpler two-group designs.

Pretest-Manipulation Interaction: A Potential Problem

The addition of a pretest improves control, but it creates a new problem: the potential interaction of the pretest and the experimental manipulation. To control for this interaction, the Solomon four-group design combines the randomized, pretest-posttest, control-group design and the randomized, posttest-only, control-group design. The Solomon four-group design is a powerful design, but it requires the resources of two experiments. Therefore, it is not recommended for routine use.

Analysis of Variance

When more than two groups are used and score data are produced, the analysis of variance (ANOVA) is appropriate. The major comparison involves the between-groups and within-groups variance. The between-groups and within-groups variance estimates are called mean squares. The statistical significance of ANOVA is based on the F-test, which is the ratio of the mean square between-groups to the mean square within-groups. The results of the computations are summarized in an ANOVA summary table.
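Continuing with the hypothetical scores used earlier, the mean squares and the F-ratio can be computed directly from their definitions and checked against scipy.stats.f_oneway (the data are made up for illustration):

```python
# Hypothetical one-way ANOVA on three independent groups, computed from
# the definitions and checked against scipy's built-in F-test.
from statistics import mean
from scipy import stats

groups = [[3, 4, 5, 4], [5, 6, 7, 6], [7, 8, 9, 8]]

k = len(groups)                  # number of groups
n = sum(len(g) for g in groups)  # total number of scores
grand_mean = mean([x for g in groups for x in g])

ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)  # mean square between groups (df = k - 1)
ms_within = ss_within / (n - k)    # mean square within groups (df = N - k)

print(ms_between / ms_within)      # F = 24.0 for these scores
print(stats.f_oneway(*groups))     # same F statistic, with its p value
```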

Specific Means Comparisons in ANOVA

The F-test tells us only whether there is a significant difference between the groups, but does not tell us which groups are significantly different from which others. When we have three or more groups, we must probe to determine where the differences are. This is done by comparing specific means for significant differences. 

These specific means comparisons are best carried out as a planned part of the research (i.e., a planned comparison or an a priori comparison) in which the experimenter makes predictions at the beginning of the experiment about which groups will differ and in what directions they will differ. At times, a priori comparisons cannot be made. If a significant F is found in such a situation, we compare the pattern of means using a post hoc comparison (also called an a posteriori or incidental comparison).
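One widely used post hoc procedure is Tukey's HSD test, which compares every pair of group means while adjusting for the number of comparisons. Below is a sketch using the statsmodels library (the data are hypothetical):

```python
# Hypothetical post hoc probe with Tukey's HSD test via statsmodels.
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = [3, 4, 5, 4, 5, 6, 7, 6, 7, 8, 9, 8]
labels = ["control"] * 4 + ["low dose"] * 4 + ["high dose"] * 4

# Compares each pair of group means, adjusting for multiple comparisons.
result = pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05)
print(result)  # table of pairwise mean differences and reject decisions
```

Tukey's HSD is only one option; the choice of post hoc test depends on which comparisons are of interest.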

Graphing the Data

It is useful to graph the group means from an ANOVA using either a line graph or a bar graph. It is customary to include a standard error bar on these graphs to indicate the amount of variability.
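A bar graph of the group means with standard error bars might be produced as follows (matplotlib, with hypothetical data):

```python
# Hypothetical bar graph of group means with standard error bars.
from statistics import mean, stdev
import matplotlib.pyplot as plt

groups = {
    "control":   [3, 4, 5, 4],
    "low dose":  [5, 6, 7, 6],
    "high dose": [7, 8, 9, 8],
}

names = list(groups)
means = [mean(g) for g in groups.values()]
sems = [stdev(g) / len(g) ** 0.5 for g in groups.values()]  # standard error of the mean

plt.bar(names, means, yerr=sems, capsize=5)  # error bars show +/- 1 SEM
plt.ylabel("Mean score on the dependent measure")
plt.show()
```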

Ethical Principles

Random assignment is critical in experimental research, but it can raise difficult ethical issues. For example, is it ethical to deny treatment to control participants? If the treatment proves to be effective (and some treatments will not be), control participants are normally offered the treatment after the study is completed.