Graziano & Raulin
Research Methods (9th edition)

Scoring and Analyzing the Survey

Scoring surveys can become very complicated, but here we will deal only with the most basic issues.

Structured survey items are scored primarily as categories (e.g., yes/no) or as rating scales with ordered numeric values. Items commonly include 5 to 7 scale steps, so that each item can be scored from 1 to 5 or 1 to 7.

It can sometimes be confusing trying to remember what the numbers mean during the analysis phase. Therefore, it is important to carefully label each item when doing the statistical analysis. To help the researcher remember what the numbers mean, it is traditional to label a scale so that large numbers mean more of the trait identified by the scale name. For example, if higher scores represent more knowledge about economics, the tradition is to label the item economics knowledge. If higher scores represented less knowledge about economics, one would normally label the scale economics ignorance.

Creating Scales by Combining Items

It is common in surveys to construct scales by summing the responses across a set of items that cover the same topic. Consider, for example, a 10-item survey of students' views of the campus security department. Assume that each item is on a 7-point scale, and each scale proceeds from low to high values, corresponding to low (negative) and high (positive) evaluation. Each item would be scored 1, 2, 3, 4, 5, 6, or 7. These individual item scores can then be summed, for a total score on the whole survey. The lowest possible total score on the 10-item survey, and the most negative evaluation, would be 10 (for a respondent who scores 1 on each item). The highest possible total score, and the most positive evaluation, would be 70 (for a respondent who scores 7 on each item).
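If you score surveys with a computer, the summing step might look something like the following Python sketch. The respondent data and the keys item_1 through item_10 are made up for illustration; they are not part of any survey described here.

```python
# Minimal sketch: summing 10 items (each scored 1-7) into a total score.
# The respondents and the keys item_1 ... item_10 are hypothetical.
respondents = [
    {f"item_{i}": 7 for i in range(1, 11)},  # most positive possible responses
    {f"item_{i}": 1 for i in range(1, 11)},  # most negative possible responses
]

for person in respondents:
    total = sum(person[f"item_{i}"] for i in range(1, 11))
    print(total)  # prints 70 for the first respondent, 10 for the second
```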

Increasing the number of items in a measure will increase the internal consistency reliability of the measure. It is important, however, to sum the items carefully. For example, a low score on some items may mean a more favorable opinion, whereas a high score on other items represents a more favorable opinion. In such a case, we would say that some of the items are reverse-keyed. Without going into the details, it is a good idea to reverse-key about half of the items that make up a given scale to protect against a particular response set bias known as acquiescence (the tendency to agree with statements no matter what they actually say). But reverse keying requires careful scoring by the researcher so that the scales created make conceptual sense. This is best explained with an example of a simple scale made up of two survey items, as shown below.

How would you rate the current president of the university?
(Smart) 1 --- 2 --- 3 --- 4 --- 5 --- 6 --- 7 (Ignorant)
(Incompetent) 1 --- 2 --- 3 --- 4 --- 5 --- 6 --- 7 (Competent)

If we wanted to construct a scale that measured a positive impression of the president, clearly low numbers on the first item represent a positive impression and high numbers on the second item represent a positive impression. Because we are calling the scale a positive impression scale, we want large scores to represent the more positive impressions. A larger score on the second item already represents a positive impression. The first item, however, needs to be reverse-keyed. To do that, we convert the numbers marked by the participant into a score in which 1 becomes 7, 2 becomes 6, 3 becomes 5, and so on, up to 7 becoming 1 (on a 7-point scale, the reversed score is simply 8 minus the original score). We can do that manually, but most computer analysis programs that combine individual items to produce a scale score are set up to allow us to easily indicate the direction of keying. If you use a computer program to compute a scale score, it is a good idea to compute the correlation of each item with the total score. This step often allows you to easily identify items that you have inadvertently miskeyed, because they will have a negative correlation with the total score.
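As an illustration only, the reverse-keying and item-total correlation steps might look like the sketch below. The response matrix is invented, and the sketch assumes the 8-minus-original rule for reversing a 7-point item.

```python
import numpy as np

# Hypothetical responses to the two presidential-impression items above
# (rows = respondents, columns = items; item 1 not yet reverse-keyed).
responses = np.array([
    [1, 7],
    [2, 5],
    [5, 3],
    [7, 1],
])

scored = responses.astype(float)
scored[:, 0] = 8 - scored[:, 0]  # reverse-key item 1 on the 7-point scale
totals = scored.sum(axis=1)      # positive-impression scale score

# Item-total correlations; a negative value flags a likely miskeyed item.
for item in range(scored.shape[1]):
    r = np.corrcoef(scored[:, item], totals)[0, 1]
    print(f"item {item + 1}: r = {r:.2f}")
```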

Testing Hypotheses

Once scores are obtained, statistical analyses can be carried out to test any number of hypotheses that you might have generated as part of your initial research design. We might start with a relatively simple hypothesis, such as "Compared with female students, male students will have significantly higher evaluations of campus security." Clearly, to test that hypothesis the responses for the females would be compared with the responses for the males. A statistical test of significance of the difference between groups (male/female) would be conducted. In similar fashion, the same survey could be used on several different college campuses. Comparisons could then be made among those campuses, and conclusions drawn about the differences among those campuses in how their students view their own security departments.
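As a sketch of the form such an analysis might take, an independent-samples t test could be run with a statistics package such as SciPy. The total scores below are invented purely to show the mechanics, not real data.

```python
from scipy import stats

# Hypothetical total evaluation scores (10-70) for two groups of students.
male_totals = [52, 48, 60, 45, 55, 58, 50]
female_totals = [44, 47, 41, 50, 43, 46, 49]

# Independent-samples t test of the difference between the two group means.
t_stat, p_value = stats.ttest_ind(male_totals, female_totals)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```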

We might test somewhat more complex hypotheses, such as those involving predictions about the relationship between student demographic factors and student evaluations of security. Here are two such hypotheses:
Hypothesis 1: As students proceed through college, their evaluations of the campus security department will become more positive.
Hypothesis 2: There is a positive correlation between students' grades and their evaluation of campus security.

To test those hypotheses, correlational analyses would be carried out to determine the direction and strength of the relationship between the variables.
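For example, Hypothesis 2 could be examined with a Pearson correlation, as in the sketch below. The grade and evaluation values are invented solely to show the form of the analysis.

```python
from scipy import stats

# Hypothetical grade-point averages and total evaluation scores.
gpa = [2.1, 2.8, 3.0, 3.3, 3.6, 3.9]
evaluations = [38, 42, 45, 51, 55, 60]

# Pearson r gives both the direction (sign) and the strength of the relationship.
r, p_value = stats.pearsonr(gpa, evaluations)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```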

Surveys are extremely useful research procedures and can be used to answer many questions. The exercises that follow and the reference list will give you more understanding and information about surveys.
