By Dr. Joe Pugh
My dissertation was my first foray into quantitative research. While planning the survey, I soon became confused by the terminology used to describe the wide variety of possible responses. I also found that I was not alone.
Differing views have existed for several decades as to how various types of responses to a survey should be counted when calculating its response rate. In 1975, the Survey Research Section of the American Statistical Association conducted a meta-analysis of twenty-six federally funded and ten non-federal survey studies. The researchers found that response rates were difficult to compare: rates carried different names and different definitions in different circumstances, making them incomparable across surveys.
For example, one survey reported a response rate of .90 that was actually .56 once substitutions were properly accounted for. Another rate reported as .76 fell to .50 when persons with unpublished phone numbers, out-of-order telephones, and certain other cases were counted as eligible units. Correct calculations were found in only half of the surveys analyzed.
According to the American Association for Public Opinion Research (AAPOR), standardized definitions for response rates did not exist until the last decade. Response rates (the proportion of survey invitees who respond in some way to a survey invitation), cooperation rates (positive responses to an invitation), and completion rates (cooperative responses that yield usable data) have often been treated as interchangeable terms. Until recently, there was also no agreement on how to calculate response rates, even in studies that recognized these distinctions; response rate calculation and reporting formats have been largely at the discretion of the researcher.
The AAPOR best practices call for a detailed accounting of the disposition of each case in the survey. The disposition of a survey case is the final outcome for each unit in the population identified for inclusion in the survey. Examples include the total number of sample elements contacted by the researcher; sample members who complete interviews or questionnaires; population members who could not be contacted; refusals (persons contacted who decline to participate); terminations (persons who begin the survey but stop before completing it); and non-eligibles (persons found ineligible to participate in the survey). The challenge for researchers is to report response rates as defined by, and calculated according to, the formulae prescribed by the AAPOR.
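As a sketch of what such an accounting might look like in practice, the snippet below tallies final-disposition codes for a small hypothetical sample; the category labels and counts are my own illustrations, not the official AAPOR disposition codes.

```python
from collections import Counter

# Hypothetical final disposition recorded for each sampled case
# (labels are illustrative, not the official AAPOR codebook)
cases = [
    "complete", "refusal", "complete", "non_contact",
    "termination", "ineligible", "complete", "refusal",
]

# A detailed accounting: one count per disposition category
tally = Counter(cases)

# Cases that remain in a response-rate denominator after
# removing units found to be ineligible
eligible = sum(n for code, n in tally.items() if code != "ineligible")

print(tally)     # counts per disposition
print(eligible)  # eligible cases in this toy sample
```

Keeping the raw tally (rather than only the final rate) is what allows a reader to recompute any of the outcome rates from the same data.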
The Standard Definitions of the AAPOR prescribes four types of survey outcome rates. Response rates measure the proportion of the sample that responds to the survey; six response rates can be calculated, differing in how they account for partially completed surveys, refusals, and ineligible units in the population. Cooperation rates measure the proportion of the contacted survey population that actually provides data; four cooperation rates can be calculated, differing in how they account for members who partially complete the instrument and those who are willing but unable to participate. Refusal rates account for survey population units that refuse to participate, begin but terminate participation, or cannot be contacted. Contact rates account for the portion of the survey population for whom there is any evidence that the individuals were actually contacted by a surveyor.
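A minimal sketch of how a few of these rates can be computed from final-disposition counts, using the AAPOR formulas for RR1, RR2, COOP1, and CON1 as I understand them; the function name, argument names, and the worked numbers are my own:

```python
def aapor_rates(I, P, R, NC, O, UH=0, UO=0):
    """Selected AAPOR outcome rates from final-disposition counts.

    I  = complete interviews        R  = refusals and break-offs
    P  = partial interviews         NC = non-contacts
    O  = other eligible non-respondents
    UH, UO = cases of unknown eligibility
    """
    # Eligible and unknown-eligibility cases form the denominator
    denom = (I + P) + (R + NC + O) + (UH + UO)
    return {
        "RR1": I / denom,                    # minimum response rate
        "RR2": (I + P) / denom,              # partials count as respondents
        "COOP1": I / ((I + P) + R + O),      # cooperation among contacted cases
        "CON1": ((I + P) + R + O) / denom,   # contact rate
    }

# Hypothetical dispositions for a sample of 1,000 cases
rates = aapor_rates(I=560, P=40, R=200, NC=150, O=50)
print(rates)  # RR1 = 0.56, RR2 = 0.60
```

Note how RR1 and RR2 share a denominator and differ only in whether partial interviews count as responses, which is exactly the kind of choice that made unreported calculations incomparable.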
The Standard Definitions of the AAPOR provides guidance for calculating outcome rates for in-person household, telephone, mail, and Internet surveys. The document also provides precise categories for all possible survey case outcomes.
If you would like to discuss this topic in more detail, or would like information on sources, please feel free to contact me at email@example.com