Research shows mass communications studies rely heavily on nonrepresentative samples


LAWRENCE — For decades, academic studies have sought to understand the effects of mass media on people from all walks of life. However, based on the sampling procedures used in a majority of those studies, people should be cautious about assuming the findings apply to all Americans: participant pools are often not representative of the U.S. population and very often consist of college students.

University of Kansas researchers analyzed more than 1,100 quantitative studies conducted in the U.S. and published in the top six mass communications journals from 2000 to 2014 and found that more than 80 percent relied on nonprobability samples. Among surveys, about 70 percent did not use probability samples, and among experiments, only 2.5 percent did, meaning that, from a statistical perspective, the findings cannot be generalized beyond the study samples. The results do not invalidate those studies, the researchers point out, but they do suggest the field can do better in ensuring studies use more representative samples, and that we perhaps do not understand media effects as well as previously thought.

“This is not a critique of the field. It’s a self-examination of the field,” said Joseph Erba, assistant professor of journalism and co-author of the study. “If we want to understand the role of media in society, we must use samples that reflect that society more. It’s as simple as that.”

The study was authored by Erba; Peter Bobkowski, associate professor of journalism; Brock Ternes, lecturer in sociology; and Yuchen Liu and Tara Logan, then-master’s students, all at KU. They presented the findings at the International Communication Association Conference. The authors analyzed the samples, or participant groups, in all quantitative mass communication studies conducted in the United States and published from 2000 to 2014 in the six journals in the field with the highest impact factors. They chose 2000 as a starting point because it was the first year Latinos outnumbered African-Americans in the U.S. Census. Because the demographics of the U.S. population have been changing rapidly, the researchers wanted to see whether that change was reflected in mass communications studies.

Researchers have long known that age, race, gender, income level, education level, place of residence and numerous other factors all affect how individuals interact with media. Yet college students are relied on inordinately as participants in mass media studies. Since at least the late '70s, researchers have pointed out that college students are not representative of the United States as a whole: the demographic tends to be more white, more female, more educated and from a higher socioeconomic background than the majority of the population.

For the study, KU researchers examined two main variables: student vs. nonstudent samples and probability vs. nonprobability samples. In probability sampling, every member of the population under study has a known, nonzero chance of being selected (a rough sketch illustrating the difference follows the list below). They found the following numbers:

  • 82.6 percent of the studies used nonprobability samples.
  • Slightly more than half, 51.1 percent, used college student samples.
  • Surveys used student samples 28.5 percent of the time, compared with 78.7 percent for experiments.
  • Only 19.8 percent of the studies received funding to support the work.
  • Among surveys, 29.6 percent used probability sampling, while only 2.5 percent of experiments did so.
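
To make the probability vs. nonprobability distinction concrete, here is a minimal, hypothetical Python sketch (not from the KU study; the population size and the roughly 5 percent student share are invented for illustration). It draws a simple random sample, in which everyone has an equal chance of selection, and a convenience sample drawn only from students, then compares how well each reflects the population:

```python
import random

# Hypothetical population of 10,000 people; the ~5 percent student share
# is invented for this sketch, not a figure from the KU study.
population = [
    {"id": i, "is_student": random.random() < 0.05}
    for i in range(10_000)
]

# Probability sample: simple random sampling, so every member of the
# population has a known, equal chance of being selected.
probability_sample = random.sample(population, 500)

# Convenience (nonprobability) sample: recruit only from an easily
# reached subgroup -- here, college students.
students = [p for p in population if p["is_student"]]
convenience_sample = students[:500]

def share_students(sample):
    """Proportion of a sample made up of students."""
    return sum(p["is_student"] for p in sample) / len(sample)

print(f"Population:         {share_students(population):.1%} students")
print(f"Probability sample: {share_students(probability_sample):.1%} students")
print(f"Convenience sample: {share_students(convenience_sample):.1%} students")
```

Run repeatedly, the probability sample hovers near the population’s true student share, while the convenience sample is made up entirely of students by construction, so generalizing from it misstates any attitude or behavior correlated with being a student.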

While most studies did not receive funding, funding made a noticeable difference in the samples used: 29 percent of funded studies used probability sampling, compared with 14.3 percent of unfunded studies, and funded surveys and funded experiments alike were more likely to use probability sampling. Funding also made a difference in sample populations: more than 72 percent of funded studies used noncollege student samples, and that majority held in both surveys and experiments. Funding agencies often require researchers to show that they will not rely on convenience samples before awarding funding.

There are numerous reasons why student samples are used so frequently, the authors say. Academics have easy access to students; it does not cost much to enlist them as survey or experiment participants, and data can be obtained quickly. For professors facing tenure deadlines and the pressure to publish quickly, student samples make practical sense. Funding that would support probability samples can also be difficult to obtain amid competition for shrinking research budgets. However, authors should be very clear about their samples and should not extrapolate more from the findings than is prudent. The authors of the analyzed studies were, in fact, clear on those points, the KU authors said, but interpretation and generalization of the findings should be handled carefully.

“One of the main contributions of this study is that we’re relying too much on student samples and nonprobability samples,” Erba said. “We’ve had researchers pulling the alarm on this for 30 plus years, but it’s still happening.”

Generalizing or overextrapolating findings from convenience samples would be akin to taking a survey in New York or Bismarck, North Dakota, and claiming the findings represent the attitudes of the entire United States, Erba said. Ensuring population validity through tested methods and representative samples is imperative to claiming to understand media effects on the national population, the authors said.

“We teach our students every day to be skeptical and to evaluate critically the information and data they come across in the media,” Bobkowski said. “In this study we applied that skeptical approach to the research that we and our colleagues produce. The results reinforce the need for this critical stance. No matter how intuitive or surprising the results of any study appear, we always need to ask who was included in the study and whether the sample can be expected to represent a broad population.”

The authors propose looking to fields such as political science for models of nationally representative sampling. Mass communications researchers could pool resources to build large, nationally representative panels and collect large bodies of data that individual researchers could access and analyze in their own studies. Political science has been very successful with such a model, Bobkowski said. Doing so would match the best practices taught to mass communications students and help avoid problematic generalization of findings, while simultaneously helping the field better understand media effects.

“Our findings show we still have some work to do to enhance the population validity of mass communications research and we should reflect on the theories we’ve built and how many of them apply to wider populations,” Erba said.

Mon, 07/31/2017

Mike Krings