What’s a nationally representative sample? 5 things you need to know to report accurately on research
https://journalistsresource.org/politics-and-government/nationally-representative-sample-research-clinical-trial/
Tue, 09 Jul 2024

Knowing what a nationally representative sample is — and isn’t — will help you avoid errors in covering clinical trials, opinion polls and other research.


Journalists can’t report accurately on research involving human subjects without knowing certain details about the sample of people researchers studied. It’s important to know, for example, whether researchers used a nationally representative sample.

That’s important whether a journalist is covering an opinion poll that asks American voters which presidential candidate they prefer, an academic article that examines absenteeism among U.S. public school students or a clinical trial of a new drug designed to treat Alzheimer’s disease.

When researchers design a study, they start by defining their target population, or the group of people they want to know more about. They then create a sample meant to represent this larger group. If researchers want to study a group of people across an entire country, they aim for a nationally representative sample — one that resembles the target population in key characteristics such as gender, age, political party affiliation and household income.

Earlier this year, when the Pew Research Center wanted to know how Americans feel about a new class of weight-loss drugs, it asked a sample of 10,133 U.S. adults questions about obesity and the effects of Ozempic, Wegovy and similar drugs. Pew designed the survey so that the answers those 10,133 people gave likely reflected the attitudes of all U.S. adults across various demographics.

If Pew researchers had simply interviewed 10,133 people they encountered at shopping malls in the southeastern U.S., their responses would not have been nationally representative. Not only would their answers reflect attitudes in just one region of the country, but the people interviewed also would not resemble U.S. adults nationwide in key characteristics.
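One common way researchers make a raw sample resemble the target population is to weight responses so the sample’s demographic mix matches known benchmarks, such as census figures. The short Python sketch below illustrates the idea with invented numbers and a single weighting variable; real surveys such as Pew’s weight on many variables at once and use more sophisticated methods, so treat this as a simplified illustration rather than a description of any particular poll’s procedure.

import pandas as pd

# Hypothetical survey of five respondents (real surveys have thousands of rows).
survey = pd.DataFrame({
    "age_group": ["18-34", "18-34", "18-34", "35-64", "65+"],
    "approves":  [1, 1, 0, 0, 0],
})

# Assumed population benchmarks, e.g. from census data (shares sum to 1).
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Share of each age group in the raw sample.
sample_share = survey["age_group"].value_counts(normalize=True)

# Weight = population share / sample share, so overrepresented groups count for less.
survey["weight"] = survey["age_group"].map(lambda g: population_share[g] / sample_share[g])

unweighted = survey["approves"].mean()
weighted = (survey["approves"] * survey["weight"]).sum() / survey["weight"].sum()
print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")  # 0.40 vs. 0.20

In this toy example, young adults make up 60% of respondents but only 30% of the assumed population, so weighting pulls the estimate down toward what a representative sample would have shown.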

A nationally representative sample is one of several types of samples used in research. It’s commonly used in research that examines numerical data in public policy fields such as public health, criminal justice, education, immigration, politics and economics.

To accurately report on research, journalists must pay close attention to who is and isn’t included in research samples. Here’s why that information is critical:

1. If researchers did not use a sample designed to represent people from across the nation, it would be inaccurate to report or imply that their results apply nationwide.

A mistake journalists make when covering research is overgeneralizing the results, or reporting that the results apply to a larger group of people than they actually do. Depending on who is included in the sample, a study’s findings might only apply to the people in the sample. Many times, findings apply only to a narrow group of people at the national level who share the same characteristics as the people in the sample — for example, individuals who retired from the U.S. military after 2015 or Hispanic teenagers with food allergies.

To determine who a study is designed to represent, look at how the researchers have defined this target population, including location, demographics and other characteristics.

“Consider who that research is meant to be applicable to,” says Ameeta Retzer, a research fellow at the University of Birmingham’s Department of Applied Health Sciences.

2. When researchers use a nationally representative sample, their analyses often focus on what’s happening at a national level, on average. Because of this, it’s never safe to assume that national-level findings also apply to people at the local level.

“As a word of caution, if you’re using a nationally representative sample, you can’t say, ‘Well, that means in California …,’” warns Michael Gottfried, an applied economist and professor at the University of Pennsylvania’s Graduate School of Education.

When researchers create a nationally representative sample of U.S. grade school students, their aim is to gain a better understanding of some aspect of the nation’s student population, Gottfried says. What they learn will represent an average across all students nationwide.

“On average, this is what kids are doing, this is how kids are doing, this is the average experience of kids in the United States,” he explains. “The conclusion has to stay at the national level. It means you cannot go back and say kids in Philadelphia are doing that. You can’t take this information and say, ‘In my city, this is happening.’ It’s probably happening in your city, but cities are all different.”

3. There’s no universally accepted standard for representativeness.

If you read a lot of research, you’ve likely noticed that what constitutes a nationally representative sample varies. Researchers investigating the spending habits of Americans aged 20 to 30 years might create a sample that represents this age group in terms of gender and race. Meanwhile, a similar study might use a sample that represents this age group across multiple dimensions — gender, race and ethnicity along with education level, household size, household income and the language spoken at home.

“In research, there’s no consensus on which characteristics we include when we think about representativeness,” Retzer notes.

Researchers determine whether their sample adequately represents the population they want to study, she says. Sometimes, researchers call a sample “nationally representative” even though it’s not all that representative.

Courtney Kennedy, vice president of methods and innovation at Pew Research Center, has questioned the accuracy of election research conducted with samples that only represent U.S. voters by age, race and sex. It’s increasingly important for opinion poll samples to also align with voters’ education levels, Kennedy writes in an August 2020 report.

“The need for battleground state polls to adjust for education was among the most important takeaways from the polling misses in 2016,” Kennedy writes, referring to the U.S. presidential election that year.
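To see why the adjustment Kennedy describes matters, here is a minimal simulation, with all numbers invented for illustration. It assumes college graduates both answer polls at a higher rate and favor one candidate more than non-graduates do, so an estimate that ignores education overstates that candidate’s support until education is brought into the weighting.

import numpy as np

rng = np.random.default_rng(0)

# Assumed electorate: 40% college graduates, 60% without a degree.
# Assumed support for Candidate A: 60% among graduates, 45% among everyone else.
true_support = 0.40 * 0.60 + 0.60 * 0.45  # 0.51 across the full electorate

# Simulate a poll in which graduates respond at twice the rate of non-graduates,
# so they end up as roughly 57% of respondents instead of 40%.
n = 10_000
share_grad_respondents = (0.40 * 2) / (0.40 * 2 + 0.60 * 1)
is_grad = rng.random(n) < share_grad_respondents
supports = np.where(is_grad, rng.random(n) < 0.60, rng.random(n) < 0.45)

unweighted = supports.mean()

# Reweight so graduates count for 40% of the sample, matching the electorate.
weights = np.where(is_grad, 0.40 / is_grad.mean(), 0.60 / (1 - is_grad.mean()))
weighted = (supports * weights).sum() / weights.sum()

print(f"true support:        {true_support:.3f}")
print(f"unweighted estimate: {unweighted:.3f}")  # noticeably too high
print(f"education-weighted:  {weighted:.3f}")    # close to the truth

The same logic applies to any characteristic that is related both to people’s willingness to take surveys and to the outcome being measured.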

4. When studying a nationwide group of people, the representativeness of a sample is more important than its size.

Journalists often assume larger samples provide more accurate results than smaller ones. But that’s not necessarily true. Actually, what matters more when studying a population is having a sample that closely resembles it, Michaela Mora explains on the website of her research firm, Relevant Insights.

“The sheer size of a sample is not a guarantee of its ability to accurately represent a target population,” writes Mora, a market researcher and former columnist for the Dallas Business Journal. “Large unrepresentative samples can perform as badly as small unrepresentative samples.”

If a sample is representative, larger samples are more helpful than smaller ones. Larger samples allow researchers to investigate differences among sub-groups of the target population. Having a larger sample also improves the reliability of the results.
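A quick simulation, again using invented numbers, illustrates the point: a very large sample drawn from the wrong group can miss the mark by more than a modest sample drawn at random from the population of interest.

import numpy as np

rng = np.random.default_rng(1)

# Assumed population of one million adults; 52% hold the opinion of interest.
population = rng.random(1_000_000) < 0.52

# Large but unrepresentative sample: drawn only from a subgroup (say, one region)
# where the opinion is held at a different rate (60%).
biased_sample = rng.random(50_000) < 0.60

# Small but representative sample: 1,000 people drawn at random from everyone.
random_sample = rng.choice(population, size=1_000, replace=False)

print(f"true rate:                   {population.mean():.3f}")
print(f"50,000-person biased sample: {biased_sample.mean():.3f}")  # far from the truth
print(f"1,000-person random sample:  {random_sample.mean():.3f}")  # close to the truth

Once a sample is representative, adding respondents mainly narrows the margin of error and makes subgroup comparisons possible, which is the benefit described above.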

5. When creating samples for health and medical research, prioritizing certain demographic groups or failing to represent others can have long-term impacts on public health and safety.

Retzer says that too often, the people most likely to benefit from a new drug, vaccine or health intervention are not well represented in research. She notes, for example, that even though people of South Asian descent are more likely to have diabetes than people from other ethnic backgrounds, they are vastly underrepresented in research about diabetes.

“You can have the most beautiful, really lovely diabetes drug,” she says. “But if it doesn’t work for the majority of the population that needs it, how useful is it?”

Women remain underrepresented in some areas of health and medical research. It wasn’t until 1993 that the National Institutes of Health began requiring that women and racial and ethnic minorities be included in research funded by the federal agency. Before that, “it was both normal and acceptable for drugs and vaccines to be tested only on men — or to exclude women who could become pregnant,” Nature magazine points out in a May 2023 editorial.

In 2022, the U.S. Food and Drug Administration issued guidance on developing plans to enroll more racial and ethnic minorities in clinical trials for all medical products.

When journalists cover research, Retzer says it’s crucial they ask researchers to explain the choices they made while creating their samples. Journalists should also ask researchers how well their nationally representative samples represent historically marginalized groups, including racial minorities, sexual minorities, people from low-income households and people who don’t speak English.

“Journalists could say, ‘This seems like a really good finding, but who is it applicable to?’” she says.

The Journalist’s Resource thanks Chase Harrison, associate director of the Harvard University Program on Survey Research, for his help with this tip sheet.  

Why people think the economy is doing worse than it is: A research roundup
https://journalistsresource.org/economics/economy-perception-roundup/
Fri, 12 Jan 2024

We explore six recent studies that can help explain why there is often a disconnect between how national economies are doing and how people perceive economic performance.

The U.S. economy is in good health, on the whole, according to national indicators watched closely by economists and business reporters. For example, unemployment is low and the most recent jobs report from the Bureau of Labor Statistics shows stronger than expected hiring at the end of 2023.

Yet news reports and opinion polls show many Americans are pessimistic about the economy — including in swing states that will loom large in the 2024 presidential election. Some recent polls indicate the economy is by far the most important issue heading into the election.

Journalists covering the economy in the coming months, along with the 2024 political races, can use academic research to inform their interviews with sources and provide audiences with context.

The studies featured in the roundup below explore how people filter the national economy through their personal financial circumstances — and those circumstances vary widely in the U.S. The top tenth of households by wealth are worth $7 million on average, while the bottom half are worth $51,000 on average, according to the Institute for Economic Equity at the Federal Reserve Bank of St. Louis.

We’ve also included several questions, based on the research, which you can use in interviews with policy makers and others commenting on the economy, or as a jumping off point for thinking about this topic.

The economic state of play

Toward the end of 2023, inflation was down substantially from highs reached during the latter half of 2022, according to data from the Center for Inflation Research at the Federal Reserve Bank of Cleveland.

Gross domestic product growth in the third quarter of 2023, the most recent data available, is in line with or better than most readings over the past 40 years. GDP measures the market value of all final goods and services a country produces within a given year.

Unemployment has been below 4% for two years, despite the recently high inflation figures. The U.S. economy added 216,000 jobs in December 2023, beating forecasts.

The average price of a gallon of regular unleaded gasoline is nearly $3 nationally, down more than 70 cents since August 2023 (though attacks on Red Sea shipping lanes have recently driven up oil prices).

Despite these positive indicators, one-third of voters rate the economy generally, or inflation and cost of living specifically, as “the most important problem facing the country today,” according to a December 2023 poll of 1,016 registered voters conducted by the New York Times and Siena College.

The economy was the most pressing issue for voters who responded to that poll, outpacing immigration, gun policy, crime, abortion and other topics.

Crime, for example, registered only 2%, while less than 1% chose abortion as the most important problem in the country. The margin of error for the poll is +/- 3.5 percentage points, so the gap between the economy and every other issue is far too large to be explained by sampling error.
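For reference, a poll’s reported margin of error is driven mainly by its sample size. The back-of-the-envelope calculation below assumes simple random sampling and 95% confidence; published margins, such as the Times/Siena poll’s +/- 3.5 points, typically run a bit higher because weighting and survey design add uncertainty.

import math

n = 1016  # respondents in the Times/Siena poll
p = 0.5   # worst-case proportion, which gives the widest margin
z = 1.96  # multiplier for 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {margin * 100:.1f} percentage points")  # about +/- 3.1 for n = 1,016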

Similarly, 78% of people who responded to a December 2023 Gallup poll rated current economic conditions as fair or poor. Gallup pollsters have reported similar figures since COVID-19 shutdowns began in March 2020.

Explanations and insights

The tone of news coverage is one possible explanation for the disconnect between actual economic performance and how individuals perceive it, according to a recent Brookings Institution analysis. Since 2018 — including during and after the recession sparked by COVID-19 — economic reporting has taken on an increasingly negative tone, despite economic fundamentals strengthening in recent years, the analysis finds. The Brookings authors use data from the Daily News Sentiment Index, a measure of “positive” and “negative” economic news, produced by the Federal Reserve Bank of San Francisco.

The six studies featured below offer further insights. All are based on surveys and polls, some of which the researchers conducted themselves. Several also explore economic perception in other countries.

The findings suggest:

  • Economic inequality tends to lead people into thinking the economy is zero-sum, meaning one group’s economic success comes at the expense of others.
  • In both wealthy and poorer countries, belief in conspiracy theories is associated with a sense that the economy is declining — things were once OK, now they are not.
  • In the U.S., political partisanship may be a more accurate predictor of economic perception than actual economic performance.
  • Households at higher risk of experiencing poverty are less likely to offer a positive economic assessment, despite good macroeconomic news.

Research roundup

Economic Inequality Fosters the Belief That Success Is Zero-Sum
Shai Davidai. Personality and Social Psychology Bulletin, November 2023.

The study: How does economic inequality in the U.S. affect whether individuals think prosperity is zero-sum? A zero-sum outlook indicates that “the gains of the few come at the expense of the many,” writes Davidai, an assistant professor of business at Columbia University, who surveyed 3,628 U.S. residents across 10 studies to explore the relationship between inequality and zero-sum thinking.  

In one study, participants answered questions about how they experience economic inequality in their personal lives. In another, participants read about the salaries of 20 employees at one of three randomly assigned hypothetical companies. The first company had a range of very low and very high salaries. The second had high salaries with little variation. The third had low salaries with little variation. Participants were then asked if they interpreted the distribution of wages as equal or unequal, as well as about their political ideology. Davidai uses several other designs across the 10 surveys.

The findings: Participants who read about the company with highly unequal salaries were more apt to report zero-sum economic beliefs. Overall, when controlling for factors including income, education and political ideology, the perception of the existence of economic inequality led to more zero-sum thinking. Davidai also suggests that while people with higher incomes may not perceive their own success as having resulted from others experiencing loss, the existence of inequality could lead them to think generally about economic gains in zero-sum terms.

The author writes: While “the belief that the rich gain at the expense of the poor helps explain why perceived inequality fosters a view of the world as unjust,” Davidai notes that “zero-sum beliefs may also have some positive implications. Since perceived inequality fosters a view of ‘the rich’ as gaining at others’ expense, it may bolster support for disparity-mitigating policies.”

Multinational Data Show that Conspiracy Beliefs are Associated with the Perception (and Reality) of Poor National Economic Performance
Matthew Hornsey, et al. European Journal of Social Psychology, October 2022.

The study: Using survey data from 6,723 university students across 36 countries, the authors investigate the relationship between belief in conspiracy theories and negative views of current and future national economic success, measured by GDP.

Participants were from Turkey, Colombia, Nigeria, Brazil, Japan, Canada, China, France, Latvia, the United Kingdom and the U.S., among others. The same conspiracy can have different levels of meaning to people in different countries. For example, “conspiracy theories about the death of Princess Diana might have different cultural relevance in the United Kingdom than they do in China.” For this reason, the authors kept their questions high-level and asked about participants’ willingness to believe that government actors collude in a systematic way to “hide the truth” from the public.

The findings: People were more likely to believe in conspiracies in countries with relatively low GDP per capita. Conspiracy beliefs were also higher among participants who thought their nation was in poor economic health. Individual financial circumstances among participants were not related to a greater propensity to believe in conspiracies, though the authors note this finding is not generalizable, as their sample consisted of young, mostly middle-class university students. In other words, their sample was not representative of overall national demographics.

The authors write: “These relationships did not seem to be a reflection of a general disagreeable orientation; indeed, the more strongly people self-reported having conspiracy beliefs, the more positively they reported the economic performance of the country in the past. As such, those high in conspiracy belief were characterized by a sense of economic deterioration: things were good once, but not so much now and going forward.”

Evaluating the Unequal Economy: Poverty Risk, Economic Indicators, and the Perception Gap
Timothy Hellwig and Dani Marinova. Political Research Quarterly, March 2022.

The study: Broad, aggregate measures of a nation’s economic health, such as GDP, fail to capture individual economic experiences, the authors write. Instead, the authors analyze poverty risk. To do that, they examined data from 27 countries from European Union Statistics on Income and Living Conditions surveys, along with public opinion surveys. Data for each country is either from 2009 or 2014. The focus on poverty “captures not only currently experienced problems but also anticipated ones,” the authors write. Members of households at risk of experiencing poverty were relatively less likely to say that the economy had improved over the past year.

The findings: The authors take the position that “poverty risk serves as a filter for macroeconomic information.” For those with no risk of experiencing poverty, strong GDP performance means an overall rosy economic outlook. But those at a higher risk of poverty are less likely to offer a positive assessment of an economy, despite good macroeconomic news, such as GDP growth or low unemployment numbers. Aggregate national data points are “far removed from their daily struggles,” as the authors put it.

The authors write: “We further show that those at risk of poverty know less about economic performance by standard economic indicators but offer more accurate estimates of national poverty rates. These novel findings underline the need to depart from familiar indicators and address how unequal economies structure preferences and policy responses.”

Perception of Economic Inequality Weakens Americans’ Beliefs in Both Upward and Downward Socioeconomic Mobility
Alexander Browman, Mesmin Destin and David Miele. Asian Journal of Social Psychology, March 2022.

The study: Economic mobility works both ways. It can mean moving up the economic ladder or moving down. The authors explore the relationship between the belief that wealth in the U.S. is distributed unequally, with a small number of people holding a disproportionate share, and the belief that stratified economic classes are unlikely to shift. The authors conducted three online surveys, not representative of national demographics, among 618 U.S. adults in 2018 and 2020.

The findings: Participants who strongly believed that a small number of individuals hold most of the wealth in the U.S. were more likely to also believe that the rich would stay rich while people with low income were unlikely to climb the economic ladder.

The researchers also found that Americans’ perceptions of inequality may be becoming more accurate — the participants in the 2020 study more accurately estimated the wealth held by the nation’s richest 20%, compared with participants in the surveys conducted two years prior.

The authors write: “[Participants] believed that social class groups in their country were largely ossified and impermeable, and thus that Americans were unlikely to move out of the groups they were born into.”

Cognitive Political Economy: A Growing Partisan Divide in Economic Perceptions
David Brady, John Ferejohn and Brett Parker. American Politics Research, January 2022.

The study: The authors explore what they call a “puzzling” gap in political research: how being a Democrat or Republican might affect people’s perception of whether the national economy is doing well or poorly. Namely, the paper aims to reveal whether the difference in partisan perception has widened across more than two decades, as well as the root causes of any change. The authors examine results from 234 monthly Gallup polls conducted between 1999 and 2020 to see how people responded to Gallup’s ongoing question asking whether the economy is getting better or worse.

The findings: By the end of the period studied, both Democrats and Republicans were more likely to have a dim view of the economy when a president of the opposing party was in office, compared with the beginning of the period studied. The authors observe the biggest gap in September 2020. At that time — roughly six months after widespread COVID-19 shutdowns began — 78% of Republicans thought the economy was getting better, compared with 5% of Democrats. Democrats and Republicans were most closely in agreement on how the economy was doing in January 2009, when only 17% of participants from either party thought the economy was on an upswing, a reflection of the recession happening at the time.

Democrats were more optimistic about the economy when a Democrat was president and the same was true for Republicans during Republican administrations. Independents were not swayed by the party holding the presidency. There were two recessions during the period studied. The authors find that data about the economy is only relevant to economic perceptions during the recessions, “otherwise, individuals are largely content to rely on their partisan affiliation.”

The authors write: “Whatever the role of economic factors in partisan economic perceptions, it is nevertheless clear that political variables are of primary importance. Moreover, it appears the influence of those variables is becoming more pervasive.”

Economic Self-Interest and Americans’ Redistributive, Class, and Racial Attitudes: The Case of Economic Insecurity
Cody Melcher. Political Behavior, March 2021.

The study: Melcher, a sociologist at Loyola University New Orleans, uses the 2016 American National Election Studies survey of more than 4,000 U.S. adults to examine how they perceive their personal economic health, currently and over the coming year. He then examines how these economic perceptions affect participants’ social and political views.

The findings: Respondents worried about experiencing economic hardship in the future tended to have a negative attitude toward powerful and rich business entities. The same negative attitude toward big business was expressed by people expecting to become unemployed during the coming year. Those with high anxiety over facing general economic hardship were also more likely to agree that the federal government should enact policies aimed at improving employment. Those specifically expecting job loss were less likely to perceive the U.S. as a country that, broadly, offers economic opportunities. They also were more likely to align with “the perception that ‘many whites are unable to find a job because employers are hiring minorities instead.’”

The author writes: “The evidence presented here makes it clear that existing measures and conceptualizations of economic self-interest — and the body of empirical work that discounts economic factors in American public opinion — need to be rethought in light of economic insecurity.”

Questions for sources

Here are some questions you may find worth asking sources, based on this research:

  • Do people with lower incomes see economic gains for some as coming at the expense of others? Or do they see the economy as an ever-expanding pie, able to accommodate everyone with the ability and desire to profit? The answers may help explain individual rationales behind perceptions of national economic performance.
  • Among people who believe in conspiracies, did that belief coincide with a personal economic shock — for example, job loss or a major medical expense?
  • Are people able to reflect on the specific reasons they think the economy is doing well when a president of their same political party is in office? For people who have voted in several presidential elections, do they feel they are now less likely than in the past to think the economy would be in good hands under a president of the opposing party?

‘Horse race’ reporting of elections can harm voters, candidates, news outlets: What the research says
https://journalistsresource.org/politics-and-government/horse-race-reporting-election/
Mon, 23 Oct 2023

Our updated roundup of research looks at the consequences of one of the most common ways journalists cover elections — with a focus on who’s in the lead and who’s behind instead of policy issues.

This collection of research on horse race reporting, originally published in September 2019 and periodically updated, was last updated on Oct. 23, 2023 with recent research on third-party political candidates, probabilistic forecasting and TV news coverage of the 2020 presidential election.

When journalists covering elections focus primarily on who’s winning or losing instead of policy issues — what’s known as horse race coverage — voters, candidates and the news industry itself suffer, a growing body of research suggests.

Media scholars have studied horse race reporting for decades to better understand the impact of news stories that frame elections as a competitive game, relying heavily on public opinion polls and giving the most positive attention to frontrunners and underdogs who are gaining support. It’s a common strategy for political news coverage in the U.S. and other parts of the globe.

Thomas E. Patterson, professor of government and the press at the Harvard Kennedy School of Government, says U.S. election coverage often does not delve into policy issues and candidates’ stances on them. In fact, policy issues accounted for 10% of news coverage about the 2016 presidential election that Patterson examines in his December 2016 working paper, “News Coverage of the 2016 General Election: How the Press Failed the Voters.” The bulk of the reporting concentrated on who was winning and losing and why.

When he looked at how CBS and Fox News covered the 2020 presidential election in their evening newscasts, he found similar patterns. For example, three-fourths of the stories the CBS Evening News ran on Democratic candidate Joe Biden focused on the horse race, as did a third of its stories about Republican candidate Donald Trump, Patterson writes in his December 2020 working paper, “A Tale of Two Elections: CBS and Fox News’ Portrayal of the 2020 Presidential Campaign.”

In both papers, Patterson notes this type of reporting can help some candidates while hurting others.

“[These reports] tend to be a source of positive news for the candidate who’s ahead in the race, except when that candidate is slipping in the polls,” he writes in his 2020 analysis. “Speculation about the reasons for the decline then drive the story, and there’s nothing positive about that narrative.”

Dozens of academic studies chronicle the dangers of horse race journalism. Scholars find it’s associated with:

  • Distrust in politicians.
  • Distrust of news outlets.
  • An uninformed electorate.
  • Inaccurate reporting of opinion poll data.

Studies also indicate horse race reporting can:

  • Shortchange female candidates, who tend to focus on policy issues to build their credibility.
  • Give novel or unusual candidates an edge.
  • Hurt third-party candidates, who often are overlooked or ignored by newsrooms because their chances of winning are usually quite slim compared with those of Republican and Democratic candidates.

In recent years, scholars have begun investigating the impact of a relatively new type of horse race journalism: probabilistic forecasting. Some newsrooms have the resources and expertise to conduct sophisticated analyses of data collected from multiple opinion polls to more precisely predict candidates’ chances of winning. This allows news outlets to present polling data as the percentage likelihood that one candidate will win over another candidate.

The research to date indicates probabilistic forecasting can confuse voters and possibly lead them to believe an election outcome is more certain than it actually is. Researchers worry that will affect voter turnout — when people doubt their votes will make a difference, many might not bother turning in a ballot.

Part of the problem is some voters misinterpret probabilistic forecasting, researchers explain in a January 2023 study in the journal Judgment and Decision Making. They don’t understand the difference between a candidate’s probability of winning and their predicted vote share.

“A vote share of 60% is a landslide win, but a win probability of 60% corresponds to an essentially tied election,” write the researchers, led by Andrew Gelman, a professor of statistics and political science at Columbia University.
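The distinction can be made concrete with a toy model. The sketch below treats a candidate’s forecast vote share as normally distributed around a point estimate; the specific means and standard deviation are invented for illustration and are not taken from any published forecast.

from statistics import NormalDist

def win_probability(expected_share: float, sd: float) -> float:
    """Probability the candidate tops 50% of the vote, under a normal model."""
    return 1 - NormalDist(mu=expected_share, sigma=sd).cdf(50.0)

# An expected vote share just above 50% already yields a roughly 60% win probability...
print(f"{win_probability(50.5, sd=2.0):.2f}")  # ~0.60, an essentially tied race

# ...while a 60% expected vote share, a landslide, is a near-certain win.
print(f"{win_probability(60.0, sd=2.0):.2f}")  # ~1.00

In other words, “60%” can describe either a coin-flip race or a blowout, depending on whether it refers to win probability or vote share.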

Research to help journalists understand the pitfalls of horse race reporting

Journalists wanting to know more about the consequences of horse race reporting, keep reading. Below, we’ve gathered and summarized academic studies that examine the topic from various angles. For additional context, we included several studies that look at how journalists use — and sometimes misuse — opinion polls. We’ll update this roundup of research periodically, as new studies are released.

If you need help reporting on polls, please read our tip sheet on questions journalists should ask when covering them and our tip sheet on interpreting margins of error.

Also, because it’s unlikely newsrooms will stop covering elections as a competitive game, we created a tip sheet to help them improve. Check out “‘Horse Race’ Coverage of Elections: What to Avoid and How to Get It Right.”

The consequences of horse race reporting

The Polls and the U.S. Presidential Election in 2020 … and 2024
Arnold Barnett and Arnaud Sarfati. Statistics and Public Policy, May 2023.

This study looks at how accurately FiveThirtyEight, which aggregates opinion polls and publishes news stories about the results, predicted the outcomes of America’s 2020 presidential election. The authors find it “did an excellent job” predicting who would win in each state but underestimated Donald Trump’s vote share by state by a “modest” amount.

Once a standalone news site, FiveThirtyEight, which recently became 538, was incorporated into ABC News’ website after the news organization acquired it in 2018.

The authors note it’s important to gauge how accurately predictions were made because voters’ perceptions of how polls performed in the most recent presidential election can have consequences for the next one. The accuracy of 2024 presidential polls “is already a live issue at the start of 2023,” write the authors, Arnold Barnett, a professor of management science and statistics at the MIT Sloan School of Management, and Arnaud Sarfati, a graduate student at MIT at the time the paper was written.

“If such polls — as distilled by a respected aggregator like FiveThirtyEight — are viewed as trustworthy, they could affect the intensity of pressure on Joe Biden to retire,” Barnett and Sarfati write. “They could influence Republican voters in state primaries who wonder whether Donald Trump could plausibly win reelection. The potential candidacies of Democrats like Amy Klobuchar or Republicans like Ron DeSantis could rise or fall with their standings in voter surveys.”

The analysis finds FiveThirtyEight underestimated Trump’s vote share in each state by an average of 1.90 percentage points. Trump outperformed FiveThirtyEight’s estimates in both heavily Democratic states and heavily Republican states.

Barnett and Sarfati write that “it is concerning that, for the second election in a row, the polls underestimated the support for Donald Trump and FiveThirtyEight did not devise an appropriate adjustment for the downward bias.”

A possible reason for the shortfall, according to the authors: Trump supporters might have been more likely to refuse to participate in voter surveys than Trump opponents. 

“While one hopes that lessons from 2020 will avoid the problem in 2024, there is no certainty that this will be the case,” Barnett and Sarfati write.

Information, Incentives, and Goals in Election Forecasts
Andrew Gelman, Jessica Hullman, Christopher Wlezien and George Elliott Morris. Judgment and Decision Making, January 2023.

In this paper, scholars offer a highly technical analysis of probabilistic forecasts of elections in the U.S. and how they are communicated to the public. They also make recommendations aimed at helping the public understand how these forecasts are made and how results should be interpreted.

The scholars point out that “forecasters have some responsibility to take into account what readers may do with a visualization or statement of forecast predictions.” They suggest researchers and news outlets work together to figure out the best ways to present this information to the public.

“Designing a forecast without any thought to how it may play into readers’ decisions seems both impractical and potentially unethical,” write the four researchers: Gelman, of Columbia University; Jessica Hullman, an associate professor of computer science at Northwestern University; Christopher Wlezien, a professor of government at the University of Texas at Austin; and George Elliott Morris, the editorial director of data analytics at ABC News.

“In general, we think that more collaboration between researchers invested in empirical questions around uncertainty communication and journalists developing forecast models and their displays would be valuable,” they add.

The authors suggest researchers and journalists work together to improve election predictions and news outlets’ methods of communicating results. If that information is well presented and explained, the public’s ability to interpret forecasts correctly could develop over time, they write.

“Naturally, adding too much information risks overwhelming readers,” they add. “The majority spend only a few minutes on the websites, and may feel overwhelmed by concepts such as correlation that forecasters will view as both simple and important, but are largely beside the point of the overall narrative of the forecast. Still, increasing readers’ literacy about model assumptions could happen in baby steps: a reference to a model assumption in an explanatory annotation on a high level graph, or a few bullets at the top of a forecast display describing information sources to whet a reader’s appetite.”

Third-Party Candidates, Newspaper Editorials, and Political Debates
John F. Kirch. Newspaper Research Journal, May 2022.

News outlets exclude or limit coverage of third-party political candidates, even when those candidates are legitimate contenders, suggests this analysis of editorials in the Washington Post and 12 other newspapers that report on Virginia politics.

When Towson University journalism professor John Kirch looked at how these newspapers’ editorial staff characterized candidates in the 2013 gubernatorial race in Virginia, he discovered they often excluded Libertarian candidate Robert Sarvis. He was mentioned in 28.8% of all editorials that ran between Sept. 4, 2013 and Nov. 6, 2013. The Republican candidate, Ken Cuccinelli, appeared in 91.9% of editorials and the Democratic candidate, Terry McAuliffe, appeared in 73.9%.

The Washington Post did not mention Sarvis at all during that period.

Kirch writes that Virginia’s 2013 gubernatorial campaign makes for a good case study because Sarvis was a candidate with strong academic and professional credentials who had run as a Republican for a state senate seat in 2011.

“He ran against two highly unpopular major-party candidates, whose approval ratings were below 50% for most of the campaign,” Kirch writes. “And Sarvis was a serious candidate, which is defined in this study as one who received at least 5% support in the polls, the threshold used by the federal government to determine whether a candidate is eligible for public financing. If ever there was a gubernatorial campaign in which newspaper editorials would consider endorsing or advocating for a third-party candidate’s inclusion in debates, it is the Virginia race.”

When the newspapers’ editorials did mention Sarvis, they sometimes labeled him as a long shot, a spoiler or a protest vote rather than a serious competitor. They never mentioned his education or career as an economist, mathematician or businessman. Meanwhile, the Democratic candidate was identified as the former head of the Democratic Party in 25.6% of the editorials in which he appeared and identified by his occupation as a businessman in 18.3%. The Republican candidate was described as the state’s attorney general in 55.9% of the editorials in which he appeared.

However, one of the 13 newspapers examined endorsed Sarvis for governor — the Register & Bee in Danville, Virginia. Four others advocated for him to be included in the gubernatorial debates while the editorials of nine newspapers ignored his exclusion from the debates.

Kirch blames horse race coverage as “a factor in why minor parties are ignored, with scholarship showing that third-party candidates are left on the sidelines because they rarely meet the metrics the news media use to measure the contest aspects of a campaign, such as fundraising abilities and poll support.”

Projecting Confidence: How the Probabilistic Horse Race Confuses and Demobilizes the Public
Sean Jeremy Westwood, Solomon Messing and Yphtach Lelkes. The Journal of Politics, 2020.

This paper examines problems associated with probabilistic forecasting — a type of horse race journalism that has grown more common in recent years. These forecasts “aggregate polling data into a concise probability of winning, providing far more conclusive information about the state of a race,” write authors Sean Jeremy Westwood, an associate professor of government at Dartmouth College, Solomon Messing, a senior engineering manager at Twitter, and Yphtach Lelkes, an associate professor of communication at the University of Pennsylvania.

The researchers find that probabilistic forecasting discourages voting, likely because people often decide to skip voting when their candidate has a very high chance of winning or losing. They also learned this type of horse race reporting is more prominent in news outlets with left-leaning audiences, including FiveThirtyEight, The New York Times and HuffPost.

Westwood, Messing and Lelkes point out that probabilistic forecasting might have contributed to Hillary Clinton’s loss of the 2016 presidential election. They write that “forecasts reported win probabilities between 70% and 99%, giving Clinton an advantage ranging from 20% to 49% beyond 50:50 odds. Clinton ultimately lost by 0.7% in Pennsylvania, 0.2% in Michigan, 0.8% in Wisconsin, and 1.2% in Florida.”

The Consequences of Strategic News Coverage for Democracy: A Meta-Analysis
Alon Zoizner. Communication Research, 2021.

This paper examines what was known about the consequences of horse race journalism at the time it was written. Although the paper first appeared on the Communication Research journal’s website in 2018, it wasn’t published in an issue of the journal until 2021. In the academic article, Alon Zoizner, an assistant professor of communication at the University of Haifa, Israel, analyzes 32 studies published or released from 1997 to 2016 that examine the effects of “strategic news” coverage. He describes strategic news coverage as the “coverage of politics [that] often focuses on politicians’ strategies and tactics as well as their campaign performance and position at the polls.”

Among the key takeaways: This type of reporting elevates the public’s cynicism toward politics and the issues featured as part of that coverage.

“In other words,” Zoizner writes, “this coverage leads to a specific public perception of politics that is dominated by a focus on political actors’ motivations for gaining power rather than their substantive concerns for the common good.”

He adds that young people, in particular, are susceptible to the effects of strategic news coverage because they have limited experience with the democratic process. They “may develop deep feelings of mistrust toward political elites, which will persist throughout their adult lives,” Zoizner writes.

His analysis also reveals that this kind of reporting results in an uninformed electorate. The public receives less information about public policies and candidates’ positions on important issues.

“This finding erodes the media’s informative value because journalists cultivate a specific knowledge about politics that fosters political alienation rather than helping citizens make rational decisions based on substantive information,” the author writes. Framing politics as a game to be won “inhibits the development of an informed citizenship because the public is mostly familiar with the political rivalries instead of actually knowing what the substantive debate is about.”

Another important discovery: Strategic news coverage hurts news outlets’ reputations. People exposed to it “are more critical of news stories and consider them to be less credible, interesting, and of low quality,” Zoizner explains. “Strategic coverage will continue to be a part of the news diet but in parallel will lead citizens to develop higher levels of cynicism and criticism not only toward politicians but also toward the media.”

News Coverage of the 2016 Presidential Primaries: Horse Race Reporting Has Consequences
Thomas E. Patterson. Harvard Kennedy School working paper, 2016.

Horse race reporting gave Donald Trump an advantage during the 2016 presidential primary season, this working paper finds. Nearly 60% of the election news analyzed during this period characterized the election as a competitive game, with Trump receiving the most coverage of any candidate seeking the Republican nomination. In the final five weeks of the primary campaign, the press gave him more coverage than Democratic frontrunners Hillary Clinton and Bernie Sanders.

“The media’s obsession with Trump during the primaries meant that the Republican race was afforded far more coverage than the Democratic race, even though it lasted five weeks longer,” writes Patterson, who looked at election news coverage provided by eight major print and broadcast outlets over the first five months of 2016. “The Republican contest got 63 percent of the total coverage between January 1 and June 7, compared with the Democrats’ 37 percent — a margin of more than three to two.”

Patterson’s paper takes a detailed look at the proportion and tone of coverage for Republican and Democratic candidates during each stage of the primary campaign. He notes that the structure of the nominating process lends itself to horse race reporting. “Tasked with covering fifty contests crammed into the space of several months,” he writes, “journalists are unable to take their eyes or minds off the horse race or to resist the temptation to build their narratives around the candidates’ position in the race.”

Patterson explains how horse race journalism affects candidates’ images and can influence voter decisions. “The press’s attention to early winners, and its tendency to afford them more positive coverage than their competitors, is not designed to boost their chances, but that’s a predictable effect,” he writes. He points out that a candidate who’s performing well usually is portrayed positively while one who isn’t doing as well “has his or her weakest features put before the public.”

Patterson asserts that primary election coverage is “the inverse of what would work best for voters.” “Most voters don’t truly engage the campaign until the primary election stage,” he writes. “As a result, they enter the campaign nearly at the point of decision, unarmed with anything approaching a clear understanding of their choices. They are greeted by news coverage that’s long on the horse race and short on substance … It’s not until later in the process, when the race is nearly settled, that substance comes more fully into the mix.”

What Predicts the Game Frame? Media Ownership, Electoral Context, and Campaign News
Johanna Dunaway and Regina G. Lawrence. Political Communication, 2015.

Corporate-owned and large-chain newspapers were more likely to publish stories that frame elections as a competitive game than newspapers with a single owner, according to this study. The authors find that horse race coverage was most prevalent in close races and during the weeks leading up to an election.

Researchers Johanna Dunaway, an associate professor of communication at Texas A&M University, and Regina G. Lawrence, associate dean of the University of Oregon School of Journalism and Communication in Portland, looked at print news stories about elections for governor and U.S. Senate in 2004, 2006 and 2008. They analyzed 10,784 articles published by 259 newspapers between Sept. 1 and Election Day of those years.

Their examination reveals that privately-owned, large-chain publications behave similarly to publications controlled by shareholders. “We expected public shareholder-controlled news organizations to be most likely to resort to game-framed news because of their tendency to emphasize the profit motive over other goals; in fact, privately owned large chains are slightly more likely to use the game frame in their campaign news coverage at mean levels of electoral competition,” Dunaway and Lawrence write.

They note that regardless of a news outlet’s ownership structure, journalists and audiences are drawn to the horse race in close races. “Given a close race, newspapers of many types will tend to converge on a game-framed election narrative and, by extension, stories focusing on who’s up/who’s down will crowd out stories about the policy issues they are presumably being elected to address,” the authors write. “And, as the days-’til-election variable shows, this pattern will intensify across the course of a close race.”

Gender Bias and Mainstream Media
Meredith Conroy. Chapter in the book Masculinity, Media, and the American Presidency, 2015.

In this book chapter, Meredith Conroy, an associate professor of political science at California State University, San Bernardino, draws on earlier research that finds horse race coverage is more detrimental to women than men running for elected office. She explains that female candidates often emphasize their issue positions as a campaign strategy to bolster their credibility.

“If the election coverage neglects the issues, women may miss out on the opportunity to assuage fears about their perceived incompetency,” she writes. She adds that when the news “neglects substantive coverage, the focus turns to a focus on personality and appearance.”

“An overemphasis on personality and appearance is detrimental to women, as it further delegitimizes their place in the political realm, more so than for men, whose negative traits are still often masculine and thus still relevant to politics,” she writes.

Contagious Media Effects: How Media Use and Exposure to Game-Framed News Influence Media Trust
David Nicolas Hopmann, Adam Shehata and Jesper Strömbäck. Mass Communication and Society, 2015.

How does framing politics as a strategic game influence the public’s trust in journalism? This study of Swedish news coverage suggests it lowers trust in all forms of print and broadcast news media — except tabloid newspapers.

The authors note that earlier research indicates people who don’t trust mainstream media often turn to tabloids for news. “By framing politics as a strategic game and thereby undermining trust not only in politics but also in the media, the media may thus simultaneously weaken the incentives for people to follow the news in mainstream media and strengthen the incentives for people to turn to alternative news sources,” write the authors, David Nicolas Hopmann, an associate professor at University of Southern Denmark, Adam Shehata, a senior lecturer at the University of Gothenburg, and Jesper Strömbäck, a professor at the University of Gothenburg.

The three researchers analyzed how four daily newspapers and three daily “newscasts” covered the 2010 Swedish national election campaign. They also looked at the results of surveys aimed at measuring people’s attitudes toward the Swedish news media in the months leading up to and immediately after the 2010 election. The sample comprised 4,760 respondents aged 18 to 74.

Another key takeaway of this study: The researchers discovered that when people read tabloid newspapers, their trust in them grows as does their distrust of the other media. “Taken together, these findings suggest that the mistrust caused by the framing of politics as a strategic game is contagious in two senses,” they write. “For all media except the tabloids, the mistrust toward politicians implied by the framing of politics as a strategic game is extended to the media-making use of this particular framing, whereas in the case of the tabloids, it is extended to other media.”

How journalists use opinion polls

Transforming Stability into Change: How the Media Select and Report Opinion Polls
Erik Gahner Larsen and Zoltán Fazekas. The International Journal of Press/Politics, 2020.

This paper demonstrates journalists’ difficulty interpreting public opinion polls. It finds news outlets often reported changes in voter intent when no statistically significant change had actually occurred.

The authors write that they examined political news in Denmark because news outlets there provide relatively neutral coverage and don’t have partisan leanings. They looked at news coverage of polls of voter intent conducted by eight polling firms for eight political parties from 2011 to 2015. Their analysis focuses on 4,147 news articles published on the websites of nine newspapers and two national TV companies.

The researchers learned that journalists tended to report on polls they perceived as showing the largest changes in public opinion. Single outlier polls also got a lot of attention. Not only did many news articles erroneously report a change in public opinion, they often quoted politicians reacting as though a change had occurred, potentially misleading audiences further. Journalists also avoided reporting information on the margin of error for the poll results.

In most cases, the news stories should have been about stability in public opinion, note the authors, Erik Gahner Larsen, senior scientific adviser at the Conflict Analysis Research Centre at the University of Kent in the United Kingdom, and Zoltán Fazekas, an associate professor of business and politics at Copenhagen Business School.

“However, 58 percent of the articles mention change in their title,” they write. “Furthermore, while 82 percent of the polls have no statistically significant changes, 86 percent of the articles do not mention any considerations related to uncertainty.”
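One safeguard against this kind of story is to check whether the movement between two polls is larger than the uncertainty of the difference between them. The sketch below uses the textbook formula for two independent simple random samples; because real polls’ margins are somewhat wider once weighting is accounted for, any change that fails even this generous test should not be reported as a real shift.

import math

def change_is_detectable(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> bool:
    """True if the gap between two independent poll results exceeds its own margin of error."""
    se1 = math.sqrt(p1 * (1 - p1) / n1)
    se2 = math.sqrt(p2 * (1 - p2) / n2)
    return abs(p1 - p2) > z * math.sqrt(se1 ** 2 + se2 ** 2)

# A party moving from 24% to 26% across two polls of 1,000 respondents each
# is well within statistical noise:
print(change_is_detectable(0.24, 1000, 0.26, 1000))  # False

# A move from 24% to 30% is large enough to treat as a real change:
print(change_is_detectable(0.24, 1000, 0.30, 1000))  # True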

The ‘Nate Silver Effect’ on Political Journalism: Gatecrashers, Gatekeepers, and Changing Newsroom Practices Around Coverage of Public Opinion Polls
Benjamin Toff. Journalism, 2019.

This study, based on in-depth interviews with 41 U.S. journalists, media analysts and public opinion pollsters, documents changes in how news outlets cover public opinion. It reveals, among other things, “evidence of eroding internal newsroom standards about which polls to reference in coverage and how to adjudicate between surveys,” writes the author, Benjamin Toff, an assistant professor at the University of Minnesota’s Hubbard School of Journalism and Mass Communication. Toff notes that journalists’ focus on polling aggregator websites paired with the growing availability of online survey data has resulted in an overconfidence in polls’ ability to predict election outcomes — what one reporter he interviewed called the “Nate Silver effect.”

Both journalists and polling professionals expressed concern about journalists’ lack of training and their reliance on poll firms’ reputations as evidence of poll quality rather than the poll’s sampling design and other methodological details. Toff, who completed the interviews between October 2014 and May 2015, points out that advocacy organizations can take advantage of the situation to get reporters to unknowingly disseminate their messages.

The study also finds that younger journalists and those who work for online news organizations are less likely to consider it their job to interpret polls for the public. One online journalist, for example, told Toff that readers should help determine the reliability of poll results and that “in a lot of ways Twitter is our ombudsman.”

Toff calls on academic researchers to help improve coverage of public opinion, in part by offering clearer guidance on best practices for news reporting. “The challenge of interpreting public opinion is a collective one,” he writes, “and scholarship which merely chastises journalists for their shortcomings does not offer a productive path forward.”

News Reporting of Opinion Polls: Journalism and Statistical Noise
Yosef Bhatti and Rasmus Tue Pedersen. International Journal of Public Opinion Research, 2016.

This paper, which also looks at news coverage of opinion polls in Denmark, finds that Danish journalists don’t do a great job reporting on opinion polls. Most journalists whose work was examined don’t seem to understand how a poll’s margin of error affects its results. Also, they often fail to explain to their audiences the statistical uncertainty of poll results, according to the authors, Yosef Bhatti of Roskilde University and Rasmus Tue Pedersen of the Danish Center for Social Science Research.

The two researchers analyzed the poll coverage provided by seven Danish newspapers before, during and after the 2011 parliamentary election campaign — a 260-day period from May 9, 2011 to Jan. 23, 2012. A total of 1,078 articles were examined.

Bhatti and Pedersen find that journalists often interpreted two poll results as different from each other when, considering the poll’s uncertainty, it actually was unclear whether one result was larger or smaller than the other. “A large share of the interpretations made by the journalists is based on differences in numbers that are so small that they are most likely just statistical noise,” they write.

They note that bad poll reporting might be the result of journalists’ poor statistical skills. But it “may also be driven by journalists’ and editors’ desires for interesting horse race stories,” the authors add. “Hence, the problem may not be a lack of methodological skills but may also be caused by a lack of a genuine adherence to the journalistic norms of reliability and fact-based news. If this is the case, unsubstantiated poll stories may be a more permanent and unavoidable feature of modern horse race coverage.”

‘Horse race’ coverage of elections: What to avoid and how to get it right https://journalistsresource.org/politics-and-government/horse-race-coverage-elections-improve/ Thu, 12 Oct 2023 13:00:00 +0000 https://journalistsresource.org/?p=70621 It's unlikely journalists will stop covering elections as a competitive game, despite researchers' warnings that it can harm voters and others. Two scholars offer ideas for at least improving so-called 'horse race' reporting.


We updated this tip sheet on ‘horse race’ coverage, originally published in April 2022, on Oct. 23, 2023, to include new hyperlinks and other information.

U.S. newsrooms have been amply criticized for years for covering elections as a competitive game, with a focus on who’s winning and losing instead of on candidates’ policy positions. Despite research documenting the various ways so-called “horse race” reporting can hurt voters, candidates and even news outlets themselves, it’s unlikely journalists will stop.

In fact, horse race coverage of elections has grown more common over the years, thanks in part to the dramatic rise in public opinion polls, which allow journalists to track and quantify voter support for specific candidates.

Harvard Kennedy School media scholar Thomas E. Patterson, who has studied election coverage for decades, has warned that news outlets fail their audiences when they prioritize poll results and campaign strategy over discussions about candidate qualifications, leadership styles and policy positions.

Horse race coverage is partly to blame for “the car wreck that was the 2016 election,” Patterson writes in a December 2016 working paper, “News Coverage of the 2016 General Election: How the Press Failed the Voters.”

“In the 2016 general election, policy issues accounted for 10% of the news coverage — less than a fourth the space given to the horserace,” writes Patterson, the Bradlee Professor of Government and the Press at Harvard’s Shorenstein Center on Media, Politics and Public Policy.

While many scholars and industry leaders argue news outlets should curb or eliminate horse race coverage, some acknowledge they would have fewer concerns if news stories were more accurate. Multiple studies published over the last decade point out problems in the way journalists interpret and report the results of opinion polls.

“We’re not necessarily against horse race journalism, but we should be thinking about, ‘Why does it look the way it does?’ and ‘How can it be improved?’” says researcher Erik Gahner Larsen, who studies journalists’ use of opinion polls and co-wrote a book about it, Reporting Public Opinion: How the Media Turns Boring Polls into Biased News, released in 2021.

We asked Patterson and Larsen for their ideas on how newsrooms could improve horse race coverage. Both shared insights and advice on what journalists should avoid and how to get it right.

WHAT TO AVOID: Reporting on any opinion poll you come across.
HOW TO GET IT RIGHT: Scrutinize and compare opinion polls to gauge their quality. Rely most often on those conducted by reputable pollsters.

Patterson and Larsen urge journalists to pay attention to the details of a poll, including the questions asked, when the poll was conducted, how many people participated and how well that group represents the population as a whole. When comparing polls, keep in mind many factors can lead to differences in results.

For example, the way pollsters word their questions and the order in which they ask them can affect how people respond. Timing also can influence results. Two polls conducted just days or weeks apart can get drastically different results, especially if a significant event altered the public’s opinion or perception about the subject of the polls.

Patterson suggests journalists rely on poll results from firms with a long history of high-quality work and use caution when covering results from entities with less experience and expertise. He identifies a number of reputable organizations that conduct national polls in the U.S.

WHAT TO AVOID: Focusing on a single opinion poll — especially outliers — without providing context.

HOW TO GET IT RIGHT: When covering an individual poll, put its findings into perspective by noting historic trends and what other recent polls have found. Consider combining poll results and reporting averages to give audiences the most accurate picture of public sentiment.

“Don’t just cover one, but look at the full picture,” Larsen says. “Acknowledge the existence of other opinion polls. How does [this poll] compare to long-term trends?”

He advises against overplaying outliers — polls with results that differ substantially from or even contradict the findings of most other polls. While journalists and audiences might find polls showing major changes more interesting, their findings probably are not reliable and might be a statistical fluke, Larsen explains.

Combining poll results and reporting on averages would offer audiences the best understanding of public opinion at a given point in time. Only some news organizations have the technical expertise to perform such analyses, however.

For journalists who need help calculating weighted averages, Larsen recommends reaching out to a pollster or statistician.
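
For newsrooms that want to experiment with averaging, the arithmetic itself is simple. Below is a minimal Python sketch, assuming a handful of hypothetical polls and weighting only by sample size; professional aggregators also adjust for recency, pollster track record and methodology, so treat this as an illustration of the idea rather than a production model.

```python
# Minimal sketch: combine several hypothetical polls into one weighted average.
# All poll figures are invented; weighting by sample size is one simple choice.

polls = [
    # (pollster, sample_size, pct_support_for_candidate_a)
    ("Poll 1", 1200, 48.0),
    ("Poll 2", 800, 51.0),
    ("Poll 3", 2000, 47.5),
]

total_respondents = sum(n for _, n, _ in polls)
weighted_average = sum(n * pct for _, n, pct in polls) / total_respondents

simple_average = sum(pct for _, _, pct in polls) / len(polls)
print(f"Simple average:   {simple_average:.1f}%")
print(f"Weighted average: {weighted_average:.1f}%  (weighted by sample size)")
```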

WHAT TO AVOID: Covering poll results without taking into account the poll’s margin of error.
HOW TO GET IT RIGHT: Learn what a margin of error is and how it relates to polling and poll results. Make sure news stories featuring poll results reflect their margins of error.

The margin of error, typically expressed as plus or minus a certain number of percentage points, indicates how closely the opinions of the people who participated in an opinion poll are likely to reflect the opinions of the population as a whole.

When journalists ignore or overlook a poll’s margin of error — the topic of this journalism tip sheet — their coverage often misrepresents the results. One of the most common mistakes journalists make: Reporting that a particular political candidate has more or less voter support than another when, in fact, considering the poll’s margin of error, it’s simply too close to tell.

For example, let’s say a polling firm asks a nationally representative sample of U.S. voters whether they would choose Candidate A or Candidate B in an election. Let’s also say 51% of those voters pick Candidate A, 49% select Candidate B and the poll’s margin of error is plus or minus 4 percentage points. Many journalists would report that most voters prefer Candidate A or that Candidate A has the lead, neither of which is correct.

The correct interpretation of this poll: If this polling firm had asked the same question of every registered voter in the U.S., the actual share of all voters who prefer Candidate A likely falls somewhere between 47% and 55%, and the actual percentage preferring Candidate B likely ranges between 45% and 53%. In this case, journalists should report that it’s unclear which candidate has greater support. It’s also accurate to say the two candidates are “statistically tied.”
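
One quick way to check whether a reported lead exceeds a poll’s uncertainty is to compare the two candidates’ implied ranges, as in the Python sketch below. It uses the hypothetical 51% to 49% poll above; treating overlapping ranges as “too close to call” is a deliberately cautious simplification rather than a formal significance test, since the two shares come from the same sample.

```python
# Hypothetical poll from the example above: 51% vs. 49%, margin of error +/- 4 points.
candidate_a, candidate_b, margin_of_error = 51.0, 49.0, 4.0

a_low, a_high = candidate_a - margin_of_error, candidate_a + margin_of_error
b_low, b_high = candidate_b - margin_of_error, candidate_b + margin_of_error

print(f"Candidate A: likely between {a_low:.0f}% and {a_high:.0f}%")
print(f"Candidate B: likely between {b_low:.0f}% and {b_high:.0f}%")

# If the ranges overlap, the poll alone cannot tell us who is ahead.
if a_low <= b_high and b_low <= a_high:
    print("The ranges overlap: report the race as too close to call.")
else:
    print("The gap exceeds the poll's uncertainty: one candidate clearly leads.")
```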

When Larsen and a fellow researcher studied news coverage of polls in Denmark, they learned that journalists there tended to focus on polls they perceived as showing the biggest changes. In the resulting paper, published in 2020, Larsen and his colleague note that most of the 4,147 print and TV news stories they reviewed had erroneous descriptions of differences in poll results.

Often, journalists reported changes in poll results when no change actually occurred, says Larsen, senior scientific adviser at the Conflict Analysis Research Centre at the University of Kent in the United Kingdom.

He encourages journalists to ask experts for help describing poll results.

“I think that it’s good advice to say go to political scientists and experts and statisticians — when in doubt, it’s good to reach out to professionals,” he says. “It can easily be complicated stuff.”

WHAT TO AVOID: Assuming that simply cutting coverage of opinion polls will improve election news and lead to a more informed electorate.
HOW TO GET IT RIGHT: Recognize that horse race reporting takes several forms and audiences seek it out because they’re drawn to competitions. Make horse race coverage more valuable by incorporating information voters need to make their choices.

National election coverage focuses heavily on opinion polls, but at the state and local level, polling is far less common. In those races, journalists use other methods to measure public support and answer the ever-present questions “Who’s winning?” and “Who’s losing?”

One way they do that is by monitoring candidates’ fundraising activities and periodically comparing how much money they have raised and spent. Another way to gauge who’s ahead: Tracking candidates’ success in drawing support from influential community leaders, legislators and groups such as teacher unions and law enforcement associations.

In some parts of the U.S., local organizations hold straw polls, either online or at in-person events, to get a sense of who voters favor. Local newsrooms sometimes report the results of these informal vote tallies.

Over the years, journalism organizations such as the Poynter Institute and industry critics such as New York Magazine columnist Ed Kilgore and pollster Mark Blumenthal have offered ideas for improving horse race journalism in its various forms.

Blumenthal, the former senior polling editor for The Huffington Post, suggests news outlets incorporate coverage of candidates’ qualifications and policy proposals into their horse race coverage.

Audiences seek out horse race stories because they find them more interesting than stories summarizing candidates’ issue positions, he writes in a piece published by NBC News. He adds that journalists should “use the drama of the horse race to draw readers into coverage that connects campaign strategies to the underlying contrasts (on issues, qualifications, leadership styles) between the candidates.”

“If a story attracts readers or viewers interested in ‘who is going to win,’ how well does that story highlight the debate between the candidates?” Blumenthal writes. “How well does it use the tools of its particular medium (hyperlinks, sidebars or on-air references to Web site URLs) to promote stories or resources that give uncertain voters ‘what they need to know’ to make better decisions?”

Roy Peter Clark, a senior scholar and writing instructor at Poynter, has recommended political journalists look to their colleagues who cover sports for ideas on how to revamp horse race reporting.

In “In Defense of the Horse Race,” published on Poynter’s website in 2008, Clark praises The Boston Globe’s Super Bowl coverage, pointing out that football fans interact energetically with the Globe’s website. It offers traditional coverage of the game event as well as opportunities for audience members to share opinions and engage with one another.

“What if we imagined the coverage of Super Tuesday the way we experience the Super Bowl?” Clark asks.

He writes that journalists could use horse race coverage to grab audiences’ attention and direct them toward more in-depth coverage.

“If the contest is taut, competitive and exciting, we’ll sit riveted to find out what will happen next,” he writes.

Election Beat 2020: Polls, polls and more polls — navigating the numbers https://journalistsresource.org/politics-and-government/election-beat-2020-polls-polls-and-more-polls-navigating-the-numbers/ Tue, 13 Oct 2020 11:16:00 +0000 https://live-journalists-resource.pantheonsite.io/?p=66135 As Election Day has drawn closer, opinion polls have taken up ever more of the news hole. Which of the dozens of polls that cross journalists’ desks are reliable, and which should be ignored?


As Election Day has drawn closer, opinion polls have taken up ever more of the news hole. Which of the dozens of polls that cross journalists’ desks are reliable, and which should be ignored?

In 2016, the expectation of a Clinton victory, derived from polls, led to a flurry of finger pointing. “How did everyone get it so wrong?” blared a Politico headline. Some of the final pre-election polls were far off the mark. Most of these were state-level polls. The national polls, on the other hand, were relatively accurate. Most of them had Hillary Clinton ahead by a couple of percentage points, which was roughly her popular-vote margin.

The best national polls — the Wall Street Journal/NBC News poll being an example — have a remarkable track record in estimating the final presidential vote. One reason is that they have high methodological standards and rely on live interviewers rather than automated callers. As well, in their final poll and sometimes earlier ones, they sample a large number of respondents. Sampling error — the degree to which a sample is likely to represent what the electorate as a whole is thinking — is a function of sample size. Everything else being equal, the larger the sample, the smaller the sampling error. Additionally, leading national polls have data from past elections that has enabled them after the fact to test alternative estimation models in order to discover which ones yield the most precise predictions. How much weight, for example, should be placed on respondents’ stated vote intention relative to the strength of their party identification? It’s the weighted results that journalists receive, which leaves them somewhat at the mercy of pollsters. Pollsters do not routinely disclose the weights they have used in translating a poll’s raw data into the results that are made public.

National polls also serve as checks on one another. Through the 1970s, national polls were few in number. Since the 1990s, more than 200 such polls have been conducted during each presidential general election. It’s easy to identify an outlier — a poll with findings that are markedly at odds with those of other polls. The Rasmussen poll, for instance, is often an outlier, sometimes by a large amount, typically in the Republican direction. The proliferation of polls has also allowed estimates to be derived by aggregating the polls — a methodology applied, for example, by FiveThirtyEight’s Nate Silver.
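
A rough way to spot an outlier is to compare each poll with the average of the other recent polls, as in the Python sketch below. The figures and the three-point threshold are invented for illustration; real aggregators make far more sophisticated adjustments for pollster quality, timing and methodology.

```python
# Toy outlier check: flag any poll that sits well away from the average of the others.
recent_polls = {
    "Poll A": 47.0,
    "Poll B": 48.5,
    "Poll C": 46.5,
    "Poll D": 53.0,   # deliberately out of line with the rest
}

THRESHOLD = 3.0  # percentage points; an arbitrary cutoff for this illustration

for name, result in recent_polls.items():
    others = [value for other, value in recent_polls.items() if other != name]
    average_of_others = sum(others) / len(others)
    if abs(result - average_of_others) > THRESHOLD:
        print(f"{name} ({result:.1f}%) is {result - average_of_others:+.1f} points from "
              f"the average of the other polls ({average_of_others:.1f}%): likely an outlier.")
```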

Silver has developed a grading system for high-frequency polling organizations by comparing their poll results to actual election results. The highest-graded national polls tend to be those that are university-based, such as the Monmouth Poll. Polls that rely on live interviewers also tend to get high grades. Those that rely on automated questioning tend to be less accurate. The lowest-graded national polls tend to be online-only polls like Survey Monkey and Google Consumer Surveys. Although the accuracy of online polling has increased over time as estimation models have improved, online polls are still markedly less accurate on average than polls that employ more traditional methods.

In general, state-level polls tend to have weaker track records than national polls. Many of them are conducted by organizations that don’t poll regularly and lack sophisticated models for weighting the results. Some of the state-level polls during the 2016 election, for example, failed even to correct for the fact that their samples included a disproportionately high number of college-educated respondents. Budgetary constraints also affect many state-wide polls. If they rely on live interviewers, they tend to use relatively small samples to reduce the cost. Some of them use less-reliable automated calling methods in order to survey a larger number of respondents. Then, too, relatively few polls are conducted in most states, which reduces the possibility of judging a poll by comparing its results to those of other polls. Moreover, when there are comparators, there’s a question of whether they are reliable enough to serve that purpose. Many of the newer state polls are low-budget affairs made possible by such developments as robocalls and online surveying.
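
The education problem mentioned above is typically handled by weighting: each group of respondents counts in proportion to its share of the target population rather than its share of the sample. The Python sketch below shows the basic idea with invented numbers; actual pollsters weight on several variables at once, using census or voter-file benchmarks.

```python
# Hypothetical raw sample that over-represents college graduates (60% vs. 40% in the population).
sample_share = {"college": 0.60, "non_college": 0.40}      # who answered the poll
population_share = {"college": 0.40, "non_college": 0.60}  # who the poll should represent
support_in_group = {"college": 0.55, "non_college": 0.45}  # hypothetical candidate support

# Unweighted estimate simply mirrors the lopsided sample.
unweighted = sum(sample_share[g] * support_in_group[g] for g in sample_share)

# Weighted estimate: scale each group by population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * support_in_group[g] for g in sample_share)

print(f"Unweighted support estimate: {unweighted:.1%}")
print(f"Weighted support estimate:   {weighted:.1%}")
```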

There are some solid state-wide polls, including the multi-state polls conducted by the New York Times in collaboration with Siena College. These polls are methodologically rigorous but otherwise reflect the tradeoffs characteristic of most state polls. Compared with the New York Times/Siena College national polls, the state-level polls typically sample only half as many respondents — meaning that the state polls have a larger sampling error.

In 2016, the national popular vote was within the range that allowed for the possibility that the popular vote winner would lose the electoral college vote. The larger the national margin, the smaller the likelihood of such an outcome. As the campaign unfolds over its final weeks, journalists will need to take that statistical tendency into account, just as they’ll need to consider the profiles, methods, and track records of the polls they cite.

Thomas E. Patterson is Bradlee Professor of Government & the Press at Harvard’s Kennedy School and author of the recently published Is the Republican Party Destroying Itself? Journalist’s Resource plans to post a new installment of his Election Beat 2020 series every week leading up to the 2020 U.S. election. Patterson can be contacted at thomas_patterson@harvard.edu.

Further reading:

“FiveThirtyEight’s Pollster Ratings,” FiveThirtyEight, May 19, 2020.

Will Jennings, Michael Lewis-Beck, and Christopher Wlezien, “Election forecasting: Too far out?” International Journal of Forecasting 36, 2020.

Costas Panagopoulos, Kyle Endres, and Aaron C. Weinschenk, “Preelection poll accuracy and bias in the 2016 U.S. general elections,” Journal of Elections, Public Opinion and Parties 28, 2018.

Thomas E. Patterson, “Of Polls, Mountains: U.S. Journalists and Their Use of Election Surveys,” Public Opinion Quarterly 69, 2005.

Covering political polls: A cautionary research roundup https://journalistsresource.org/politics-and-government/research-roundup-political-polling/ Thu, 25 Apr 2019 20:16:15 +0000 https://live-journalists-resource.pantheonsite.io/?p=59024 Journalist's Resource rounds up some of the latest political polling research as Joe Biden jumps into the 2020 presidential race.


On April 25, 2019, former Vice President Joe Biden became the latest big-name politician to join the race for the 2020 Democratic Party presidential nomination. Among Democratic voters, he leads the field over the next most popular candidate, Vermont Sen. Bernie Sanders, by 7 percentage points — with a sampling margin of error of 5.4 percentage points — according to a recent poll from Monmouth University.

But public and media perception has been burned by polls before — see the 2016 presidential election — and there’s still a long, long way to go before the Democratic field is settled. Donald Trump officially became the Republican Party nominee for president in July 2016, but a year prior there were still 16 other candidates angling for the nomination.

Precisely because there are still so many town halls and county fairs to come for the Democratic contenders, we’re rounding up some recent academic research that can inform coverage of political opinion polls in this early presidential contest. This research digs into bias in evaluating political polling, polling errors across time and space, the relationship between media coverage and polling, and more.

All the Best Polls Agree with Me: Bias in Evaluations of Political Polling

Madson, Gabriel J.; Hillygus, D. Sunshine. Political Behavior. February 2019.

The credibility of a poll comes down to survey methods, the pollster’s reputation and how transparent the pollster is with their data. Does the public care about any of that? The authors conducted two surveys with a total of 2,048 participants — 600 recruited from Amazon Mechanical Turk and 1,448 from the national Cooperative Congressional Election Study. They found participants perceived polls to be more credible when polls agreed with their opinions, and less credible when polls disagreed.

“Polls are not treated as objective information,” the authors write.

Disentangling Bias and Variance in Election Polls

Shirani-Mehr, Houshmand; et al. Journal of the American Statistical Association. July 2018.

Margins of error indicate the precision of polling estimates. The margin of error of an opinion poll says something about how closely the poll’s results are likely to match reality. A larger sample typically will come with a smaller margin of error, while a smaller sample means a larger margin of error.

Confidence intervals and margins of error go hand-in-hand. The final Gallup poll before the 2012 election showed Mitt Romney with 49% of the popular vote and Barack Obama with 48%. The poll had a 95% confidence interval and a 2 percentage point margin of error. So, Gallup was 95% confident the election would end with Romney winning 51% to 46%, Romney losing 47% to 50% or somewhere in the middle. In the end, Obama outperformed Gallup’s confidence interval with 51% of the popular vote, while Romney got 47%.
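
The arithmetic behind that interpretation is easy to verify. This short Python check reproduces the ranges implied by the reported numbers, using the rounded vote shares cited above.

```python
# Gallup's final 2012 poll: Romney 49%, Obama 48%, margin of error +/- 2 points.
romney_poll, obama_poll, margin_of_error = 49, 48, 2
romney_actual, obama_actual = 47, 51  # rounded popular-vote shares cited above

print(f"Implied range for Romney: {romney_poll - margin_of_error}% to {romney_poll + margin_of_error}%")
print(f"Implied range for Obama:  {obama_poll - margin_of_error}% to {obama_poll + margin_of_error}%")

# Obama's actual share landed outside the range implied by the poll.
if not (obama_poll - margin_of_error <= obama_actual <= obama_poll + margin_of_error):
    print(f"Obama's actual {obama_actual}% falls outside the implied range.")
```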

Political polls typically report margins of error related only to sample size. For that reason they often underestimate their uncertainty, according to the authors. For example, there may be errors because pollsters don’t know the number of people in their target population who will vote.

The authors analyzed 4,221 polls across 608 state-level presidential, senatorial and gubernatorial elections from 1988 to 2014. The polls were conducted within the last three weeks of campaigns.

On average, they find a 3.5 percentage point difference between poll results and election outcomes, “about twice the error implied by most confidence intervals,” the authors write.

“At the very least, these findings suggest that care should be taken when using poll results to assess a candidate’s reported lead in a competitive race.”

Election Polling Errors Across Time and Space

Jennings, Will; Wlezien, Christopher. Nature Human Behaviour. March 2018.

The authors look at more than 30,000 national polls from 351 elections across 45 countries from 1942 to 2017. They find that national polls taken from 2015 to 2017 performed in line with historical norms. But polls that asked about the largest political parties tended to be less accurate than those asking about smaller parties.

“These errors are most consequential when elections are close, as they can be decisive for government control,” the authors write.

While the reputation of an individual pollster matters when evaluating poll results, the authors find that presidential polls conducted 200 days out from a presidential election were generally less accurate than those conducted closer to Election Day.

Partisan Mathematical Processing of Political Polling Statistics: It’s the Expectations That Count

Niemi, Laura; et al. Cognition. May 2019.

Polling results bombard the public during presidential campaigns, and it can be difficult for voters to process that information. The authors surveyed 437 participants recruited from MTurk and find that for the 2012 and 2016 presidential elections, those who had committed to a particular candidate underestimated their opponents — even in the face of conflicting polling information. Those who didn’t actually think their candidate would win did not succumb to the same cognitive dissonance.

Mass Media and Electoral Preferences During the 2016 U.S. Presidential Race

Wlezien, Christopher; Soroka, Stuart. Political Behavior. June 2018.

Does the dog wag the tail, or is it the other way around? The authors compare polling data and nearly 30,000 stories in nine major newspapers across the United States leading up to the 2016 presidential election, to clarify the relationship between media coverage and voter preferences. Their most robust finding indicates coverage at these media outlets followed public opinion. As polls shifted toward or away from a candidate, the tone of that candidate’s media coverage became correspondingly more positive or negative.

“Results speak to the importance of considering media not just as a driver, but also a follower of public sentiment,” the authors write.

Don’t look to polls for yes-or-no answers

For as much as journalists and the public may want political polls to indicate yes-or-no answers, they don’t, they won’t, and they never have. University of Minnesota journalism professor Benjamin Toff put it like this in a March 2018 essay in Political Communication:

“Polls are more pointillism than photorealism; their results are meant to be observed from a distance. One should never mistake these impressionistic representations of public sentiment for the actual thing.”

If you’re curious what went wrong with polling during the 2016 presidential race, check out this postmortem from the American Association for Public Opinion Research. The upshot? National polls were generally accurate, but polls in several key states understated support for Donald Trump.

For more guidance on covering polls, check out 11 questions journalists should ask about public opinion polls and 7 tips related to margin of error. Plus, political involvement during the 2016 presidential election wasn’t very different from previous elections. FiveThirtyEight also offers a good rundown of trustworthy pollsters. Finally, this is how the press failed voters in the 2016 presidential election.

The margin of error: 7 tips for journalists covering polls and surveys https://journalistsresource.org/media/margin-error-journalists-surveys-polls/ Mon, 05 Nov 2018 17:07:01 +0000 https://live-journalists-resource.pantheonsite.io/?p=57694 To help journalists understand margin of error and how to correctly interpret data from surveys and polls, we’ve put together a list of seven tips, including clarifying examples.


Journalists often make mistakes when reporting on data such as opinion poll results, federal jobs reports and census surveys because they don’t quite understand — or they ignore — the data’s margin of error.

Data collected from a sample of the population will never perfectly represent the population as a whole. The margin of error, which depends primarily on sample size, is a measure of how precise the estimate is. The margin of error for an opinion poll indicates how close the match is likely to be between the responses of the people in the poll and those of the population as a whole.

To help journalists understand margin of error and how to correctly interpret data from polls and surveys, we’ve put together a list of seven tips, including clarifying examples.

 

 

  1. Look for the margin of error — and report it. It tells you and your audience how much the results can vary.

Reputable researchers always report margins of error along with their results. This information is important for your audience to know.

Let’s say that 44 percent of the 1,200 U.S. adults who responded to a poll about marijuana legalization said they support legalization. Let’s also say the margin of error for the results is +/- 3 percentage points. The margin of error tells us there’s a high probability that nationwide support for marijuana legalization falls between 41 percent and 47 percent.

 

  2. Remember that the larger the margin of error, the greater the likelihood the survey estimate will be inaccurate.

Assuming that a survey was otherwise conducted properly, the larger the size of a sample, the more accurate the poll estimates are likely to be. As the sample size grows, the margin of error shrinks. Conversely, smaller samples have larger margins of error.

The margin of error for a reliable sample of 200 people is +/- 7.1 percent. For a sample of 4,000 people, it’s 1.6 percent. Many polls rely on samples of around 1,200 to 1,500 people, which have margins of error of approximately +/- 3 percent.
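
Those figures come from the standard formula for the margin of error of a proportion at a 95 percent confidence level, assuming a simple random sample and the worst-case 50/50 split. The small Python helper below reproduces them approximately; published figures can differ slightly because pollsters adjust for their survey design.

```python
import math

def margin_of_error(sample_size: int, z_score: float = 1.96) -> float:
    """Approximate 95% margin of error, in percentage points, for a proportion
    near 50% drawn from a simple random sample of the given size."""
    return 100 * z_score * math.sqrt(0.5 * 0.5 / sample_size)

for n in (200, 1200, 4000):
    print(f"Sample of {n:>5}: +/- {margin_of_error(n):.1f} percentage points")
```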

 

  3. Make sure a political candidate really has the lead before you report it.

If a national public opinion poll shows that political Candidate A is 2 percentage points ahead of Candidate B but the margin of error is +/- 3 percentage points, journalists should report that it’s too close to tell at this point who’s in the lead.

Journalists often feel pressure to give their audience a clear-cut statement about which candidate is ahead. But in this example, the poll result is not clear cut and the journalist should say so. There’s as much news in that claim as there is in the misleading claim that one candidate is winning.

 

  4. Note that there are real trends, and then there are mistaken claims of a trend.

If the results of polls taken over a period are exceedingly close, there is no trend even though the numbers will vary slightly. To take a hypothetical example, imagine that pollsters ask a sample of Florida residents whether they would support a new state sales tax. In January, 31 percent said yes. In July, 33 percent said yes. Imagine now that each poll had a margin of error of +/- 2 percentage points.

If you’re a reporter covering this issue, you want to be able to tell audiences whether public support for this new tax is changing. But in this case, due to the margin of error, you cannot infer a trend. What you can say is that support for a new sales tax is holding steady at about a third of Florida residents.
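
One way to check this before writing the story is to compare the apparent change against the combined uncertainty of the two polls, as in the Python sketch below. Adding the two margins of error is a conservative shortcut rather than a formal significance test, but it is usually enough to separate a real shift from statistical noise.

```python
# Hypothetical Florida sales-tax polls from the example above.
january_support, july_support = 31.0, 33.0   # percent answering yes
margin_of_error = 2.0                        # +/- percentage points for each poll

observed_change = july_support - january_support
combined_uncertainty = 2 * margin_of_error   # conservative rule of thumb

if abs(observed_change) <= combined_uncertainty:
    print(f"A {observed_change:+.0f}-point shift is within the polls' combined "
          f"uncertainty of +/- {combined_uncertainty:.0f} points: no clear trend.")
else:
    print(f"A {observed_change:+.0f}-point shift exceeds the combined uncertainty: "
          "support appears to be changing.")
```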

 

  5. Watch your adjectives. (And it might be best to avoid them altogether.)

When reporting on data that has a margin of error, use care when choosing adjectives. Jonathan Stray, a journalist and computer scientist who’s a research scholar at Columbia Journalism School, highlights some of the errors journalists make when covering federal jobs reports in a piece he wrote for DataDrivenJournalism.net.

Stray explains: “The September 2015 jobs number was 142,000, which news organizations labelled ‘disappointing’ to ‘grim.’ The October jobs number was 271,000, which was reported as ‘strong’ to ‘stellar.’”

Neither characterization makes sense considering those monthly jobs growth numbers, released by the U.S. Bureau of Labor Statistics, had a margin of error of +/- 105,000. (It’s a reason why, when the agency later releases adjusted figures based on additional evidence, the jobs number for a month is often substantially different from what was originally reported.)

Journalists should help their audiences understand how much uncertainty is in the data they use in their reporting — especially if the data is the focus of the story. Stray writes: “This is one example of a technical issue that becomes an ethics issue: ignoring the uncertainty … If we are going to use data to generate headlines, we need to get data interpretation right.”
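
The same logic applies to the jobs figures Stray describes. The Python sketch below uses the numbers quoted above; comparing overlapping ranges is a back-of-the-envelope check, not the Bureau of Labor Statistics’ own statistical test.

```python
# Monthly payroll gains quoted above, each with a margin of error of roughly +/- 105,000.
september_jobs, october_jobs, margin_of_error = 142_000, 271_000, 105_000

september_range = (september_jobs - margin_of_error, september_jobs + margin_of_error)
october_range = (october_jobs - margin_of_error, october_jobs + margin_of_error)

print(f"September: {september_range[0]:,} to {september_range[1]:,}")
print(f"October:   {october_range[0]:,} to {october_range[1]:,}")

# Overlapping ranges mean labels such as 'grim' versus 'stellar' overstate the difference.
if september_range[1] >= october_range[0]:
    print("The ranges overlap: the underlying pace of job growth may not have changed much.")
```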

 

  6. Keep in mind that the margin of error for subgroups of a sample will always be larger than the margin of error for the sample.

As we mentioned above, the margin of error is based largely on sample size. If a researcher surveys 1,000 residents of Los Angeles County, California to find out how many adults there have completed college, the margin of error is going to be slightly more than +/- 3 percent.

But what if researchers want to look at the college completion rate for various demographic groups — for example, black people, women or registered Republicans? In these cases, the margin of error depends on the size of the group. For example, if 200 of those sampled are from a particular demographic group, the estimate of the margin of error in their case will be roughly +/- 7 percent. Again, the margin of error in a sample depends largely on the number of respondents. The smaller the number, the larger the margin of error. That’s true whether you’re talking about the entire sample or a subset of it.
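
The subgroup effect falls out of the same sample-size formula. This brief Python example, assuming the Los Angeles County survey described above, shows how the margin of error roughly doubles when the analysis narrows from 1,000 respondents to a 200-person subgroup.

```python
import math

def margin_of_error(n: int) -> float:
    """Approximate 95% margin of error, in percentage points, for a proportion near 50%."""
    return 100 * 1.96 * math.sqrt(0.25 / n)

print(f"Full sample of 1,000 respondents: +/- {margin_of_error(1000):.1f} points")
print(f"Subgroup of 200 respondents:      +/- {margin_of_error(200):.1f} points")
```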

 

  7. Use caution when comparing results from different polls and surveys, especially those conducted by different organizations.

Although most polling firms have a similar methodology, polls can differ in their particulars. For example, did a telephone poll include those with cell phones or was it limited to those with a landline; was the sample drawn from registered voters only or was it based on adults of voting age; was the wording of the question the same or was it substantially different? Such differences will affect the estimates derived from a poll, and journalists should be aware of them when comparing results from different polling organizations.

 

 

If you’re looking for more guidance on polls, check out these 11 questions journalists should ask about public opinion polls.

 

Journalist’s Resource would like to thank Todd Wallack, an investigative reporter and data journalist on the Boston Globe’s Spotlight team, for his help creating this tip sheet.

11 questions journalists should ask about public opinion polls https://journalistsresource.org/politics-and-government/public-opinion-polls-tips-journalists/ Thu, 14 Jun 2018 14:53:26 +0000 https://live-journalists-resource.pantheonsite.io/?p=56624 Our new tip sheet outlines 11 questions journalists should ask to help them decide how to frame the findings of a public opinion poll — or cover them at all.


Regardless of beat, journalists often write about public opinion polls, which are designed to measure the public’s attitudes about an issue or idea. Some of the most high-profile polls center on elections and politics. Newsrooms tend to follow these polls closely to see which candidates are ahead, who’s most likely to win and what issues voters feel most strongly about.

Other polls also offer insights into how people think. For example, a government agency might commission a poll to get a sense of whether local voters would support a sales tax increase to help fund school construction. Researchers frequently conduct national polls to better understand how Americans feel about public policy topics such as gun control, immigration reform and decriminalizing drug use.

When covering polls, it’s important for journalists to try to gauge the quality of a poll and make sure claims made about the results actually match the data collected. Sometimes, pollsters overgeneralize or exaggerate their findings. Sometimes, flaws in the way they choose participants or collect data make it tough to tell what the results really mean.

Below are 11 questions we suggest journalists ask before reporting on poll results. While most of this information probably won’t make it into a story or broadcast, the answers will help journalists decide how to frame a poll’s findings — or whether to cover them at all.

1. Who conducted the poll?

It’s important to know whether it was conducted by a polling organization, researcher, non-expert, political campaign or advocacy group.

2. Who paid for it?

Was the poll funded by an individual or organization that stands to gain or lose something based on the results?

3. How were people chosen to participate?

The best polls rely on randomly selected participants. Keep in mind that if participants were targeted in some way — for example, if pollsters went to a shopping mall and questioned people they encountered there — the results may be very different than if pollsters posed questions to a random sample of the population they’re interested in studying.

4. How was the poll conducted? 

It’s important to find out if participants filled out a form, answered questions over the phone or did in-person interviews. The method of collecting information can influence who participates and how people respond to questions. For instance, it’s easier for people to misrepresent themselves in online polls than in person — a teenager could claim to be a retiree.

5. What’s the margin of error? 

Be sure to ask about the margin of error, an estimate of how closely the views of the sample reflect the views of the population as a whole. When pollsters ask a group of people which presidential candidate they prefer, pollsters know the responses they will get likely won’t match the responses they’d get if they were to interview every single voter in the United States. The margin of error is reported as plus or minus a certain number of percentage points.

Journalists covering tight political races should pay close attention to the margin of error in election polls. If a poll shows that one candidate is 2 percentage points ahead of another in terms of public support but the margin of error is plus or minus 3 percentage points, the second candidate could actually be in the lead. The Pew Research Center offers a helpful explainer on the margin of error in election polls.

6. Were participants compensated?

Offering people money or another form of compensation can also affect who participates and how. Such incentives might encourage more lower-income individuals to agree to weigh in. Also, participants may feel compelled to answer all questions, even those they aren’t sure about, if they are paid.

7. Who answered questions?

Were most participants white? Or female? A sample of primarily white, elderly, high-income women is likely to provide very different results than a sample that closely resembles the general population.

8. How many people responded to the poll? 

While there isn’t a perfect number of participants, higher numbers generally result in more accurate representations. If pollsters want to know if the American public supports an increase in military funding, interviewing 2,000 adults will likely provide a more accurate measurement of public sentiment than interviewing 200.

9. Can results be generalized to the entire public?

Journalists should be clear in their coverage whether the results of a poll apply only to a segment of the population or can be generalized to the population as a whole.

10. What did pollsters ask?

Knowing which questions were asked can help journalists check whether claims made about poll results are accurate. It also can help journalists spot potential problems, including vague terms, words with multiple meanings and loaded questions, which are biased toward a candidate or issue. Cornell University’s Roper Center for Public Opinion Research offers an example of a loaded question.

Request a copy of the questions in the order they were asked. Participants’ answers also can differ according to question order.

11. What might have gone wrong with this poll? 

Get pollsters to talk about possible biases and shortcomings that could influence results.

Want more info on public opinion polls? Check out these other resources:

  • The Journalist’s Resource has written an explainer on polls.
  • The Poynter Institute offers a free online course on understanding and interpreting polls.
  • FiveThirtyEight, a news site that focuses on statistical analysis, has updated its pollster rankings in time for the 2018 midterms. It gave six election pollsters a grade of A-plus: Monmouth University, Selzer & Co., Elway Research, ABC News/Washington Post, Ciruli Associates and Field Research Corp.
  • The American Association for Public Opinion Research provides numerous resources, including information on poll and survey response rates, random sampling and why election polls sometimes get different results.

Why media should think twice about public-opinion polls: Panel discussion https://journalistsresource.org/politics-and-government/criticism-media-use-public-opinion-polls/ Tue, 17 Nov 2015 21:34:26 +0000 http://live-journalists-resource.pantheonsite.io/?p=47436 2015 panel discussion on the media’s widespread use of public-opinion polls during Harvard University’s Theodore H. White Seminar on Press and Politics.


A panel of experts criticized and offered candid insights on the media’s growing reliance on public-opinion polls during Harvard University’s recent Theodore H. White Seminar on Press and Politics.

The seminar followed a lecture by author and American history professor Jill Lepore, who had called for stronger regulation of the polling industry.

The four panelists were: Lepore, who’s also a staff writer for The New Yorker; Candy Crowley, a former anchor and political correspondent for CNN who is a fellow at the Harvard Institute of Politics; Peter Hart, the founder of Hart Research Associates and a pollster for NBC News and The Wall Street Journal; and Gary Younge, a columnist for The Guardian. The panel was moderated by Thomas Patterson, acting director of Harvard’s Shorenstein Center on Media, Politics and Public Policy.

Hart defended the industry, acknowledging that while change is needed – participation rates are too low, and the public has a poor understanding of public opinion measurement – the data collected by polls is still “exceptionally representative of where the country is at.” Hart argued that polling does not sway public opinion in elections; it measures sentiment rather than influencing it. He cited Vietnam, Watergate, and same-sex marriage as examples of times when public opinion was ahead of politicians, therefore impacting political outcomes. Hart stressed the importance of integrating other forms of data collection, such as focus groups, to understand the “why” behind the numbers.

Crowley also defended the practice of polling. She said that although horse race numbers are “catnip for political reporters” and reporters are not always trained to correctly report on polls, there wasn’t a better, readily available way of measuring the nation’s pulse, especially since polling has become a fixture of political reporting. She said that the problem was not the existence of polls, but rather the incorrect use of them.

Younge agreed that the problem is not polls, but rather a problem with journalism. Using polls alone without digging deeper creates a lazy kind of journalism, he said, one that loses nuance and “texture.” He also compared the U.S. election cycle to that of the U.K., where the campaign lasts for only five weeks. With a shorter cycle and less campaign spending, polling and market research is a smaller industry there than it is in the U.S.

Lepore responded that she was primarily disavowing horse race polls, and that polling during the Vietnam and Watergate eras was useful. However, in recent years, polling had “teetered off course.” She said the problem is complicated because the public can’t tell the difference between polls with good or bad methodology. Lepore suggested that other methods, such as deliberative polling, whereby people learn about and debate an issue, and are asked for their opinions before and after reflection, could be a more meaningful way of gathering public opinion.

In a question-and-answer session, the panelists also discussed the role of polling for politicians, generally agreeing that following public opinion polls should not replace true leadership. They also discussed the use of polling for admittance to presidential debates.
