Know Your Research – The Journalist's Resource
https://journalistsresource.org

Carbon offsets: 4 things journalists need to understand
https://journalistsresource.org/home/carbon-offsets-4-things-journalists-need-to-understand/ (Mon, 22 Jul 2024 14:33:23 +0000)
From our recent webinar with the nonprofit CarbonPlan, learn how voluntary carbon offset markets work and how journalists can use OffsetsDB, a free data repository, to check companies’ carbon neutrality and “net zero” claims.


Many companies emit carbon dioxide in the course of their normal business. The amount emitted may differ — an airline or fossil fuel-based energy producer will emit much more carbon than a textile manufacturer.

To offset those emissions, some companies purchase carbon credits through brokers or exchanges. Those credits are meant to fund projects that reduce or capture pollutants.

Companies can also buy credits from project developers. Forestry projects are common, since trees capture carbon during photosynthesis. Projects might aim to preserve a swath of trees from destruction, or plant new ones. The forests saved or replenished can be anywhere in the world.

Companies buy carbon credits for various reasons. The credits are often the cornerstone of corporate public relations claims that a company is carbon neutral or has “net zero” greenhouse gas emissions.

A big question for journalists investigating the growing voluntary carbon market is whether claims of carbon neutrality are true.

Companies don’t necessarily have to buy offsets to help the environment. They can reduce emissions produced by one part of their business to make up for other parts where they can’t reduce them.

But changing business processes to reduce emissions can be difficult.

That’s a big reason why offsets are so commonly used in carbon neutrality claims. According to a 2021 study of such pledges from 35 large global firms, 66% of them used carbon offsets.

Another wrinkle in carbon neutrality claims: The products companies make may generate pollutants when consumers or other businesses use them. In fuel production, for example, carbon is emitted to refine the fuel and to generate the energy that powers the refinery.

Then there is the carbon emitted when someone burns the fuel to drive their car. For firms working in those industries, more than 90% of greenhouse gas emissions don’t come from the company itself, according to the 2021 study, from the Columbia Center on Sustainable Development.

The growing voluntary carbon market

Though many companies purchase carbon offsets voluntarily, two U.S. states, California and Washington, regulate their biggest polluters using market mechanisms. Most states in the northeastern U.S. have binding regulations that do the same, but those rules focus on energy producers.

The voluntary market, valued at about $2 billion in 2020, could be worth $250 billion by 2050, according to a 2023 research memo from investment bank Morgan Stanley.

Companies access vital information about offset markets through registries, which are usually nonprofits. Registries record credit ownership, for example. Registries also have protocols, or rules, on how projects measure their carbon reductions. Protocols are meant to ensure carbon offsetting projects produce the environmental benefits claimed.

While records in states that regulate carbon emissions are likely to be subject to public record laws, records for voluntary carbon offsets are not. But the major registries make some data public.

Another question for journalists to ask is whether a carbon-saving project would have happened anyway without carbon credits being purchased.

Did carbon credits save that forest? Or was that forest in no danger of being cut down?

A major challenge in reporting on carbon offsets is that disparate sources offer data in different formats; registries may use different phrases to note that a carbon credit has been issued, for example.

Those differences make market-wide analyses difficult and time consuming.

OffsetsDB from CarbonPlan

Enter OffsetsDB, a navigable and standardized repository of carbon offset data from five of the biggest registries operating in the voluntary carbon offset market: American Carbon Registry, ART TREES, Climate Action Reserve, Gold Standard and Verra.

The free tool is updated daily and produced by CarbonPlan, a nonprofit open data organization focusing on climate change solutions. Anyone can download the data and access documentation for carbon offset projects around the world.
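For reporters who want to explore the download directly, here is a minimal sketch of how one might start with pandas. The file name and column names below are assumptions made for illustration, not OffsetsDB's actual schema; check the project's documentation for the real field names.

```python
import pandas as pd

# Hypothetical file and column names, used here only for illustration.
# Check OffsetsDB's own documentation for the actual schema of the download.
projects = pd.read_csv("offsets_db_projects.csv")

# How many projects does each of the five registries list?
print(projects["registry"].value_counts())

# Example filter: forestry projects, sorted by the number of credits issued.
forestry = projects[projects["category"] == "forestry"]
print(forestry.sort_values("issued", ascending=False).head(10))
```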

To help journalists understand the offset market and get to know OffsetsDB, we recently hosted an hourlong webinar featuring Grayson Badgley, a research scientist at CarbonPlan, and longtime science journalist Maggie Koerth, editorial lead at CarbonPlan.

“I really genuinely believe that at the core of just about every single project there’s this nugget of truth,” Badgley said during the webinar. “There is this good thing that is happening. The real question is, is it being credited appropriately?”

Watch the video to learn more — and read four takeaways from the presentation and conversation.

1. Know these key terms before digging into offset data.

There are lots of specialized words and phrases in the world of carbon offsets.

Here are a few you need to know.

Additionality: This refers to whether a carbon-saving or carbon-reducing project funded by carbon credits is adding environmental benefits that would not have happened without the credits.

Credit: A credit is a financial instrument that represents carbon-removing activities. Credits are often created by projects that remove carbon from the atmosphere, such as reforestation, or prevent it from reaching the atmosphere, such as landfills that capture and burn methane.

“Many [companies] buy credits,” Koerth said. “And that is, somebody else does the carbon removing activity and then sells the company the right to claim that removed carbon.”

Offsets: Credits become offsets when a company or other entity makes a claim, such as carbon neutrality, based on the credits it has purchased. The company is claiming that its carbon-emitting activities are offset by the credits.

“The terms credit and offset are kind of used interchangeably because the majority of credits are ultimately used to make offsetting claims,” Koerth said.

Registry: A registry is an organization, typically a nonprofit, that tracks carbon credit markets, including the buying and selling, issuance and retirement of carbon credits.

“[Registries] keep a record of all the credits produced by a project when they’re sold, when the sold credits have officially been used,” Koerth said. “Basically, this whole system of the registries exists so that the same credits can’t be sold multiple times to different people.”

Issuance: When a credit has been created, registered, and a registry confirms it exists, the registry issues the credit.

Purchase: When a credit is bought, typically by companies seeking to offset their carbon-producing activities. While credit issuances and retirements are publicly available, purchase information is usually not public.

Retirement: When the buyer of a credit declares they have used the credit and put it toward an offsetting goal, the credit is retired. Retired credits can’t be used again.

Vintage: This is the year a carbon credit was created.

Carbon neutral or net zero: These are claims companies make that their carbon emissions are balanced, for example, by buying credits. Say a small coal plant emits 900,000 metric tons of carbon dioxide each year. It then buys credits representing 900,000 metric tons of carbon offsetting activities. On net, the plant claims its emissions are neutral.

There are many more phrases to know in the world of carbon offsets. The U.K.-based news organization Carbon Brief offers an extensive glossary.

2. Understand additionality.

Additionality is a big word with a big question behind it: Would a carbon sequestration or reduction project have happened anyway without funding from carbon credits?

Journalists need a basic understanding of additionality, a crucial concept that carbon offset projects use to justify their existence.

“In practice, determining whether a proposed project is additional requires comparing it to a hypothetical scenario without revenue from the sale of carbon credits,” according to this explainer on additionality from the nonprofit Stockholm Environment Institute.

In other words, is the sale of carbon credits adding environmental benefits by funding a carbon offsetting project that could not have happened without that funding?

“A lot of this is based around both trying to see what you have done and trying to estimate what would have happened if you hadn’t done that thing,” Koerth said during the webinar.

As Badgley and Freya Chay write in a 2023 analysis for CarbonPlan, “Rather than creating new climate benefits, non-additional credits simply reward a landowner for doing what they already planned on doing.”

While there are no hard and fast rules for determining whether a project is adding carbon benefits, journalists and others can use data in OffsetsDB to identify projects that potentially are not.

Think of a hypothetical landfill. Landfills produce methane, a far more potent greenhouse gas than carbon dioxide. But when methane is burned, it turns into carbon dioxide. So the landfill burns its methane and receives carbon credits it can sell on the offset market, because the carbon dioxide it emits is less damaging to the climate than the methane would have been.

Badgley and Chay recently analyzed 14 such carbon offset landfill projects. If a landfill stops receiving carbon credits to sell, methane burning at that landfill should taper off or stop, assuming the project was adding environmental benefits that wouldn't have happened without the credits.

That’s not what Badgley and Chay found.

“By comparing crediting data from the Climate Action Reserve with landfill gas collection data from the U.S. Environmental Protection Agency, we found that nearly 50 percent of the credits issued under this protocol are likely non-additional,” they write. “These credits do not represent high-quality outcomes for the climate.”

For example, the Resource Recovery Landfill project in Cherryvale, Kansas, received credits from 2006 to 2011 for its gas collection operations, including burning methane, according to the analysis.

In 2012, those credits stopped — the landfill didn’t file paperwork to keep them going, Badgley and Chay found. But the gas collection continued. Crediting began again in 2022. During those 10 years, the landfill’s gas collection didn’t disappear or slow down. It expanded, according to the analysis.

“To be clear, the fact that Resource Recovery’s gas collection system ran continuously is a good thing for the planet,” write Badgley and Chay. “In addition to reducing methane emissions, collecting and treating landfill gas can help reduce smells and other environmental side effects associated with operating a landfill. But offsets must be used to spur new climate action — not just reward existing actions.”
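The logic of that comparison can be illustrated with a toy sketch. Everything below is invented for illustration; the actual analysis joins registry crediting records with EPA landfill gas collection data, but the core idea is the same: flag years in which gas collection continued while no credits were issued.

```python
# Toy illustration of the comparison described above. All numbers are invented;
# the real analysis joins registry crediting records with EPA gas collection data.
gas_collected = {2010: 5.1, 2011: 5.3, 2012: 5.6, 2013: 6.0, 2014: 6.2}  # gas collected per year (made-up units)
credits_issued = {2010: 12_000, 2011: 13_000}                            # offset credits issued per year (made up)

for year, volume in sorted(gas_collected.items()):
    if volume > 0 and credits_issued.get(year, 0) == 0:
        print(f"{year}: gas collection continued with no credits issued "
              "- a pattern worth questioning on additionality grounds")
```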


3. Know that parts of the carbon offset market aren’t disclosed in public registry data — but this information may exist elsewhere.

Offset credit purchases, for example, are not usually public information. Other transaction details are also missing in public data.

For example, OffsetsDB shows that Jaiprakash Hydro Power in India has issued 16.7 million credits and retired nearly 9 million since 2010.

But which entities bought and retired those credits is generally not available in the public registry data.

Sometimes, companies will voluntarily disclose to registries that they have purchased or are retiring credits.

But that doesn’t always happen.

Company sustainability reports are one place to look for information on how many credits a company has retired, and why, Badgley said. A company might, for example, purchase carbon credits to offset executive travel and publicly disclose that for public relations purposes.

“What we’re hoping to do with OffsetsDB is start to pull in a lot of that information,” Badgley said, adding that it’s a manual process of gathering supplementary knowledge from places like sustainability reports and news articles. This appears as a timeline for individual carbon offset projects within the database.


Recent legislation passed in California requires any company marketing or selling carbon credits in the state, as well as companies that buy or use those credits, to disclose much more information online.

This includes the type of offset project, where the project is located, whether a third party has verified the project, and more. (But it’s unclear when the law will take effect and how it will be enforced, Politico reports.)

“This is something that we’re really excited about,” Badgley said. “And we’re hoping to be able to sort of pull that paperwork and pull that information into OffsetsDB.”

4. Ask about registry buffer pools.

What happens when a carbon-emitting company buys credits from, say, a forestry project in northern California that is later consumed by wildfire? The registry that issued the credits will likely dip into its buffer pool to make up for the loss.

A buffer pool is a sort of rainy day fund, made up of carbon credits reserved for emergencies. Usually a carbon offsetting project can’t sell all its credits, but has to put a fraction of them into the registry’s buffer pool, Badgley said.

While the practice “makes total sense,” Badgley said, it also raises questions. What types of credits are in the buffer pool? Do replacement credits represent offsetting projects similar to the original ones?

Here’s a relevant question for journalists to ask of registries: “Is this buffer pool that this program is administering, is it designed to make good on the liabilities of the program for the next X years?” Badgley said.

Further reading

Another Forest Offset Project is Burning — If You Know Where to Look
Grayson Badgley. CarbonPlan, July 2024.

The First Offset Credits Approved by a Major Integrity Program Don’t Make the Grade
Grayson Badgley and Freya Chay. CarbonPlan, July 2024.

Reporter’s Guide to Investigating Carbon Offsets
Toby McIntosh. Global Investigative Journalism Network, March 2024.

Instead of Carbon Offsets, We Need ‘Contributions’ to Forests
Libby Blanchard, William R.L. Anderegg and Barbara K. Haya. Stanford Social Innovation Review, January 2024.

What Every Leader Needs to Know About Carbon Credits
Varsha Ramesh Walsh and Michael W. Toffel. Harvard Business Review, December 2023.

Glossary: Carbon Brief’s Guide to the Terminology of Carbon Offsets
Daisy Dunne and Josh Gabbatiss. Carbon Brief, September 2023.

In-Depth Q&A: Can ‘Carbon Offsets’ Help to Tackle Climate Change?
Josh Gabbatiss, et al. Carbon Brief, September 2023.

Action Needed to Make Carbon Offsets From Forest Conservation Work for Climate Change Mitigation
Thales West, et al. Science, August 2023.

The Voluntary Carbon Market: Climate Finance at an Inflection Point
Briefing Paper. World Economic Forum, January 2023.

Corporate Net-Zero Pledges: The Bad and the Ugly
Jack Arnold and Perrine Toledano. Columbia Center on Sustainable Development, November 2021.

What’s a nationally representative sample? 5 things you need to know to report accurately on research
https://journalistsresource.org/politics-and-government/nationally-representative-sample-research-clinical-trial/ (Tue, 09 Jul 2024 17:27:53 +0000)
Knowing what a nationally representative sample is — and isn’t — will help you avoid errors in covering clinical trials, opinion polls and other research.


Journalists can’t report accurately on research involving human subjects without knowing certain details about the sample of people researchers studied. It’s important to know, for example, whether researchers used a nationally representative sample.

That’s important whether a journalist is covering an opinion poll that asks American voters which presidential candidate they prefer, an academic article that examines absenteeism among U.S. public school students or a clinical trial of a new drug designed to treat Alzheimer’s disease.

When researchers design a study, they start by defining their target population, or the group of people they want to know more about. They then create a sample meant to represent this larger group. If researchers want to study a group of people across an entire country, they aim for a nationally representative sample — one that resembles the target population in key characteristics such as gender, age, political party affiliation and household income.

Earlier this year, when the Pew Research Center wanted to know how Americans feel about a new class of weight-loss drugs, it asked a sample of 10,133 U.S. adults questions about obesity and the effects of Ozempic, Wegovy and similar drugs. Pew designed the survey so that the answers those 10,133 people gave likely reflected the attitudes of all U.S. adults across various demographics.

If Pew researchers had simply interviewed 10,133 people they encountered at shopping malls in the southeastern U.S., their responses would not have been nationally representative. Not only would the answers reflect attitudes in just one region of the country, but the people interviewed also would not mirror the nation's adults in key characteristics.

A nationally representative sample is one of several types of samples used in research. It’s commonly used in research that examines numerical data in public policy fields such as public health, criminal justice, education, immigration, politics and economics.

To accurately report on research, journalists must pay close attention to who is and isn’t included in research samples. Here’s why that information is critical:

1. If researchers did not use a sample designed to represent people from across the nation, it would be inaccurate to report or imply that their results apply nationwide.

A mistake journalists make when covering research is overgeneralizing the results, or reporting that the results apply to a larger group of people than they actually do. Depending on who is included in the sample, a study’s findings might only apply to the people in the sample. Many times, findings apply only to a narrow group of people at the national level who share the same characteristics as the people in the sample — for example, individuals who retired from the U.S. military after 2015 or Hispanic teenagers with food allergies.

To determine who a study is designed to represent, look at how the researchers have defined this target population, including location, demographics and other characteristics.

“Consider who that research is meant to be applicable to,” says Ameeta Retzer, a research fellow at the University of Birmingham’s Department of Applied Health Sciences.

2. When researchers use a nationally representative sample, their analyses often focus on what’s happening at a national level, on average. Because of this, it’s never safe to assume that national-level findings also apply to people at the local level.

“As a word of caution, if you’re using a nationally representative sample, you can’t say, ‘Well, that means in California …,’” warns Michael Gottfried, an applied economist and professor at the University of Pennsylvania’s Graduate School of Education.

When researchers create a nationally representative sample of U.S. grade school students, their aim is to gain a better understanding of some aspect of the nation’s student population, Gottfried says. What they learn will represent an average across all students nationwide.

“On average, this is what kids are doing, this is how kids are doing, this is the average experience of kids in the United States,” he explains. “The conclusion has to stay at the national level. It means you cannot go back and say kids in Philadelphia are doing that. You can’t take this information and say, ‘In my city, this is happening.’ It’s probably happening in your city, but cities are all different.”

3. There’s no universally accepted standard for representativeness.

If you read a lot of research, you’ve likely noticed that what constitutes a nationally representative sample varies. Researchers investigating the spending habits of Americans aged 20 to 30 years might create a sample that represents this age group in terms of gender and race. Meanwhile, a similar study might use a sample that represents this age group across multiple dimensions — gender, race and ethnicity along with education level, household size, household income and the language spoken at home.

“In research, there’s no consensus on which characteristics we include when we think about representativeness,” Retzer notes.

Researchers determine whether their sample adequately represents the population they want to study, she says. Sometimes, researchers call a sample “nationally representative” even though it’s not all that representative.

Courtney Kennedy, vice president of methods and innovation at Pew Research Center, has questioned the accuracy of election research conducted with samples that only represent U.S. voters by age, race and sex. It’s increasingly important for opinion poll samples to also align with voters’ education levels, Kennedy writes in an August 2020 report.

“The need for battleground state polls to adjust for education was among the most important takeaways from the polling misses in 2016,” Kennedy writes, referring to the U.S. presidential election that year.
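In practice, "adjusting" a poll for education means weighting: respondents from groups that are underrepresented in the sample count for a bit more, and overrepresented groups count for a bit less, so the weighted sample matches known population benchmarks. Here is a minimal sketch with invented shares; real polls typically weight on several variables at once, often with more elaborate methods such as raking.

```python
# Post-stratification on a single variable (education), with invented shares.
population_share = {"college_grad": 0.35, "no_degree": 0.65}  # benchmark, e.g., from Census data
sample_share = {"college_grad": 0.50, "no_degree": 0.50}      # what the poll actually obtained

weights = {group: population_share[group] / sample_share[group] for group in population_share}
print(weights)  # college grads weighted down to 0.7, non-graduates weighted up to 1.3
```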

4. When studying a nationwide group of people, the representativeness of a sample is more important than its size.

Journalists often assume larger samples provide more accurate results than smaller ones. But that’s not necessarily true. Actually, what matters more when studying a population is having a sample that closely resembles it, Michaela Mora explains on the website of her research firm, Relevant Insights.

“The sheer size of a sample is not a guarantee of its ability to accurately represent a target population,” writes Mora, a market researcher and former columnist for the Dallas Business Journal. “Large unrepresentative samples can perform as badly as small unrepresentative samples.”

If a sample is representative, larger samples are more helpful than smaller ones. Larger samples allow researchers to investigate differences among sub-groups of the target population. Having a larger sample also improves the reliability of the results.
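A quick simulation illustrates Mora's point. Suppose 40% of a population holds some opinion, but the easy-to-reach people a convenience sample draws from hold it at 55%. In this sketch (the numbers are invented), the enormous convenience sample is precisely wrong, while the much smaller random sample lands near the truth.

```python
import random

random.seed(0)

TRUE_RATE = 0.40     # share of the full population holding some opinion
BIASED_RATE = 0.55   # share among the easy-to-reach group a convenience sample draws from

# Large but unrepresentative sample: very precise, systematically wrong.
big_biased = sum(random.random() < BIASED_RATE for _ in range(100_000)) / 100_000

# Much smaller random sample: noisier, but centered on the truth.
small_random = sum(random.random() < TRUE_RATE for _ in range(1_000)) / 1_000

print(f"Large biased sample estimate: {big_biased:.3f}")    # lands near 0.55, far from the true 0.40
print(f"Small random sample estimate: {small_random:.3f}")  # lands near 0.40, give or take a point or two
```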

5. When creating samples for health and medical research, prioritizing certain demographic groups or failing to represent others can have long-term impacts on public health and safety.

Retzer says that too often, the people most likely to benefit from a new drug, vaccine or health intervention are not well represented in research. She notes, for example, that even though people of South Asian descent are more likely to have diabetes than people from other ethnic backgrounds, they are vastly underrepresented in research about diabetes.

“You can have the most beautiful, really lovely diabetes drug,” she says. “But if it doesn’t work for the majority of the population that needs it, how useful is it?”

Women remain underrepresented in some areas of health and medical research. It wasn’t until 1993 that the National Institutes of Health began requiring that women and racial and ethnic minorities be included in research funded by the federal agency. Before that, “it was both normal and acceptable for drugs and vaccines to be tested only on men — or to exclude women who could become pregnant,” Nature magazine points out in a May 2023 editorial.

In 2022, the U.S. Food and Drug Administration issued guidance on developing plans to enroll more racial and ethnic minorities in clinical trials for all medical products.

When journalists cover research, Retzer says it’s crucial they ask researchers to explain the choices they made while creating their samples. Journalists should also ask researchers how well their nationally representative samples represent historically marginalized groups, including racial minorities, sexual minorities, people from low-income households and people who don’t speak English.

“Journalists could say, ‘This seems like a really good finding, but who is it applicable to?’” she says.

The Journalist’s Resource thanks Chase Harrison, associate director of the Harvard University Program on Survey Research, for his help with this tip sheet.  

Journalists should report on lax oversight of research data, says data sleuth
https://journalistsresource.org/media/preregistration-research-data-colada-uri-simonsohn/ (Tue, 14 May 2024 15:02:10 +0000)
Uri Simonsohn, a behavioral scientist who coauthors the Data Colada blog, urges reporters to ask researchers about preregistration and expose opportunities for fraud.


Uri Simonsohn is an outspoken advocate for open science — adding transparency to the research process and helping researchers share what they’ve learned in greater detail with a broad audience.

Many people know Simonsohn for his data analyses on Data Colada, a blog about social science research he writes with two other behavioral scientists, Leif Nelson and Joseph Simmons. The three scholars, who co-direct the Wharton Credibility Lab at the University of Pennsylvania, occasionally use the blog to spotlight evidence of suspected fraud they’ve found in academic papers.

In his role at the Credibility Lab and as a professor at Esade Business School in Barcelona, Simonsohn travels to speak on issues around scientific integrity and data science. During his recent visit to Harvard University, The Journalist’s Resource asked for his thoughts on how journalists can improve their coverage of academic fraud and misconduct.

Here are three big takeaways from our conversation.

1. Before covering academic studies, ask researchers about preregistration.

Preregistration is “the practice of documenting your research plan at the beginning of your study and storing that plan in a read-only public repository such as OSF Registries or the National Library of Medicine’s Clinical Trials Registry,” according to the nonprofit Center for Open Science. Simonsohn says preregistration helps prevent research fraud. When researchers create a permanent record outlining how they intend to conduct a study before they start, they are discouraged from changing parts of their study — for instance, their hypothesis or study sample — to get a certain result.

Simonsohn adds that preregistration also reduces what’s known as “p-hacking,” or manipulating an analysis of data to make it seem as though patterns in the data are statistically significant when they are not. Examples of p-hacking: Adding more data or control variables to change the result or deciding after the analysis is complete to exclude some data. (For more on statistical significance, read our tip sheet on the topic.)
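One way to see why this matters is to simulate "optional stopping," a common form of p-hacking in which a researcher keeps adding participants and re-testing until the result crosses the significance threshold. The sketch below is purely illustrative, but it shows how repeated peeking pushes the false-positive rate well above the nominal 5% even when there is no real effect.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(1)

def significant_after_peeking(batches=10, batch_size=20):
    """Add data in batches, testing after each batch; return True if any test 'succeeds'."""
    group_a, group_b = [], []
    for _ in range(batches):
        group_a += [random.gauss(0, 1) for _ in range(batch_size)]  # no true difference between groups
        group_b += [random.gauss(0, 1) for _ in range(batch_size)]
        se = sqrt(stdev(group_a) ** 2 / len(group_a) + stdev(group_b) ** 2 / len(group_b))
        if abs(mean(group_a) - mean(group_b)) / se > 1.96:  # roughly p < .05, two-sided
            return True
    return False

runs = 2000
false_positive_rate = sum(significant_after_peeking() for _ in range(runs)) / runs
print(f"False-positive rate with repeated peeking: {false_positive_rate:.0%}")  # well above 5%
```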

Preregistration is particularly important when researchers will be collecting their own data, Simonsohn points out. It’s easier to alter or fabricate data when you collect it yourself, especially if there’s no expectation to share the raw data.

While preregistration is the norm in clinical trials, it’s less common in other research fields. About half of psychology research is preregistered as is about a quarter of marketing research, Simonsohn says. A substantial proportion of economic research is not, however, because it often relies on data collected by other researchers or nonprofit organizations and government agencies such as the U.S. Census Bureau.

Simonsohn urges journalists to ask researchers whether they preregistered their studies before reporting on them. He likened reporting on research that isn’t preregistered to driving a car that hasn’t been inspected. The car might be perfectly safe, but you can’t be sure because no one has had a chance to look under the hood.

“If the person says ‘no,’ [the journalist] could ask, ‘Oh, how come?’” he says. “And if they don’t provide a compelling reason, the journalist could say ‘You know, I’m not going to cover work that hasn’t been preregistered, without a good rationale.’”

Research registries themselves can be a helpful resource for journalists. The Center for Open Science lets the public search for and read the thousands of preregistered research plans on its Open Science Framework platform. Researchers who preregister their work at AsPredicted, a platform Simonsohn helped create for the Wharton Credibility Lab, can choose whether and when to make their preregistered research plan public.

2. Report on the lack of oversight of research data collection.

Journalists and the public probably don’t realize how little oversight there is when it comes to collecting and analyzing data for research, Simonsohn says. That includes research funded by the federal government, which gives colleges, universities and other organizations billions of dollars a year to study public health, climate change, new technology and other topics.

Simonsohn says there’s no system in place to ensure the integrity of research data or its analysis. Although federal law requires research involving human subjects to be reviewed by an Institutional Review Board, the primary goal of these independent committees is protecting the welfare and rights of study participants.

Academic papers are reviewed by a small group of experts before a scholarly journal will publish them. But the peer-review process isn’t designed to catch research fraud. Reviewers typically do not check the authors’ work to see if they followed the procedures they say they followed to reach their conclusions.

Simonsohn says journalists should investigate the issue and report on it.

“The lack of protection against fraud is a story that deserves to be written,” he says. “When I teach students, they’re shocked. They’re shocked that when you submit a paper to a journal, [the journal is] basically trusting you without any safeguards. You’re not even asked to assert in the affirmative that you haven’t done anything wrong.”

Journalists should also examine ways to prevent fraud, he adds. He thinks researchers should be required to submit “data receipts” to organizations that provide grant funding to show who has had access to, changed or analyzed a study’s raw data and when. This record keeping would be similar to the chain of custody process that law enforcement agencies follow to maintain the legal integrity of the physical evidence they collect.  

“That is, by far, the easiest way to stop most of it,” Simonsohn says.

3. Learn about open science practices and the scientists who expose problematic research.

Nearly 200 countries have agreed to follow the common standards for open science that UNESCO, the United Nations’ scientific, educational and cultural organization, created in 2021. In December, UNESCO released a status report of initiatives launched in different parts of the globe to help researchers work together in the open and share what they’ve learned in detail with other researchers and the public. The report notes, for example, that a rising number of countries and research organizations have developed open data policies.

As of January 2024, more than 1,100 open science policies were adopted by research organizations and research funders worldwide, according to the Registry of Open Access Repositories Mandatory Archiving Policies, which tracks policies requiring researchers to make their “research output” public.

In the U.S., the universities and university departments that have adopted these policies include Johns Hopkins University, University of Central Florida, Stanford University’s School of Education and Columbia University’s School of Social Work. Such policies also have been adopted at Harvard Kennedy School and one of its research centers, the Shorenstein Center on Media, Politics and Public Policy, which is where The Journalist’s Resource is housed.

Simonsohn recommends journalists learn about open science practices and familiarize themselves with research watchdogs such as Nick Brown, known for helping expose problems in published studies by prominent nutrition scientist Brian Wansink.

Retraction Watch, a website that tracks research retractions, maintains a list of more than two dozen scientific sleuths. Elisabeth Bik, a microbiologist and science integrity consultant who has been called “the public face of image sleuthing,” was a guest speaker in The Journalist’s Resource’s recent webinar on covering research fraud and errors.

Here are some of the open science organizations that journalists covering these issues will want to know about:

10 ways researchers can help journalists avoid errors when reporting on academic studies
https://journalistsresource.org/home/10-ways-researchers-journalists-avoid-errors/ (Wed, 24 Apr 2024 20:32:50 +0000)
This tip sheet outlines some of the many ways researchers can help the news media cover research accurately, starting with the journalists who interview them about their own work.


A common complaint I hear from researchers is that journalists make a lot of mistakes when they report on academic studies. They often describe study findings incorrectly, for example, or claim that a new paper proves something when it doesn’t.

I’ve written dozens of tip sheets in recent years to help journalists fine-tune their skills in choosing, vetting, understanding and explaining research as part of their reporting process. This tip sheet, however, is for researchers, who also play a role in helping journalists get it right.

Our main goal at The Journalist’s Resource is bridging the gap between newsrooms and academia to ensure news coverage of public policy issues is grounded in high-quality evidence — peer-reviewed research in particular.  Everyone benefits when journalists report accurately on research findings, especially the everyday folks who make decisions about their health and safety and their children’s futures based on that information.

When I speak to groups of researchers about the best ways to build relationships with journalists, I often share these 10 tips with them. They represent some of the many ways researchers can help the news media avoid errors, starting with the journalists who interview them about their own work.

1. Use plain language.

Many journalists haven’t studied research methods or statistics and often don’t fully understand the technical terms researchers use to communicate with one another about their work. That’s why it’s important to use plain language when discussing the details of a research paper.

For example, instead of saying you found a positive association between air pollution and dementia in older adults, say you found that older adults exposed to higher levels of air pollution are more likely to develop dementia.

Another example: Instead of saying there’s heterogeneity in the results of three experiments, you could say the trials produced different results.

2. Make sure any press releases written about your research are accurate.

When journalists cover research, they sometimes look to press releases for guidance in describing research findings and to double-check key names, facts and figures. Unfortunately, the press releases that higher education institutions and research organizations issue to promote their researchers’ work sometimes contain errors.

When possible, review the final version of a press release about your research before it’s shared with news outlets. If you spot problems after it’s distributed, ask for a correction and note the error in the initial press release when speaking with journalists.

3. Offer examples of the right and wrong ways to explain relationships among the key variables that were studied.

Not all journalists know what causal language is, or when they should and shouldn’t use it to describe the relationship between key variables in a research study. When speaking with a journalist about a study’s findings, point out whether there’s evidence of a causal relationship between or among certain variables. If there isn’t, offer the journalist examples of correct and incorrect ways to explain this relationship.

An example: Let’s say a study finds that crime rates increased in 10 cities in 2020 immediately after local police departments implemented a new crime-fighting program. Let’s also say researchers found no evidence that this new program caused crime rates to rise.

A researcher discussing these findings with a journalist could help them avoid errors by explicitly pointing out the right and wrong ways to report on them. In this case, tell the reporter it would be inaccurate to say this study finds that this program causes crime rates to rise, might cause crime rates to rise or leads to higher crime. It also would be inaccurate to say that introducing this program contributed to higher crime rates in those 10 cities in 2020.

Then share examples of accurate ways to describe what the authors of the study learned: They discovered a relationship, correlation or link between this new program and increased crime rates in these specific cities in this one year. However, researchers found no evidence that the program caused, led to or contributed to the increase.

4. Note the generalizability of findings.

Many news stories and news headlines overgeneralize study findings, reporting that the findings apply to a much larger group than they actually do. Researchers can help journalists get it right by noting how generalizable a study’s findings are.

Example: Let’s say an academic paper concludes that 25% of student athletes at public universities in one state reported using marijuana during the past year. A researcher doing an interview about this study could help ensure the journalist reports on it accurately by stressing that the findings apply only to student athletes at these specific universities. It would be helpful to also point out that it would be incorrect to insinuate these findings apply or might apply to other types of students or to student athletes at any other higher education institution.

5. If a study’s results are expressed as standard deviations, be prepared to help journalists explain those findings using common measurements the public will understand.

A lot of journalists will need assistance describing results reported as standard deviations. Mainstream news outlets will generally avoid the term in news stories because it’s unfamiliar to the general public. Also, even when it’s explained using plain language, the concept can be difficult for even the most educated audience members to grasp.

One way to help audiences comprehend results expressed as a standard deviation, or SD, is by describing the results using more common units of measurement such as points, percentiles, dollars and years. Santiago Pinto, a senior economist and policy advisor at the Federal Reserve Bank of Richmond, does a good job explaining standard deviation changes in student test scores in his August 2023 report, “The Pandemic’s Effects on Children’s Education.”

The report looks at how U.S. eighth-grade students performed on the National Assessment of Educational Progress between 2019 and 2022. Pinto writes:

“What does a decline of 8 points in the NAEP math test mean? In math, one SD of individual student scores is about 40 points and is roughly equivalent, as a rule of thumb, to three years of schooling. The national average loss of 8 points is equivalent to 0.2 SD, which implies 0.6 years of schooling lost.”
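The arithmetic behind that translation is easy to reproduce for any score change. A minimal sketch using the rule-of-thumb values Pinto cites (about 40 NAEP points per standard deviation, and roughly three years of schooling per standard deviation):

```python
# Translate a NAEP score change into standard deviations and years of schooling,
# using the rule-of-thumb values cited in Pinto's report.
points_lost = 8
points_per_sd = 40   # about 1 SD of individual student math scores
years_per_sd = 3     # rough equivalence between 1 SD and years of schooling

sd_change = points_lost / points_per_sd   # 0.2 SD
years_lost = sd_change * years_per_sd     # 0.6 years of schooling

print(f"{points_lost} points is about {sd_change:.1f} SD, or roughly {years_lost:.1f} years of schooling")
```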

6. Ask journalists to briefly summarize the key takeaways of your interview with them.

Doing this at the end of an interview is a good way to gauge how well a journalist understands the research so you can correct any errors and misunderstandings. Keep in mind, though, that journalists working on a tight deadline will have limited time to go over the key points of your conversation.

7. Offer to answer follow-up questions and review word choices.

At the end of an interview, invite journalists to contact you if they have additional questions, including questions about whether they’ve explained something correctly in their story. Journalists generally won’t share copies of their work before it runs — news outlets tend to discourage or prohibit it. But they might share a few sentences or paragraphs when they ask for help making sure there aren’t mistakes in those parts of the story.

Although they probably won’t share the direct quotes they plan to use from sources they’ve interviewed, some journalists will read your own quotes back to you. It’s worth asking about, and could be another way to prevent errors.

When you offer to answer follow-up questions, point out the best ways and times to reach you.

8. Provide examples of accurate news coverage and summaries.

You can also help a journalist check their work by sharing news stories, reports and summaries that correctly characterize the research the journalist is reporting on.

9. Share The Journalist’s Resource’s tip sheets.

You’ll find a link to our “Know Your Research” section on the right side of our homepage. We’ve created tip sheets, explainers and other resources to help journalists build research literacy and numeracy. Our tip sheets cover topics such as statistical significance, standard deviation, the purpose of peer review and how to interpret data from polls and surveys.

All our written materials are free. We publish them under a Creative Commons license so anyone anywhere can share and republish them as much as they like.

10. After the journalist’s story runs, give feedback.

Good journalists want to know if they got something wrong. They correct their mistakes and try to learn from them.

Too often, when researchers and others spot errors in news stories, they do not alert the journalist. If no one raises an issue with a news story, the journalist who reported it — and all the other journalists who will use it as a reference in the future — will assume it’s accurate.

If a journalist covers an academic paper well, they’ll want to hear about that, too. One way to let them know they got it right: Share their story on social media. Also, reach out with new research you think they’d be interested in reading.

How to cover academic research fraud and errors: 4 big takeaways from our webinar
https://journalistsresource.org/media/how-to-cover-academic-research-fraud-errors-webinar/ (Tue, 05 Dec 2023 16:17:07 +0000)
Read on for great tips from Ivan Oransky, Elisabeth Bik and Jodi Cohen, three experts who have covered research misconduct or have hands-on experience monitoring or detecting it.


In 2022, academic journals retracted more than 4,600 scientific papers, often because of ethical violations or research fraud, according to the Retraction Watch blog and database.

Although retractions represent a tiny fraction of all academic papers published each year, bad research can have tremendous impacts. Some studies involve new drugs, surgical procedures and disease prevention programs — all of which directly affect public health and safety. Also, government leaders rely on scholarly findings to help guide policymaking in areas such as crime, education, road safety, climate change and economic development.

On Nov. 30, The Journalist’s Resource hosted a free webinar to help journalists find and report on problematic research. Three experts who have covered research misconduct or have hands-on experience monitoring or detecting it offered a variety of tips and insights.

“How to Cover Academic Research Fraud and Errors” — a video of our Nov. 30 webinar

For those of you who missed the webinar, here are four of the big takeaways from our presenters, Ivan Oransky, a former president of the national Association of Health Care Journalists who teaches medical journalism at New York University and co-founded Retraction Watch; Elisabeth Bik, a microbiologist and science integrity consultant who has been called “the public face of image sleuthing;” and Jodi Cohen, an award-winning investigative reporter at ProPublica whose series “The $3 Million Research Breakdown” exposed misconduct in a psychiatric research study at the University of Illinois at Chicago.

1. Retraction Watch and PubPeer are two online resources that can help journalists identify and track research fraud and errors.

Retraction Watch, a blog launched in 2010, is a treasure-trove of information about research papers that have been removed from academic journals. The website features:

  • The Retraction Watch Database, which journalists can use to search for retractions connected to a specific researcher, university or research organization. Use it to look for patterns — for example, retractions among groups of researchers who tend to work together or among multiple researchers working at the same institution.
  • The Retraction Watch Leaderboard, an unofficial list of researchers with the highest number of paper retractions.
  • A list of scientific sleuths, including self-described “data thug” James Heathers and Michèle B. Nuijten, who, along with Chris Hartgerink, created statcheck, a program designed to find statistical mistakes in psychology papers. Some of these experts use aliases to protect against retaliation and harassment.

Retraction Watch helped Cohen report on and provide context for a ProPublica investigation into the work of prominent child psychiatrist Mani Pavuluri.

It “was a huge resource in trying to understand this,” Cohen told webinar viewers. “The amount of information there and the ability to use that database — completely amazing.”

In her series, co-published in The Chronicle of Higher Education in 2018, Cohen revealed that Pavuluri “violated research rules by testing the powerful drug lithium on children younger than 13 although she was told not to, failed to properly alert parents of the study’s risks and falsified data to cover up the misconduct, records show.” The University of Illinois at Chicago, Cohen wrote, “paid a severe penalty for Pavuluri’s misconduct and its own lax oversight.” The federal government required the school to return the $3.1 million the National Institutes of Health gave it to fund Pavuluri’s study.

PubPeer is a website where researchers critique one another’s work. Comments are public, allowing journalists to observe part of the scientific process and collect information that could be useful in a news story.

Bik noted during the webinar that PubPeer is “heavily moderated” to reduce the likelihood of name-calling and speculation about a researcher’s work. The website explains its commenting rules in detail, warning users to base their statements on publicly verifiable information and to cite their sources. Allegations of misconduct are prohibited.

“You cannot just say, ‘You’re a fraud,’” Bik explained. “You have to come with evidence and arguments similar to a peer review report.”

PubPeer played a key role in student journalist Theo Baker’s investigation of academic papers co-authored by Stanford University President Marc Tessier-Lavigne. Tessier-Lavigne ultimately resigned and Holden Thorp, the editor-in-chief of the Science family of journals, announced in late August that two of Tessier-Lavigne’s papers had been retracted.

The Journalist’s Resource created a tip sheet on using PubPeer in August. Tip #1 from that tip sheet: Install a free PubPeer browser extension. When you look up a published research paper, or when you visit a website that links to a research paper, the browser extension will alert you to any comments made about it on PubPeer.

2. Early in the reporting process, ask independent experts to help you confirm whether a research study has problems.

Getting guidance from independent experts is critical when reporting on research fraud and errors. Experts like Elisabeth Bik can help you gauge whether problems exist, whether they appear to be intentional and how serious they are.

During the webinar, Bik advised journalists to ask for help early in the reporting process and seek out experts with the specific expertise needed to assess potential problems. Bik specializes in spotting misleading and manipulated images. Others specialize in, for example, statistical anomalies or conflicts of interest.

Bik’s work has resulted in 1,069 retractions, 1,008 corrections and 149 expressions of concern, according to her Science Integrity Digest blog. Journal editors typically issue an expression of concern about an academic paper when they become aware of a potential problem, or when an investigation is inconclusive but there are well-founded indicators of misleading information or research misconduct.

Bik stressed the importance of journalists helping correct the scientific record and holding researchers accountable.

“It seems that there’s relatively very few papers that have big problems that get corrected or retracted,” she said. “Institutional investigations take years to perform and there’s very rarely an action [as a result]. And senior researchers, who are the leaders, the mentors, the supervisors and the responsible people for these things happening in their lab, they are very rarely held accountable.”

Oransky encouraged journalists to get to know the scientific sleuths, some of whom are active on X, formerly known as Twitter.

“You can find dozens of people who do this kind of work,” he said. “It’s like any kind of whistleblower or source that you can develop.”

Oransky also highlighted common types of misconduct that journalists can look out for:

  • Faked data.
  • Image manipulation.
  • Plagiarism.
  • Duplication or “self-plagiarism” — when researchers reuse their own writings or data, taking them from a study that has already been published and inserting them into a newer paper.
  • Fake peer review — a peer review process that has, in whole or in part, been fabricated or altered to ensure a paper gets published.
  • Paper mills — organizations that create and sell fraudulent or potentially fraudulent papers.
  • Authorship issues.
  • Publisher errors.

3. One of the best ways to get tips about research fraud is to report on research fraud.

Oransky shared that he and other people at Retraction Watch continually receive tips about research misconduct. Tipsters will come to journalists they think will report on the issue, he said.

“You write about it and then people come to you,” Cohen added. “They don’t know you’re there unless you’re covering it regularly. And not even regularly, but like you start writing about it and show it’s something you’re interested in, you’re going to get more ideas.”

Another place journalists can go to check for allegations of research misconduct: court records, including subpoenas. They can also ask public colleges and universities for copies of records such as investigative reports and written communication between researchers and their supervisors, Cohen pointed out. If the research involves human subjects, journalists could request copies of reports and communications sent to and from members of the Institutional Review Board, a group charged with reviewing and monitoring research to ensure human subjects’ safety and rights are protected.

Cohen suggested journalists ask local colleges and universities for records tied to research funding and any money returned to funders. The National Institutes of Health maintains a database of organizations that receive federal grant money to conduct biomedical research.

“You could just start digging around a little bit at the institutions you cover,” Cohen said. “Be skeptical and ask questions of the data and ask questions of the people you cover.”

4. Discuss with your editors whether and how you’ll protect the identities of whistleblowers and experts who want to remain anonymous.

Many experts who leave comments on PubPeer or raise questions about research on other online platforms use aliases because they don’t want their identities known.

“You can imagine that not everybody wants to work under their full name so some of them are using all kinds of pseudonyms, although recently some of these people have come out under their full names,” Bik said. “But it is work obviously that doesn’t leave you with a lot of fans. Especially the people whose work we criticize are sometimes very mad about that, understandably so. But some of them have sued or threatened to sue some of us.”

Oransky said he has no issues letting scientific sleuths stay anonymous. They can explain their concerns in detail and show journalists their evidence. As with any source, journalists need to check out and independently confirm information they get from an anonymous source before reporting on it.

“Anonymous sources that are vulnerable — which a whistleblower is, which someone in a lab who’s pointing out problems is, especially a junior person — as long as you know who they are, your editor knows who they are, that’s my rule,” he said. “We want to understand why they want anonymity, but it’s usually pretty obvious.”

Download Oransky’s slides from his presentation.

Download Bik’s slides from her presentation.

How do science journalists decide whether a psychology study is worth covering?
https://journalistsresource.org/home/science-journalists-psychology-research/ (Wed, 15 Nov 2023 16:00:42 +0000)
A recent study finds that sample size is the only factor having a robust influence on 181 science journalists’ ratings of the trustworthiness and newsworthiness of a study. But the researchers note that, overall, these journalists are doing a ‘very decent job’ vetting research. Here’s how they do it.


Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

But there’s nuance to the findings, the authors note.

“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

More on the study’s findings

The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

For instance, one of the vignettes reads:

“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
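
The reported t-statistic and p-value in a summary like this are directly related, and the arithmetic is easy to check. Below is a minimal sketch, not part of the study and using SciPy purely as an illustrative tool, that converts the vignette’s t(548) = 3.21 into a two-sided p-value in the neighborhood of the rounded figure the fictitious authors quote.

```python
# Minimal sketch (illustrative, not part of the study): turning a reported
# t-statistic into a two-sided p-value with SciPy.
from scipy import stats

t_stat = 3.21  # t value reported in the vignette
df = 548       # degrees of freedom, from t(548)

# Two-sided p-value: probability of a |t| at least this large if there were
# truly no difference between the two groups.
p_value = 2 * stats.t.sf(t_stat, df)
print(round(p_value, 4))  # about 0.0014, which rounds to the p = 0.001 in the vignette
```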

In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

Considering statistical significance

When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

  • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
  • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
  • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
  • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

What other research shows about science journalists

A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access (OA) research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic brought scientists and science journalists somewhat closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic also provided an opportunity to explain the scientific process to the public, and to remind them that “science is not a finished enterprise,” they add.

More than a decade ago, a 2008 study published in PLOS Medicine, based on an analysis of 500 health news stories, found that when reporting on research studies, “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms.” Giving journalists time to research and understand studies, space to publish and broadcast their stories, and training in understanding academic research are some of the solutions to fill those gaps, writes Gary Schwitzer, the study author.

Advice for journalists

We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

1. Examine the study before reporting it.

Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

Beware of predatory journals. These are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

2. Zoom in on data.

Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
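
Sample size and statistical significance, both discussed above, also interact in a way worth keeping in mind. The sketch below is a rough, hypothetical illustration; the effect size, group sizes and two-sample design are assumptions chosen for demonstration, not figures from any particular study. It shows how the same modest effect can fall short of significance in a small sample and clear it easily in a large one.

```python
# Rough illustration (hypothetical numbers): how sample size changes the p-value
# for the same standardized effect size in a simple two-group comparison.
from scipy import stats

def two_sided_p(effect_size: float, n_per_group: int) -> float:
    """Approximate two-sided p-value for a two-sample t-test with equal group sizes."""
    t = effect_size * (n_per_group / 2) ** 0.5  # t is roughly d * sqrt(n/2)
    return 2 * stats.t.sf(abs(t), df=2 * n_per_group - 2)

for n in (20, 80, 320):
    # Same assumed effect size (d = 0.3); only the number of participants changes.
    print(f"n per group = {n:3d}, p = {two_sided_p(0.3, n):.4f}")
```

The takeaway is the pattern, not the particular numbers: a barely significant result from a small sample deserves extra scrutiny, and a non-significant one may simply mean the study had too few participants to detect the effect.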

3. Talk to scientists not involved in the study.

If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

4. Remember that a single study is simply one piece of a growing body of evidence.

“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

5. Remind readers that science is always changing.

“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

Additional reading

Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.

Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.

How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.

From The Journalist’s Resource

8 Ways Journalists Can Access Academic Research for Free

5 Things Journalists Need to Know About Statistical Significance

5 Common Research Designs: A Quick Primer for Journalists

5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

What’s Standard Deviation? 4 Things Journalists Need to Know

The post How do science journalists decide whether a psychology study is worth covering? appeared first on The Journalist's Resource.

]]>
5 reasons news stories about research need source diversity https://journalistsresource.org/race-and-gender/5-reasons-news-stories-about-research-need-source-diversity/ Wed, 08 Nov 2023 21:14:30 +0000 https://journalistsresource.org/?p=76637 Many journalists work hard to include people from different backgrounds in stories about local issues and events, but might not realize source diversity is also important in stories about science and research. Here are five reasons why.

The post 5 reasons news stories about research need source diversity appeared first on The Journalist's Resource.

]]>

Many news outlets aim to produce news that reflects the communities they serve. That’s why journalists often seek people from different demographic groups to include in stories about local politics, education trends, holiday shopping and other issues and events. Some newsrooms even conduct source diversity audits to get a better sense of who’s featured in their coverage and who’s getting left out.

Journalists might not always realize source diversity is equally important in stories about science and research. In some ways, it’s more crucial. Here are five big reasons why:

1. Scholars who are racial and gender minorities often provide new perspectives and approaches to the problems they study, research finds.

When a team of researchers analyzed doctoral dissertations completed in the U.S. from 1977 to 2015, they found that historically underrepresented groups innovate at higher rates. Innovation, they point out, drives scientific progress.

The researchers looked at dissertations across scientific fields — the so-called “natural” sciences such as biology and physics as well as computer science, psychology and the social sciences, which include sociology and economics.

“Scholars from underrepresented groups have origins, concerns, and experiences that differ from groups traditionally represented, and their inclusion in academe diversifies scholarly perspectives,” the researchers write in a paper published in the Proceedings of the National Academy of Sciences in April 2020. “In fact, historically underrepresented groups often draw relations between ideas and concepts that have been traditionally missed or ignored.”

Diversity within the scientific community isn’t limited to racial, ethnic or gender diversity, however. It includes differences related to culture, social class, religion, sexual orientation, geography and disability status, notes Understanding Science, a website the University of California Museum of Paleontology created to help educators explain science to students.

“While science can investigate any part of the natural world, progress is only made on those questions that scientists think to ask,” the website explains. “Our backgrounds and identities shape the questions we ask about the world.”

2. Scholars featured in news stories can help shape news coverage, which can, in turn, affect how audiences think about issues.

Dozens of academic studies demonstrate the news media’s influence on public opinion and policymaking.

“Media narratives matter because they shape and are bellwethers of solutions to public policy problems,” researchers from Harvard University, MIT and Western University write in a 2019 paper that examines differences in how newsrooms framed and how policymakers responded to the eras of increased crack cocaine use in the 1980s and opioids in more recent decades.

Historically, mainstream news outlets tended to prioritize the experiences and views of white people and men. By interviewing researchers from diverse backgrounds and amplifying their work, journalists can provide a fuller understanding of the issue or problem they’re reporting on.

Longtime education and health journalist Melba Newsome stresses that inclusive reporting is essential. She created a 10-step guide to help.

“Increasing the diversity of the sources we use and the people we feature is the first and most significant step in creating journalism that paints a more complete picture and is more relevant to audiences,” she writes in a 2021 essay in Nieman Reports.

3. News stories about research and academia should reflect the diversity of the scientific community.

Although the scientific community in the U.S. is still mostly white men, it has grown more diverse as more women and racial and ethnic minorities have pursued careers in research. At selective public universities in the U.S., diversity among tenured and tenure-track professors “modestly but persistently” increased between 2002 and 2022, an August 2023 working paper from the Annenberg Institute at Brown University reveals.

As the share of white faculty fell from 83% to 66%, the proportion of Asian, Black and Hispanic faculty rose. The percentages of assistant professors who were Black or Hispanic “have seen accelerated growth since the 2015-16 academic year.”

Nationally, a substantial proportion of scholars in some fields are women or people of color. In 2021, about one-third of people with doctoral degrees in social science who worked in academia in the U.S. were racial or ethnic minorities, according to the National Science Foundation. That year, people of color made up an even larger share of scholars working in academia with doctoral degrees in the life sciences, a broad category that includes biological, medical and agricultural sciences.

In 2022, women earned more than 60% of all doctoral degrees awarded in the U.S. in environmental science, food science and technology, criminology, anthropology, public policy analysis and several other fields, federal data show.

It’s unclear what the global scientific community looks like, in part because few countries monitor the race and ethnicity of individuals entering science-related fields, Scientific American has reported. However, in many regions of the world, scientists are tracked by gender and male scientists outnumber female scientists.

In South Korea, an estimated 87% of all authors of academic papers published between 1950 and 2020 were men, a recent analysis finds. The number was a little lower in Japan: 83%. In China, more than 3 out of 4 authors were men.

As the American research community has changed, scholars from different identity groups have formed dozens of professional and advocacy organizations, including the American Society of Hispanic Economists, 500 Queer Scientists, African American Women in Physics, American Indian Science and Engineering Society, Lesbians Who Tech, and People of Color Also Know Stuff. These organizations are excellent resources for journalists seeking expertise from researchers from diverse backgrounds.

4. Focusing on source diversity helps journalists overcome biases in source selection.

Since 2013, NPR has tracked the demographics of sources appearing on its largest weekday radio programs, “Morning Edition” and “All Things Considered.” It later added “Weekend Edition” to the project. Conducting source audits is how NPR learned its reporters tended to choose sources who shared their racial or ethnic identity.

For example, in 2015, 40% of the sources Black reporters quoted were Black, and 10% of white reporters’ sources were Black, according to a 2018 report from NPR’s public editor, Jeanine Santucci. Slightly more than half of the people Latino reporters quoted were Latino, compared with 5% of the people white reporters quoted.

When NPR began tracking source diversity, 77% of sources were white. That number fell as NPR began to feature more people of color on its shows, the news organization states on its website. In fiscal year 2021, 61% of on-air sources were white.

Journalists such as Pulitzer Prize-winning science writer Ed Yong have taken it upon themselves to audit the source diversity in their own work. Several years ago, Yong wrote about his efforts to fix the gender imbalance in his stories for The Atlantic.

He used a spreadsheet to count his sources and, over two years, doubled the percentage of women included in his coverage — from around 24% to around 50%.
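
Journalists who want to attempt a similar audit of their own work do not need special software; a simple tally is enough. The sketch below is hypothetical: it assumes you keep a spreadsheet exported as a CSV with one row per quoted source and a “gender” column, and the file name and column are illustrative rather than any newsroom’s actual tracking system.

```python
# Hypothetical sketch of a do-it-yourself source audit: count quoted sources
# by a category you track (here, gender) in a CSV you maintain yourself.
import csv
from collections import Counter

counts = Counter()
with open("my_sources.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # expects a header row that includes a "gender" column
        counts[row["gender"].strip().lower()] += 1

total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} sources ({n / total:.0%})")
```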

“How do you even know who your sources are if you’re not tracking it?” Doris Truong, director of teaching and diversity strategies at the Poynter Institute, is quoted as saying in a 2021 report about diversity and inclusion in journalism from the Global Future Council on Media, Entertainment and Sport. “If you’re not asking someone for their pronoun because you think you know it, how do you know it? If you don’t ask someone their race because you think you know it, how do you know it? If you just presume Kamala Harris is Black, you might be wrong.”

Academic publishers also are focusing on source diversity. Nature, a prominent academic journal that also provides news about research, began conducting source audits in 2021. It tracks the gender, geographic location and career stages of sources that appear in its news articles, podcasts and videos. About 90% of sources featured in Nature’s journalism are researchers.

The results of Nature’s first audit show that the majority of sources its newsroom quoted or paraphrased from April 2021 to January 2023 were men, people based in North America and Europe, and people in later parts of their careers.

“We will continue to record our data, and we aim to improve on these figures, proactively seeking out and trying to better represent voices from historically less-represented peoples and parts of the world,” the journal’s editors write in an editorial published early this year.

They note that while Nature did not collect data on sources’ race or ethnicity, the publication is “working to widen the racial and ethnic diversity of our sources to make our reporting more representative of global science.”

5. Source diversity can help journalists reach key segments of their audience and build trust in news outlets.

When public health officials want to launch a public education campaign in a marginalized community, they frequently enlist help from “trusted messengers,” or individuals whom community members consider credible sources of information. A trusted messenger might be a local physician or religious leader with deep ties to the area. Often, trusted messengers are racial or ethnic minorities.

Public health agencies and news outlets share a serious problem: Research studies over the years have repeatedly shown that many people of color don’t trust them. That can be dangerous during a pandemic, hurricane or other natural disaster, when government leaders rely on news organizations to help them get potentially life-saving information out quickly to the public.

Including trusted messengers in news stories may help news outlets reach certain demographic groups. But first, journalists need to know who the trusted messengers are for different segments of their audience.

Black adults trust scientists — especially medical scientists — more than they trust religious leaders, elected officials, the military, police officers, public school principals and business leaders to act in the public’s best interest, a 2022 report from the Pew Research Center concludes. Hispanic adults also tend to rate medical scientists, as well as scientists broadly, as more trustworthy than other prominent groups in society, a separate Pew report shows.

Generally speaking, physicians, nurses, scientists and pharmacists were the most trusted sources of health information during the COVID-19 pandemic, according to a medical brief JAMA published in March.

When researchers organized 41 focus groups with people from marginalized groups to better understand their lack of trust in news outlets, most participants said they “saw news media as not only out of touch but at times an especially harmful force that did real damage to their communities, either through neglecting them altogether or exploiting them, reinforcing harmful stereotypes, or sensationalizing in divisive and polarizing ways.”

The resulting paper, released by the Reuters Institute for the Study of Journalism in April, outlines several recommendations for building trust. One of them: Report news that more fully, faithfully and fairly captures diverse perspectives.

The post 5 reasons news stories about research need source diversity appeared first on The Journalist's Resource.

]]>
FBarchive: A searchable repository of Facebook whistleblower documents https://journalistsresource.org/home/fbarchive-new-reporting-tool/ Wed, 18 Oct 2023 13:57:41 +0000 https://journalistsresource.org/?p=76470 Journalists, researchers and the public can now access a searchable database of Facebook whistleblower documents, including internal chat threads, reports, presentations and more.

The post FBarchive: A searchable repository of Facebook whistleblower documents appeared first on The Journalist's Resource.

]]>

In September 2021, the Wall Street Journal began publishing a series of articles exposing the inner workings of Facebook and subsidiaries such as Instagram, including evidence that company insiders knew Instagram made teen girls’ body image issues worse and that Facebook leaders did little to curb recruitment activities of human traffickers and drug cartels.

Much of that reporting was based on a trove of documents and images leaked by former Facebook product manager Frances Haugen, who came forward publicly several weeks after the series published.

(In October 2021, Facebook Inc., the parent company, was rebranded as Meta Platforms Inc., an effort at least six months in the making that some commentators in the news media noted might have had the effect of blunting public backlash following Haugen’s leaks.)

In November 2021, Harvard Kennedy School’s Public Interest Tech Lab received an anonymous drop of information from the Haugen leak, comprising roughly 20,000 images and more than 800 internal Facebook documents, such as chat threads and research, starting from 2016.

As of October 18, 2023, that information is available to the public, in a searchable format, via a virtual tool called FBarchive. Users need to register for a free account to access the archive.

FBarchive is designed to help researchers, journalists and policymakers understand how, why and when decisions have been made at some of the most influential social media platforms in the world. The project is led by Latanya Sweeney, a technology professor at Harvard who heads the Public Interest Tech Lab.

Sweeney says making these internal deliberations and thought processes public will help policymakers and technology researchers discover solutions to the problem of moderating content on social media platforms that billions of people use.

“We just don’t know how to do moderation at scale — we don’t have the technology, we don’t have the know-how — and that’s something that’s true on all of these platforms where we try to do moderation,” says Sweeney, a pioneer in the field of data privacy. “So, the question is, how should we do that? Can we look at these documents to see where the fault lines are and inspire new technologies, or new technological approaches?”

How to use FBarchive

Go to fbarchive.org and hit “Enter.” This will bring you to a sign-in page. First-time visitors will receive directions to sign up for a new account via the Public Interest Tech Lab’s MyDataCan platform. Harvard-affiliated users can sign in with their university ID. All other users can click “sign up” to create a username and password.

The primary gateway to accessing the FBarchive materials is a Boolean search bar, meaning certain operators, such as “and,” “or” or “not” will either broaden or restrict results. Anyone who wants to view a document in FBarchive needs to be logged in.

The search bar is useful for researchers and reporters who already have some focus on what they are interested in — for example, specific keywords or phrases related to body image, gender issues or global conflicts. Journalists and researchers can also get a general sense of what is in the archive by using broader terms — “drug cartels” or “human trafficking,” for example.

Users can also search for information about particular people, such as executives at Meta. The FBarchive team redacted names of people who likely have an expectation of privacy — a software engineer outside of top management, for example. Names of public figures, such as C-suite executives, politicians and celebrities, are not redacted.

To help users understand what they’re reading, Sweeney and her team created a glossary of terms and phrases found in the documents. The “audience problem,” for example, is “a term used internally to describe the years-long trend of declining post numbers on Facebook,” according to the glossary.

“There’s a lot of inside Facebook language in there,” Sweeney says.  

When using FBarchive, click the book icon to see the glossary.

Users can bookmark particular documents and images, and create their own tags, which can be used to curate collections of images and documents. For example, a journalist reporting on how social media affects body image could collect relevant images and documents by adding a “bodyimage” tag to them.

Enter a phrase then click the plus button to create a tag and apply it to a document you’re viewing.

FBarchive story ideas and research angles

The FBarchive is full of unexplored investigative story ideas and scholarly research topics. To get you started, Sweeney has offered questions that need more journalistic and academic attention, including the following:

  • Is viral content more likely to increase Facebook’s revenues? How does Facebook handle this tension? Under what circumstances are the needs of human users traded for corporate revenue?

  • At least 95 countries are identified in the Facebook documents. What are the top concerns Facebook considers for people in these countries on the platform? Are the concerns and the way Facebook addresses them similar or different across countries?

  • Violence and political unrest exist around the world and are evidenced within the Facebook documents. What is the nature and extent of the role Facebook itself plays in the proliferation of these tensions, if any? What role could Facebook play to help reduce these tensions?

Informing future regulation

The stories and studies prompted by the archive, along with the content of the archive itself, could inform potential regulation.

For legislators and officials interested in regulating tech, trying to understand how Facebook functions has, so far, been like trying to see what’s going on in a “black box,” Sweeney says.

She likens FBarchive to taking an opaque case off an overheating radio and replacing it with a clear one. Everyone can now see the hot spots inside causing problems.

“I just don’t think policymakers have ever had the opportunity to understand where real leverage points were,” Sweeney says. “They always had to depend on what the tech companies themselves said was possible, not possible. And seeing the inside content gives you a much better sense of, how does this really operate?”

In these images from FBarchive, Facebook employees discuss Breitbart being included as part of a 2019 test launch of the platform’s News tab.

The post FBarchive: A searchable repository of Facebook whistleblower documents appeared first on The Journalist's Resource.

]]>
5 tips for using PubPeer to investigate scientific research errors and misconduct https://journalistsresource.org/home/pubpeer-research-misconduct-tips-journalists/ Tue, 01 Aug 2023 20:33:43 +0000 https://journalistsresource.org/?p=75862 PubPeer, a website where scholars critique one another’s work, is an excellent investigative reporting tool. These five tips will help you make the best use of it.

The post 5 tips for using PubPeer to investigate scientific research errors and misconduct appeared first on The Journalist's Resource.

]]>

PubPeer, a website where researchers critique one another’s work, has played a key role in helping journalists uncover scientific misconduct in several prominent investigative stories in recent years — including the student newspaper series that led to Stanford University President Marc Tessier-Lavigne’s recent resignation.

The platform was created 10 years ago to encourage discussion of individual academic studies and “accelerate the correction of science,” cofounder Boris Barbour, a neuroscience researcher in France, told The Journalist’s Resource. Conversations generally center on papers that already have been peer reviewed and published in academic journals. When a discussion of a paper begins, PubPeer automatically invites an author to respond, sometimes spurring lengthy, detailed exchanges.

Comments are public, allowing journalists to observe part of the scientific process and collect information that could become the basis of an important news story. The vast majority of comments, however, are made anonymously, allowing scholars to raise questions and concerns without risking retaliation, Barbour noted.

“Although all forms of scientific discussion are welcome on PubPeer, the site has become known as the channel by which an astonishing volume of research misconduct has come to light,” he wrote in an email.

Last year, Stanford student journalist Theo Baker discovered allegations of altered images in Tessier-Lavigne’s research on PubPeer and began to investigate. Had journalists checked the website earlier, they would have found criticisms dating back to at least 2015, Baker writes in a July 30 essay for The New York Times.

 “Reporters did not pick up on the allegations, and [academic] journals did not correct the scientific record,” he writes. “Questions that should have been asked, weren’t.”

While PubPeer is an excellent reporting tool for journalists across beats, it’s crucial they use the information they find there responsibly. For example, don’t assume a comment is accurate or a calculation is correct because it came from an academic. Likewise, don’t shrug off serious accusations because the people making them don’t use their real names.

The platform is relatively easy to navigate. On the homepage, you’ll find a list of discussions organized according to their most recent comment.  You can search the PubPeer database using an author’s name, key words, a paper’s title or a paper’s numeric or alphanumeric identifier, such as a Digital Object Identifier, or DOI.

This tip sheet aims to help journalists make the most of the website. We created the five tips listed below based on advice several journalists familiar with PubPeer shared with us during phone and email interviews.

Keep reading to learn more from Julia Belluz, a former senior health correspondent at Vox who’s working on a book about nutrition and obesity; Stephanie M. Lee, a senior reporter at The Chronicle of Higher Education who covers research and the academic community; and Charles Piller, an investigative journalist for Science magazine and founding board member of the Center for Public Integrity.

1. Install the PubPeer browser extension.

PubPeer offers browser extensions for the four major web browsers. Once installed, the extension will alert you if PubPeer comments exist for any research paper you’re reading online.

There’s also a PubPeer plugin for Zotero, a web tool some journalists use to organize and share research. The plugin lets you see whether there are PubPeer comments on any of the papers saved in your Zotero collection.

2. Don’t publish a news story simply to point out people have made negative comments about a research study. 

The fact that an individual study has drawn a certain number of probing comments on PubPeer is not, on its own, newsworthy, Lee says. She uses PubPeer as a barometer of sorts, a quick way to get a cursory read on the credibility of certain researchers and studies.

“Basically, when I start looking into a group of scientists or a scientist whose work I’m interested in for whatever reason, PubPeer is the first place I’ll go to see if questions have been raised,” says Lee, who received the 2022 Victor Cohn Prize for Excellence in Medical Science Reporting for her investigative reporting about scientific misconduct.

Because PubPeer allows the whole research community to weigh in on a paper, it’s not surprising some scholars have spotted problems peer reviewers missed. The peer review process typically involves a small number of academics whose evaluation is limited to, for example, making sure the authors write clearly, their research design is sound, they cite other researchers’ work correctly and that what they have written corresponds with the information presented in their tables and figures.

“We know peer review can fail to catch errors or even outright fraud in research before it’s published,” Belluz wrote.

The website, she added, has become “a place where scientists go to whistleblow about problematic research, so anyone interested in scientific misconduct might find sources or ideas for stories on PubPeer.”

3. Verify all claims made on PubPeer before relying or reporting on them.

Piller describes PubPeer as “more of a tip sheet than an authoritative source.” 

“I’m not casting aspersions on it — it’s great,” he says. “It’s extremely valuable, and a lot of information on it is correct. The problem is, there’s a lot of anonymous sources.”

Barbour, who helped create PubPeer, estimated that “maybe 90%” of comments are posted anonymously.

 “Any criticism of the work of colleagues tends to be badly received, and you never know who will be making — often anonymous — decisions about your future,” Barbour wrote in an email to The Journalist’s Resource. “So, understandably, people worry. Furthermore, even an implicit (but convincing) suggestion of misconduct immediately creates an extraordinarily high-stakes situation. A career might be on the line. It’s unsurprising that those threatened would fight with every weapon at their disposal.”

Piller urges journalists to thoroughly verify everything they find on PubPeer before using it. He recommends enlisting the help of multiple experts with deep experience, including scholars with subject-area expertise and technical experts such as Elisabeth Bik, a well-known image analyst.

Bik helped Baker, the Stanford student journalist, with his investigation. Last year, she helped Piller look into possible image tampering in dozens of Alzheimer’s studies.

“You need to treat anonymous postings [on PubPeer] the same as you would treat anonymous postings of any kind,” Piller notes. “You have no way of knowing if that person has some sort of ulterior motive.”

4. Use caution when describing the likelihood that researcher misconduct happened.

Piller warns journalists to carefully choose words and phrases to convey how certain they are that a study’s findings are erroneous or that a researcher has intentionally participated in some form of misconduct. It can be difficult to establish with complete confidence that data or images have been fabricated or manipulated, especially if journalists lack access to the original data and cannot compare the images that appear in an academic article against the unedited, uncropped, high-resolution, earliest version of those images.

Piller suggests using language that reflects some level of uncertainty — for example, “apparent fabrication” or “potential errors.”

“You can wreck someone’s career, so you have to be really careful and really fair-minded about it, obviously,” he says. “Even if the evidence appears to be incontrovertible, I would use qualifiers.”

He stresses the importance of journalists explaining and making available the evidence they have found.

“Show evidence,” he advises. “It’s better to let the evidence speak for itself most of the time.”

5. No matter how small a role a researcher plays in your story, check PubPeer before including them.

“If you’re looking at a scientist as a source — a credible source of authority — it’s good to know whether that person is regarded as above reproach,” Piller points out.

Many journalists already know they need to look into possible conflicts of interest.

 “Another [red flag] would be credible evidence a person has engaged in data manipulation or image manipulation or other ways of operating scientifically that would cause you to want to question their credibility as a source for a story,” Piller adds.

Lee’s advice: Make a habit of checking PubPeer for the researchers and studies you’re considering including in your coverage, keeping in mind that a lack of comments does not mean there aren’t problems.

“If [journalists] come across research that is influential or controversial in whatever field they’re covering, it’s a good, proactive step to plug the URL into PubPeer and see what they get,” Lee says.

Because PubPeer’s search function does not work for all URLs, Barbour recommended using a DOI or other unique identifier.

The post 5 tips for using PubPeer to investigate scientific research errors and misconduct appeared first on The Journalist's Resource.

]]>
Story ideas from the Federal Reserve’s Beige Book: July 2023 https://journalistsresource.org/economics/beige-book-story-ideas-july-2023/ Wed, 26 Jul 2023 14:58:42 +0000 https://journalistsresource.org/?p=75839 From a booming bicycle industry in Little Rock, Arkansas to a central Minnesota restaurant that bought an apartment building for its employees to live in, our semi-regular rundown of the Federal Reserve’s Beige Book is full of fresh story ideas.

The post Story ideas from the Federal Reserve’s Beige Book: July 2023 appeared first on The Journalist's Resource.

]]>

The U.S. central banking system, the Federal Reserve, is a data-heavy organization. Economists at the bank regularly crunch numbers to inform national decisions, such as setting target interest rates. Meanwhile, economists at the Federal Reserve’s district banks — there are 12 across the country — analyze local and regional data to provide research insights on specialized topics, such as economic inequality.

Eight times yearly, the Federal Reserve’s Beige Book offers a high-level, anecdotal glimpse of current economic sentiment in each of the central bank’s 12 regions. For journalists, it can serve as an extensive tip sheet with insights on local, regional and national story angles. Here, we extend our semi-regular series of Beige Book rundowns with fresh story ideas from the July 12 edition.

The Beige Book was first publicly published in the 1980s with a beige-colored cover. Find the archives here.

The Beige Book is especially valuable for business journalists, or any journalist assigned to cover business topics, because national and regional economies can change rapidly. The book is compiled from reports from Federal Reserve district directors and interviews and surveys of business owners, community groups and economists.

The book refers to individuals surveyed as “contacts” and they are quoted anonymously. There’s no particular number of contacts needed to produce the Beige Book, but each release is based on insights from hundreds of contacts.

Economists and analysts at district banks seek to cultivate contacts that can give a broad economic view — think the head of a trade group who regularly talks with many business owners — along with other contacts representing a variety of industries and company sizes.

Research from the Federal Reserve Bank of St. Louis suggests the anecdotes in the Beige Book accurately reflect what’s happening with employment and inflation in the U.S. economy.

After performing a simple textual analysis of every Beige Book from January 2000 to April 2022, the authors find, for example, that mentions of rising prices tend to closely track with official inflation data.

“Of course, the Beige Book — with its emphasis on qualitative and anecdotal information — is written with the belief that those anecdotes provide a deeper understanding of the economy, which simple word counts cannot capture,” write St. Louis Fed senior economist Charles Gascon and senior research associate Devin Werner in their June 2022 analysis.
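
Here is a rough illustration of the kind of simple word count the researchers describe. It is only an illustration: the phrase list, file name and approach are assumptions, not the authors’ actual method.

```python
# Illustrative sketch (not the researchers' actual method): count price-related
# phrases in the plain text of a single Beige Book release.
import re

PRICE_PHRASES = ["rising prices", "price increases", "higher prices", "inflation"]

def count_price_mentions(text: str) -> int:
    """Count case-insensitive occurrences of the phrases listed above."""
    lowered = text.lower()
    return sum(len(re.findall(re.escape(phrase), lowered)) for phrase in PRICE_PHRASES)

# Assumed local file containing the text of one Beige Book release.
with open("beige_book_july_2023.txt", encoding="utf-8") as f:
    print(count_price_mentions(f.read()))
```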

The July 12 release of the Beige Book compiles information gathered in May and June of 2023. Housing demand remained elevated nationally, despite historically high mortgage rates. Labor demand was strong for high-skilled workers as well as in the health care, transportation and hospitality industries.

Inflation has cooled recently, and some businesses were reluctant to raise prices as customers slowed their spending. Other firms reported steady consumer demand, allowing them to sustain their profit margins.

Keep reading for quick summaries and story ideas from the latest edition of the Beige Book.

District 1, Boston

Newfound efficiency on Cape Cod

Hotel rooms in Boston were pricey, even considering spring and summer are usually busy — rates were up 12% year-over-year. In Connecticut, tenants at one high-end office building declined to re-sign their leases once they expired, forcing the building into foreclosure.

Single family and condo sales slowed across the district because of a lack of inventory and rising interest rates. Even with “tepid sales,” housing demand is strong relative to the lack of supply, meaning “prices continued to rise, even as the pace of appreciation slowed gradually in recent months.” The state of play in the District 1 residential housing market won’t change until interest rates go down, contacts said.

One semiconductor producer expected a strong 2024, with higher demand coming in part from the spread of consumer products that use artificial intelligence.

Story idea: Some restaurants and hotels on Cape Cod have “achieved efficiencies,” meaning they are meeting demand while getting by with fewer staff after two straight summers of worker shortages. Talk to business owners in the New England leisure industry to find out exactly how they have managed to improve efficiency with fewer workers — and what do workers think? Are they working overtime? Did technological investments reduce their hours? There could be an interesting business enterprise story here.

District 1 covers Maine, Massachusetts, New Hampshire, Rhode Island, Vermont and most of Connecticut.

District 2, New York

Used cars underwater

The tourism industry in the Adirondack Mountains got an influx of seasonal workers thanks to the lifting of COVID-19-related visa restrictions. Prices for inputs — the components businesses use to make finished goods — remained high due to inflation, though prices receded in some industries. A contact in the construction industry said the prices of doors, windows and other key building materials were improving.

Residential rents in New York City “reached new highs” with upstate New York renters also experiencing an increase in housing costs. Small and midsized banks saw reduced demand for loans of all types.

Story idea: Used car values that spiked during the pandemic have come back to earth in District 2, with used car sales “subdued.” The loan-to-value ratio on some outstanding used car loans reached 120%, according to contacts, meaning the value of the loan is greater than the resale value of the car. A high loan-to-value ratio can be a precursor to delinquency. Talk with auto financers and loan holders and see if there is a story to be told about underwater loans in the used car financing market.    

District 2 covers New York, western Connecticut, northern New Jersey, Puerto Rico and the U.S. Virgin Islands.

District 3, Philadelphia

211 specialist solutions

Few firms across District 3 carried out widespread layoffs, but some firms across industries did conduct targeted layoffs, were more selective in their hiring and reduced employee hours. Still, there were labor shortages in certain industries and jobs, such as housekeeping staff and cooks. Consumers overall were “more careful in their spending.”

Banks in the district saw a hike in mortgage origination, though home equity loans plateaued. Businesses faced tighter credit in the wake of bank failures during the spring and contacts “described an environment of elevated caution in which most banks want to extend credit only to customers with whom they already have a relationship.”

Story idea: In New Jersey and Pennsylvania, calls increased to 211, a phone number for finding government and nonprofit assistance with housing costs, utility bills and other resources. Nearly 95% of people in the U.S. have access to 211 across every state and Puerto Rico. Spend a day with specialists who field 211 calls for a solutions-based journalism story. Why do 211 specialists do this work? What are their personal stories? How does the process of connecting people in need with available resources play out? What do these specialists need to do their jobs better? The United Way operates 211 in New Jersey and Pennsylvania.

District 3 covers most of Pennsylvania, southern New Jersey and Delaware.

District 4, Cleveland

Food pantries in crisis

Business owners in District 4 were cautiously optimistic about their prospects over the rest of the year and many saw less of a chance than in the first half of the year that the U.S. would fall into a recession.

Consumer spending overall was strong, though one “large general merchandiser” said the reduction of pandemic-related Supplemental Nutrition Assistance Program benefits was constraining household budgets. Some firms paused expansion projects, except those subsidized with government funds, according to one banker.

Story idea: Since March, the number of families needing help with food increased 35%, according to contacts at District 4 community organizations. One contact at a food pantry said, “‘people are experiencing food insecurity more now than I have seen in my seven years with the organization.’” Are local government officials aware of this increase? If so, what are they doing to help constituents get enough food on their tables?

District 4 covers Ohio, eastern Kentucky, western Pennsylvania and northern West Virginia.

District 5, Richmond

Toothaches in dental care

Worker availability varied across District 5. One software company had an easier time hiring IT workers, but a tour bus company couldn’t find enough drivers and mechanics. Some businesses increased prices because of the rising cost of capital, driven by recent Federal Reserve interest rate hikes, while other firms said they could not pass along the full costs of capital to consumers, lowering their profit margins.

In residential real estate, some buyers were faced with bank appraisals not meeting the sales price, meaning they had to renegotiate with the sellers to a lower price, pay the difference in cash, or walk away from the purchase.

Story idea: A line from the District 5 report stands out: “A dental laboratory reported not meeting their numbers for the past six months due to a significant slowdown in the dental market.” Are District 5 residents seeking less dental care for some reason? Are they forgoing more expensive treatments, such as bridges and dentures, which labs usually fulfill? See what your dentist has to say and if they can put you in touch with others in the dental care field to find out what’s going on.

District 5 covers Virginia, Maryland, the Carolinas, most of West Virginia and the District of Columbia.

District 6, Atlanta

A glut of cheese

A downturn in domestic tourism was offset by business and international travel to District 6. Beach resorts dealt with falling occupancy while “those hoteliers reported some diminished pricing power,” meaning they had less ability to set room rates at higher levels while maintaining demand.

Rail freighters saw “significant” declines in year-over-year volumes while shipping ports likewise reported slowdowns in container traffic, “owing to inventory destocking by retailers and weaker global demand.”

The citrus market had low crop yields and weak profits, but cattle farms saw strong sales “as demand for beef remained high amid low supply.”

Story idea: Demand for milk fell due to “oversupplies of cheese.” What happened to make cheese producers supply too much cheese? This could be an interesting case study and explainer on how business inventories work — firms have imperfect information to predict their economic futures. Sometimes they stock up, anticipating higher demand, but that demand does not materialize. These decisions often have consequences that spill over and affect other firms, such as milk producers in this case.

District 6 covers Alabama, Florida, Georgia, eastern Tennessee, southern Louisiana and southern Mississippi.

District 7, Chicago

Community development capital crunch

Amusement parks and tourist attractions saw an uptick in activity, with contacts in those industries reporting consumers in general unlikely to give up their leisure plans, instead cutting back elsewhere in their nonessential spending, if needed.

Some retail contacts reported having a bit too much product in stock, particularly those selling apparel, beauty items, and sports and outdoor goods. In residential real estate, cash deals were increasingly common as high interest rates reduced the number of buyers, one contact in Iowa said. Drought put farm crops behind schedule, especially corn, but it was “too soon to panic,” according to one agricultural contact.

Story idea: Here is a story on how the high cost of borrowing money is keeping neighborhoods from flourishing: Community Development Financial Institutions (CDFIs) in District 7 report having trouble providing affordable loans to low- and moderate-income small businesses and homebuyers. CDFIs are usually banks or credit unions that focus on providing capital to low-income communities. Reach out to CDFI officers to find out what’s happening on the ground and whether they can put you in touch with clients to interview.

District 7 includes Iowa, most of Indiana, northern Illinois, southern and central Michigan and southern Wisconsin.

District 8, St. Louis

Bicycle boom in Little Rock

Across industries, labor costs were up and certain businesses that work on tight margins, such as supermarkets, couldn’t fully pass those costs onto customers. One grocery store contact “reported that they would pass about 25%-33% of higher costs to consumers.”

In Arkansas, one retailer reaped smaller profit margins as consumers focused on essential groceries rather than merchandise with higher margins. Also in Arkansas, a boat seller saw higher demand for low- and high-end boats while “sales for middle-market boats” had “collapsed.” In Memphis, “a major [electric vehicle] manufacturing project” has meant more business for real estate and construction firms in the area.

Story idea: Has your news outlet covered the bicycle boom happening in Little Rock, Arkansas? “The Little Rock bicycling industry has seen significant growth in recent quarters, generating more than $150 million in total economic impact from jobs to tourism to taxes.” Interview local bike shop owners and see if their sales mirror overall growth. Why are people in the region taking up bicycling? Are recent infrastructure improvements part of the reason for this boom? Is there more growth in road biking than mountain biking? Is this industry also attracting dollars from outside the region?

District 8 includes Arkansas, southern Illinois, southern Indiana, western Kentucky, northern Mississippi, central and eastern Missouri and western Tennessee.

District 9, Minneapolis

New kind of company town

While some firms have handed down layoffs in Minneapolis, most workers who lose their jobs “‘are doing OK finding new jobs on their own,’” said one contact in workforce development. Some professionals in the area were thinking of moving out of state for a time to take advantage of remote work policies.

Vacancy rates in downtown Minneapolis remained high. Two office towers there recently sold “at steep discounts.” Heavy equipment sales were down in District 9, with “more customers choosing to repair rather than replace equipment due to higher financing costs.”

Story idea: Here is something you don’t hear every day: “A restaurant in central Minnesota reported that it bought an apartment building to provide workers with nearby housing.” Whether this is a trend or one-off project, the details could be fascinating — it’s reminiscent of the company towns built during the Industrial Revolution. Unfortunately, “central Minnesota” is a large area, so it would take some digging through property records to find the restaurant. Analysts working at District 9 might be willing to provide more geographic precision.

District 9 includes Minnesota, Montana, the Dakotas, Michigan’s Upper Peninsula and northern Wisconsin.

District 10, Kansas City

Small business credit woes

Firms in District 10 are generally not expecting to lay off workers, but some are reducing or pausing hiring. Manufacturers in particular slowed their production, with some expecting business conditions to “soften in the coming months, noting continued weakening in order back logs and a further deterioration in demand.”

Drought reduced grass and feed available for cattle producers, while 15% of corn and soybean acres and 30% of winter wheat acres “were in poor or very poor condition” across District 10. The energy sector saw a large decline in the number of active rigs, “as weak oil and gas prices continued to squeeze profitability.”

Story idea: Several small business contacts reported having missed credit card payments or paying only the minimum due, “which has negatively impacted their credit reports.” Some have even turned to online lenders and other non-traditional financial institutions that usually offer loans with less-than-favorable interest rates. Find out whether this trend holds true for local businesses in your coverage area — and whether it portends coming small business closures.

District 10 includes Colorado, Kansas, Nebraska, Oklahoma, Wyoming, western Missouri and northern New Mexico.

District 11, Dallas

More than meets the eye

There was steady hiring in the service sector, but job growth “slowed to a crawl” in manufacturing. Bankers are “pessimistic,” with an eye toward business activity declining during the rest of the year and “an increase in nonperforming loans,” where borrowers miss payments for several consecutive months.

A June survey of more than 350 business executives, conducted by the Federal Reserve Bank of Dallas, found 44% were understaffed and wanting to hire, while 12% were understaffed but not expecting to hire. Retail sales were down for clothing stores and stores that sell food and beverages, while sales were up at pharmacies, building material retailers and garden supply stores.

Story idea: Houses are being built on time, by and large, but some sites are being slowed by a shortage of transformers. These are devices that step electrical voltage up or down; new housing developments, for example, need them to convert high-voltage power from utility lines into the lower-voltage power used in homes. Here is a chance to localize an international story. Transformers are made using copper wire, and there is a global copper shortage expected to last for the rest of the decade.

District 11 includes Texas, northern Louisiana and southern New Mexico.

District 12, San Francisco

A clogged arts pipeline?

While tourism kept retail sales strong in Utah and Hawaii, in other parts of District 12 consumers increasingly turned to low-cost goods as state-level fiscal stimulus measures ended. Leasing for office space in downtown districts was hurt by slow sales at physical retail locations as well as “safety concerns,” according to one contact in Northern California.

“Expanded ocean freight capacity” meant more exports from District 12 to overseas buyers, “but lingering backlogs, the war in Ukraine, and a strong dollar limited access to some international markets.” In one example of the push and pull of forces inside and outside of markets, agriculture production got more costly because of higher labor and insurance costs, but more rain than usual “somewhat offset irrigation costs.”

Story idea: Striking TV and movie workers attract national headlines, but art institutions face “significant headwinds due to smaller audiences and declining donations.” The economy has been surprisingly resilient during this high inflationary period, meaning there should still be art patrons. Could a lack of arts education in schools be behind shrinking audiences for the arts? Many people learn about the arts in their elementary, middle and high school curricula. The California Arts Education Data Project, part of a nonprofit that advocates for more arts education in the state, has information and analysis to get you started on this story. There is also academic research on the topic — for example, nationally, public charter high schools are least likely to offer arts education courses, finds a 2020 paper published in the Arts Education Policy Review.

District 12 includes Alaska, Arizona, California, Hawaii, Idaho, Nevada, Oregon, Utah, Washington, American Samoa, Guam and the Northern Mariana Islands.

The post Story ideas from the Federal Reserve’s Beige Book: July 2023 appeared first on The Journalist's Resource.

Academic journals that give journalists free access https://journalistsresource.org/media/academic-journals-journalists-free-access/ Wed, 12 Jul 2023 12:33:36 +0000 https://journalistsresource.org/?p=75710 Some journalists might not realize that many academic journals let them bypass their paywalls. We show you which ones and how to set up free accounts.

When we surveyed our audience in 2021 to ask why journalists don’t use academic research more often, 60% of journalists who responded cited academic journal paywalls as a barrier. 

Some journalists might not realize that several of the world’s largest journal publishers will give them free online access to thousands of journals and other research-related resources. They simply have to ask, or complete a short form to register for a special account.

Other publishers and research groups, including the National Academy of Sciences and National Bureau of Economic Research, also provide complimentary access.

To help journalists find them quickly, we’ve listed a number of these organizations below, along with information on how to set up free accounts. We’ll add to this list as we learn of other publications that let news media professionals bypass their paywalls.

If you’re interested in reading articles in journals that charge journalists to read or download them, keep in mind there are other ways to get free copies, legally. We outline them in our tip sheet, “8 Ways Journalists Can Access Academic Research for Free.”

To better understand the different types of academic research, check out our tip sheet, “White Papers, Working Papers, Preprints, Journal Articles: What’s the Difference?”

———-

American Economic Association

The American Economic Association gives journalists free, online access to its eight journals, including the American Economic Review, one of the country’s oldest and most respected journals in economics. To ask for press access, contact Doug Quint at 412-432-2301 or dquint@aeapubs.org.

A bonus: After setting up a press account, journalists can sign up for customized email alerts according to topic and journal.

American Educational Research Association

The American Educational Research Association offers journalists complimentary digital subscriptions to its seven journals, including its flagship publication, the American Educational Research Journal. To set up an account, send an email to communications@aera.net.

Worth noting: Journalist subscribers can choose to receive email alerts when new studies are posted to a journal’s website, even before they appear in the print edition.

American Medical Association

The JAMA Network comprises 13 journals published by the American Medical Association. The Journal of the American Medical Association, commonly known as JAMA, is “the most widely circulated general medical journal in the world,” according to its website. Journalists can create a free account to access articles across the JAMA Network.

Don’t forget: Journalists must renew their accounts every year. If they don’t, they will lose access.

Elsevier

Elsevier, one of the world’s largest science publishers, offers journalists free access to a variety of research-related tools and databases. One is ScienceDirect, which Elsevier describes as its “leading platform offering the full-text scientific, technical and medical content of over 14 million publications from nearly 3,800 journals and more than 35,000 books from Elsevier, our imprints and our society partners: a quarter of the world’s published academic content.”

Send an email to newsroom@elsevier.com to request access.

Good to know: Elsevier distributes a biweekly newsletter for journalists called the Elsevier Research Selection to update them about new research.

Massachusetts Medical Society

The NEJM Group, a division of the Massachusetts Medical Society, publishes The New England Journal of Medicine, which describes itself as the “most widely read, cited, and influential general medical journal and website in the world.” Journalists can apply for one of two levels of free access — “advance” access and “regular” access.

With advance access, journalists can read the newest issue of The New England Journal of Medicine, published weekly, in advance. They also can read past issues online. To qualify, journalists must produce news content on a daily or weekly basis and agree to the publication’s embargo policy in writing. Use this link to apply.

With regular access, journalists can read the journal’s current and past issues. To qualify, journalists must produce content regularly for a bi-monthly, monthly or quarterly publication. Use this link to apply.

National Academy of Sciences

The Proceedings of the National Academy of Sciences, often referred to as PNAS, is one of the most prestigious general-science journals. To request a free online subscription, contact the PNAS News Office.

Worth mentioning: Journalists can apply for access to embargoed PNAS materials through the EurekAlert! website.

National Bureau of Economic Research

Everyone gets to read and download up to three working papers a year from the National Bureau of Economic Research’s website. Journalists must register to qualify for unlimited downloads. Fill out this form to request access.

Remember: NBER studies have not been peer reviewed. The organization circulates working papers for comments and discussion, and many are eventually published in academic journals.

Sage Journals

Another of the world’s largest journal publishers, Sage Journals publishes more than 1,100 academic journals focusing on disciplines such as gender studies, media studies, urban planning, social work and engineering. Ask about free access to journal articles by contacting Sage’s communications team at pr@sagepub.co.uk.

Keep in mind: Sage publishes Sage Open, which is an open access “mega-journal” — a journal that publishes hundreds to thousands of studies per year and makes them available for free to anyone.

Springer Nature Group

The Springer Nature Group provides journalists with free online access to more than 3,000 journals published on Springer Link and nature.com, which cover research disciplines such as science, technology, medicine and the social sciences. To register for a free account, complete this form.

Important to know: Journalists need to submit documents as part of the registration process, including an email from an editor confirming their status as a journalist. Once registered, they do not have to re-register, provided their work email address remains the same.

Taylor & Francis

Taylor & Francis, another world leader in academic publishing, publishes more than 2,700 journals featuring research on public health, education, the environment, technology and other fields. Journalists can apply for a digital Press Pass, which lets them access 50 online articles for free. Sign up for a Press Pass.

Note: Besides academic articles, journalists can also apply for free access to other types of research content, including books, embargoed books and press releases.

The post Academic journals that give journalists free access appeared first on The Journalist's Resource.

8 ways journalists can access academic research for free https://journalistsresource.org/media/academic-research-free-journalists/ Fri, 07 Jul 2023 17:40:00 +0000 https://live-journalists-resource.pantheonsite.io/?p=57444 A lot of academic research exists behind paywalls. We outline eight ways reporters can get free access to high-quality scholarship.

This tip sheet outlining ways journalists can access academic research for free, originally published in September 2018, has been updated with new information.

Here at The Journalist’s Resource, we’re big fans of research — especially the peer-reviewed kind. We know academic research is one of journalism’s most valuable tools for covering public policy issues and fact-checking claims.

Unfortunately, journalists often have trouble accessing studies published in academic journals. Many journals keep scholars’ work behind paywalls, and subscriptions can be prohibitively expensive for newsrooms and individual journalists. For example, a subscription to the Proceedings of the National Academy of Sciences, a journal of the National Academy of Sciences, is more than $200 a year for one person for personal use only. There are thousands of journals worldwide.

Resourceful journalists find other ways to get that information. Here are eight of them:

  1. Go to the library.

Public libraries often subscribe to academic journals and anyone with a library card can read them. The good news for busy journalists is some libraries allow their users to access online databases of peer-reviewed research from any location.

U.S. colleges and universities provide online access to academic journals through their academic libraries. State university libraries generally are open to the public. Private institutions often extend library privileges to alumni.

  2. Ask academic journals for a free account.

Many of the most popular journals give journalists complimentary access, although some limit free accounts to journalists covering specific topics or beats. The American Economic Association (AEA), for instance, offers news media professionals free access to all eight of its journals, including the American Economic Review. You can request an account through the association’s press page.

“I don’t think it’s something that’s widely known, but it’s a message we want to get out there,” says Chris Fleisher, the AEA’s web editor. “We want journalists to know they can access our journals if they like.”

It’s worth noting that many journals will share embargoed copies of research articles with journalists and alert them to new research on a topic of interest. Contact the journals you’re interested in to learn more.

  3. Search open access journals and platforms.

A growing number of scholarly journals known as open access, or OA, journals offer their online content for free to the public. Be aware that while there are many high quality OA publications, some engage in unethical practices. A trusted source of reputable OA journals is the Directory of Open Access Journals.

Examples of top OA journals include PLOS One, the world’s first multidisciplinary OA journal, and BMC Biology.

Several online platforms also allow the public to access research at no cost. One is Unpaywall.org, a free database of almost 48 million free-to-read academic articles.

  4. Check Google Scholar.

Google Scholar is a web search engine that indexes research from various sources. Often, Google Scholar will provide PDF documents of research articles in its search results. However, some PDFs contain earlier versions of an article, including drafts that have not been peer reviewed or published.

While these earlier versions can be helpful, it’s important to contact the author before reporting on their findings. The findings highlighted in working papers are preliminary and may differ substantially from the final version published in a journal article. (To better understand the differences between a working paper and an academic article, check out our explainer.)

  5. Install browser extensions.

Browser extensions can help you check the web for free versions of academic articles. The Unpaywall browser extension gathers content from more than 50,000 journals and open-access repositories worldwide. The Open Access Button searches “millions of articles” from sources that include “all of the aggregated repositories in the world, hybrid articles, open access journals, and those on authors’ personal pages,” according to its website.

If the Open Access Button does not find free versions of the articles you’re looking for, it will contact the authors and ask them to share their work by putting it into an open access repository.

  6. Reach out to the people who did the research.

If you find a research article you’re interested in reading but can only access the abstract online, call or e-mail the authors and ask for a complete copy. Journal abstracts generally include contact information for the authors or, at the very least, an e-mail address for the corresponding author.

Researchers usually will share copies of their work with journalists. If a scholar shares a pre-published version of an academic article, be sure to ask how closely it resembles the published version and whether the findings are the same.

Another option: Researchers often post links to their academic research on their personal websites. Those who work for colleges and universities tend to list their published articles on their faculty pages.

  7. Call the media relations office.

The media relations office of a university or research organization can help you track down a copy of an article written by one of its researchers. It can help you reach the authors as well.

The main drawback: While media relations offices generally are sensitive to newsroom deadlines, they may be busy helping many journalists at the same time. It’s often faster and easier to reach out to authors directly. If you have trouble getting researchers to respond, media relations staff members are usually willing to give them a nudge.

Universities also send out press releases promoting new academic research conducted by their faculty and research centers. Ask how to receive alerts about topics key to your beat.

  8. Sign up for newsletters and press releases from organizations that promote the scholarly work of various colleges, universities, research centers and other groups.

A quick way to get information about new research from a bunch of different research entities is by signing up for emails from organizations such as Futurity and EurekAlert!

Futurity is a partnership of 47 universities in the U.S., Canada, Europe, Asia and Australia. It highlights the work of scholars in four broad topic areas: culture, health, environment and science.

EurekAlert! is a news-release distribution platform run by the American Association for the Advancement of Science. It hosts news releases from higher education institutions, medical centers, government agencies, academic journal publishers, corporations and other groups involved in research across all fields.

If you’re looking for more help covering academic research, check out the “Know Your Research” section of our web site. We’ve created a series of tip sheets to help you get it right, whether you’re trying to make sense of key terms such as “statistical significance” and “standard deviation” or need guidance on spotting weaknesses in research and determining whether scholars have reached a scientific consensus on an issue. 

The post 8 ways journalists can access academic research for free appeared first on The Journalist's Resource.

Using ‘per capita’ to describe data: 4 things journalists need to know https://journalistsresource.org/economics/per-capita-right-wrong-journalists-tips/ Fri, 16 Jun 2023 17:07:45 +0000 https://journalistsresource.org/?p=75463 An economist and a statistician help us explain the right and wrong ways to use 'per capita' to describe data related to economics, public health and other news topics.

If you report on economic research and government reports, you’ve almost certainly encountered the statistical term “per capita,” a Latin phrase that means “by heads” or, essentially, “per person.”

A country’s gross domestic product, a popular measure of economic health, often is expressed in terms of per capita — the value of goods and services produced by that nation per individual within the population. The United States’ GDP was just under $26.5 trillion during the first quarter of 2023, according to the U.S. Bureau of Economic Analysis. The GDP per capita: $79,148.            

Scholars, statisticians and government officials also use the term when examining data for a range of policy issues, including public health, education funding, local crime and public transportation usage. A researcher might, for example, track per capita sugar consumption in a certain city to gauge how much sugar each resident consumes in a given year, on average.

Although per capita is a specific type of rate, news outlets sometimes use it incorrectly to describe other rates. Also, journalists frequently report on data without breaking it down per capita, even when doing so would provide audiences with crucial context.

One of the most common mistakes news outlets make is using “per capita” to describe a number per 1,000, 10,000 or 100,000 people. For instance, if a government report estimates the number of police officers working in a particular state is 2.3 officers for every 1,000 residents, it would be incorrect to report this as the number of officers per capita. The per capita rate would be 0.0023 officers per resident.
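To make the distinction concrete, here is a minimal Python sketch using the hypothetical police-staffing figure above. The only step is dividing the per-1,000 rate by 1,000 to get the per-person average.

```python
# Hypothetical figure from the example above: 2.3 officers per 1,000 residents.
officers_per_1000 = 2.3

# "Per capita" means per person, so divide the per-1,000 rate by 1,000.
officers_per_capita = officers_per_1000 / 1000

print(f"{officers_per_1000} officers per 1,000 residents")
print(f"{officers_per_capita:.4f} officers per capita (per person)")  # 0.0023
```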

To help us explain the right and wrong ways to use the term in news coverage, we teamed up with two experts — economist Steve Landefeld, the former, longtime director of the U.S. Bureau of Economic Analysis, and statistician Jing Cao, a professor in the Department of Statistics and Data Science at Southern Methodist University.

They suggest journalists keep these four things in mind:

1. Remember that per capita numbers represent averages.

Per capita figures represent the average number of something for all people within a population, Landefeld and Cao explain.

If you were to report on per capita health care expenditures for each state, for instance, you’d provide audiences with the average amount of money each person spends on health care, by state. Per capita health care spending in 2020 ranged from $7,522 in Utah to $14,007 in New York, a report from the U.S. Centers for Medicare & Medicaid Services shows.

2. If a research paper or government report doesn’t break down a number per capita, do the calculation yourself by dividing the number you’re interested in by the target population.

Government agencies in the U.S. report many types of data on a per capita basis. But if they don’t do that for the data you’re examining, Landefeld and Cao recommend doing the math yourself. To calculate per capita, take the number you’re interested in and divide it by the population involved.

“It’s a pretty straightforward concept,” Cao says. “You just need to know the population.”

Here’s a quick demonstration. Let’s say a city government reports that its residents threw away 55,200 tons of garbage in 2021. To find out how much garbage each resident threw out, on average, you’ll need to look up or ask for the city’s population that year. Let’s say 710,000 adults and children lived in that city in 2021.

Here’s the per capita calculation:

Tons of garbage per capita = 55,200 tons of garbage/710,000 residents = 0.08

To help audiences better visualize the amount thrown out, consider reporting it in pounds per person, instead of tons. In the U.S., there are 2,000 pounds in a ton. In this city, each resident threw away an average of 160 pounds of trash in 2021.
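If you prefer to script the arithmetic, here is a short Python sketch of the same calculation, using the made-up city figures above.

```python
# Made-up figures from the worked example above.
tons_of_garbage = 55_200
residents = 710_000
POUNDS_PER_TON = 2_000  # U.S. short ton

tons_per_capita = tons_of_garbage / residents
pounds_per_capita = tons_per_capita * POUNDS_PER_TON

print(f"{tons_per_capita:.2f} tons per resident")      # 0.08
print(f"{pounds_per_capita:.0f} pounds per resident")  # 155, or about 160 if you round the tons figure first
```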

3. Don’t confuse per capita with other types of rates.

Researchers and public health officials often report the rate of COVID-19-related deaths as the number of deaths per 100,000 people across a country or other geographic region. Some news outlets have incorrectly described these rates as “per capita” death rates.

In recent years, journalists have also incorrectly referred to rates of identity theft, shootings, home sales and the concentration of fast-food restaurants in an area as “per capita” rates. Some of that coverage focuses on data expressed as a number per 10,000 people.

“When the number you’re reporting is in conflict with the definition of the word you’re using [to describe it], I would, in that scenario, drop ‘per capita,’” Cao says.

Journalists should refer to a rate as “per capita” only when it represents a per-person average.

“Reporting a number like 80 for every 100,000 [people] is not really per capita,” Cao says. “As a data person, if I saw this, I would probably pause and think about, ‘What does this mean? Why is this person using ‘per capita?’”

To ensure accuracy, consider simply telling audiences what the rate is, Landefeld adds.

“Call it what it is: ‘80 per 100,000,’” he says.

4. When comparing countries, states or regions, put data into context by including per capita and median numbers.

When journalists report on a key number such as a country’s GDP without providing per capita GDP, they neglect to add important context. It’s also tough to compare two or more countries’ economic prosperity without that information, considering populations can vary significantly, Landefeld and Cao note.

“Especially when you’re making comparisons across countries, at a minimum you want to look at per capita trends,” Landefeld says.

A case in point: In 2021, America’s GDP, $23.3 trillion, was the world’s highest, followed by China‘s GDP of $17.7 trillion, according to the World Bank. But a look at each nation’s GDP, broken down per person, reveals stark differences in their average citizens’ living standards.

China’s population, 1.4 billion in 2021, vastly outpaces America’s at 332 million. As a result, China’s per capita GDP is much smaller — $12,556 per person compared with $70,249 per person in the U.S. that year.

When you look at the data this way, you see that even though China’s GDP was about 24% smaller than the U.S. GDP, its GDP per capita was not even one-fifth of America’s GDP per capita, indicating the average person living in China had a significantly lower standard of living than the average person living in the U.S.

Landefeld and Cao urge journalists to include median GDP per capita in their coverage. The median is the middle number in a series or list of numbers arranged from largest to smallest, or smallest to largest. Half the numbers are smaller than the median and half are bigger.

Landefeld and Cao say the middle number on a list ranking a country’s citizens by income will give audiences the best sense of how an economy is performing across segments of the population. Because GDP per capita is an average, it obscures the distribution of wealth. A country where every person has the same standard of living can have the same GDP per capita as a country with lots of very poor and very rich people.   

“Each number only gives you one peek into the whole thing,” Cao explains. “The median is always more representative.”
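A small, purely illustrative Python example, with invented incomes, shows how a single very rich resident can pull the per capita (mean) figure far away from the median.

```python
from statistics import mean, median

# Invented incomes for a 10-person economy: nine modest earners and one very rich one.
incomes = [30_000] * 9 + [2_000_000]

print(f"Per capita (mean) income: ${mean(incomes):,.0f}")  # $227,000
print(f"Median income: ${median(incomes):,.0f}")           # $30,000
```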

Two economists who won The Indigo Prize in economics in 2017 have also stressed a need to spotlight median per capita GDP.  In their winning essay, Diane Coyle, a professor at the University of Cambridge, and Benjamin Mitra-Kahn, an assistant commissioner with the Australian government’s Productivity Commission, propose a more accurate way to measure economies.

“The public debate about the economy currently focuses on growth in total GDP, or occasionally per capita GDP,” Coyle and Mitra-Kahn write. “A focus on [wealth] distribution is needed. Statistical agencies could easily make median per capita GDP the standard headline figure in regular press releases.”

Cao notes this context is particularly useful for people who rely on journalists to help them make sense of economic trends.

“When you report something, context is No. 1,” she says.

The post Using ‘per capita’ to describe data: 4 things journalists need to know appeared first on The Journalist's Resource.

Organizing your research: A scientist’s tips for journalists https://journalistsresource.org/home/organizing-your-research/ Tue, 21 Mar 2023 20:17:32 +0000 https://journalistsresource.org/?p=74649 Maya Gosztyla, a Ph.D. candidate in the Biomedical Sciences Graduate Program at the University of California San Diego, provides an overview of literature mapping tools, RSS feeds, research management software and databases to help journalists organize their research.

Journalists collect a lot of stuff while reporting, especially for big stories and projects: interviews, documents, research papers, articles. It can be overwhelming at times.

Academics too must collect a large number of documents. They use a variety of tools to organize their work, some of which journalists can also use to organize materials. 

During a panel at the 2023 Association of Health Care Journalists conference in St. Louis, Missouri, Maya Gosztyla, a Ph.D. candidate in the Biomedical Sciences Graduate Program at the University of California San Diego, shared her organizational approach as a scientist, which journalists can easily adopt. She’s the author of the 2022 Nature career columns “How to Find, Read and Organize Papers” and “How to Manage Your Time as a Researcher.”

Below is a list of tips and tools that Gosztyla shared during the panel.

1. Find related research with literature mapping tools.

When journalists report on a new study, it’s important to consider where that study fits into the larger body of research.

PubMed and Google Scholar are the go-to platforms for finding academic research. But they’re not the most efficient tools for finding research related to a specific academic study.

A better approach is using literature mapping tools, which show the connection between research papers.

“Imagine papers like nodes in a network,” Gosztyla said. “Each paper will cite other papers, and what you can do is make a giant map of all the papers in a specific subject area. And then you can see the hubs — what are the papers that everyone cites that you should probably read.”

Some of the popular literature mapping tools, which offer free versions, include ResearchRabbit, Inciteful, Connected Papers and Litmaps.

2. Stay on top of current research with RSS feeds.

Many journalists, especially those who write about academic research, subscribe to journal email lists. But that may not be the best option for organizing research.

“It kind of overwhelms your inbox after a while,” said Gosztyla.

Another common method is setting up keyword email alerts. Both PubMed and Google Scholar let you set up email alerts for specific keywords. But that too can crowd your email inbox.

Gosztyla’s solution is using an RSS feed reader.

RSS stands for “really simple syndication.” An RSS feed reader — or RSS feed aggregator — gets all the new articles or studies published on a website and brings them together in a timeline that you can quickly scroll through.

Many websites have RSS feeds. Once you have a link for the RSS feed, you can then add it to a free or paid RSS feed reader.

Here’s a good explainer by Lifewire on how to find RSS feeds and add them to a reader.

Gosztyla spends a few minutes every morning scrolling through her RSS feed reader — her favorite is Feedly — to check for new published research in her field.

This August 2022 article from Wired lists some of the more popular RSS feed readers.
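If you are comfortable with a little scripting, you can also pull a feed directly. Here is a minimal sketch using the third-party Python package feedparser; the feed URL is a placeholder, not a real address.

```python
# Requires the third-party feedparser package: pip install feedparser
import feedparser

# Placeholder URL: substitute the RSS feed of any journal or news site you follow.
FEED_URL = "https://example.com/feed.xml"

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Untitled feed"))

# Print the 10 newest items' headlines and links.
for entry in feed.entries[:10]:
    print("-", entry.get("title", "(no title)"))
    print(" ", entry.get("link", ""))
```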

3. Use research management software to file your research.

There are several free online tools that can help you store what you find during your research instead of having dozens of open tabs in your browser.

A popular tool developed by and for journalists is DocumentCloud, where you can upload documents, search the text, annotate, extract data, redact and edit.

Another option, popular among academics, is Zotero. It’s a free, open-source reference management tool and can store and organize your research material, including PDF files.

You can use Zotero in a browser, but for a more powerful experience, download it and install the Zotero plugin for your browser. When you come across a study or article that you want to save, click the plugin. It will save the item to your desired Zotero folder. You can create many folders and subfolders, and also share folders. You can also highlight and annotate PDFs.

“If you’re not using a reference manager, I highly, highly recommend them,” said Gosztyla.

You can integrate Zotero with several apps and programs, including Word, Google Docs and literature mapping tools like ResearchRabbit.

Some of the alternatives to Zotero include Mendeley, EndNote, RefWorks and Sciwheel.

4. Routinely read your research pile.

To stay on top of what you’re collecting, Gosztyla offered this advice:

Block out a time each week, like two hours on Fridays, to read. If you have a big pile, maybe devote a couple of days to reading.

And decide how you’re going to spend that reading time: Are you going to do a deep dive, or just scan what you’ve collected, take notes and decide what to keep and what to toss?

“Maybe it’s your routine that every week you buy yourself a nice cup of coffee. You go to a certain cafe and you just read,” Gosztyla said. “So find a routine that you really look forward to and it’s something you want to do.”

5. Don’t forget to take notes while reading documents.

“Don’t ever read without highlighting or taking notes,” Gosztyla said. “Otherwise, you will forget it. I guarantee it.”

Write a small note, a blurb, on the material you read to remind you of its main takeaways and where it fits into your project. Do you need to email or interview the author with follow-up questions? Or read the authors’ previous work? Make a note of those.

In the next step, you’ll learn about organizing those notes.

6. You have collected. You have read. Now organize your work in a database.

Research management software can help you organize your documents, but it’s helpful to create a database of what you’ve collected, your tasks for each item, and maybe a summary and key points. You can use Google Sheets or Microsoft Excel to create your list.

If you want something other than a classic spreadsheet, you can try web applications like Notion.

Notion is a powerful program, which Gosztyla described as a “multi-use database tool.” Notion describes itself as an all-in-one workspace. You can use it to organize your research, manage projects and tasks, take notes and even keep daily journals. You can also integrate Notion with many other apps and tools.

It has a steep learning curve. Give yourself time to learn to use it before integrating it into your workflow. Notion has tutorials on YouTube and a wiki page. Gosztyla recommended the Thomas Frank Explains YouTube tutorials. Frank is an author, YouTuber, and Notion expert.

Some alternatives to Notion include Airtable, Trello and Coda.

7. Go one step further with automation tools.

If you want to go a step further in your Notion journey, you can link a Zotero folder to Notion with a tool called Notero. Every time you add an item to your Zotero folder, it populates your Notion database.

Notion has many templates you can choose from. Or you can use Gosztyla’s template.

You can automate and integrate other apps, too, to create a smoother workflow. Some of the popular options are IFTTT (short for If This Then That), which integrates apps, devices and services to create automated workflows, and Zapier, which connects web applications so users can build similar automations.

Keep in mind, you don’t have to use all the tools listed above.

“Take the pieces that work for you and apply them to your life,” advised Gosztyla.

If you want to share a tool that’s helped you organize your research, you can reach me at naseem_miller@hks.harvard.edu. You can reach Gosztyla on Twitter @MayaGosztyla.

The post Organizing your research: A scientist’s tips for journalists appeared first on The Journalist's Resource.

10 simple data errors that can ruin an investigation https://journalistsresource.org/home/10-simple-data-errors-that-can-ruin-an-investigation/ Mon, 20 Mar 2023 12:45:00 +0000 https://journalistsresource.org/?p=74622 Several data journalists discuss common data errors that have threatened or even ruined past investigative journalism projects. Read on to avoid these errors yourself.

Mistakes with numbers can have a cascading effect for investigative stories, and a damaging effect for audience trust, as many other erroneous figures, trend claims, and conclusions can flow from that initial error.

At the recent NICAR23 conference in Nashville, Tennessee — the annual data journalism summit organized by Investigative Reporters & Editors (IRE) — GIJN asked several speakers to suggest common data blunders or oversights that have threatened or ruined past investigations.

“Every journalist will make a mistake — it’s all about being smart about making sure you never make that mistake again, and about being transparent with your audience,” says Aarushi Sahejpal, data editor at the Investigative Reporting Workshop at American University. “But you can certainly minimize the chance of mistakes.”

In a summary echoed by other experts, Sahejpal says avoiding mistakes generally involves asking yourself three questions: Do you actually have the full dataset? Have you spoken to the person behind the data to know what it really means? And what does the data not tell you?

Still, mistakes do happen, and here are 10 common causes, according to data journalism experts.

1. Forgetting the threat of blank rows in spreadsheets.

According to data journalism trainer Samantha Sunne — currently a local reporting fellow at ProPublica — one common and devastating mistake is to wrongly assume that you’ve selected or highlighted an entire data column in your Google Sheet. The problem, she says, is that spreadsheets stop highlighting at blank rows lower down, and Sunne says a failure to spot this data exclusion has caused some reporters to reach the wrong conclusions in their investigations.

“Oftentimes, you’ll get blank rows in your data — maybe that’s where the page break was, or there was no data for that item — and you might easily not notice them if you don’t scroll down,” explains Sunne. “If you aren’t careful to truly select all, it can completely destroy your analysis.”

Her solution? After you’ve clicked on any data column, hit Control A (or Command A) once — and then hit Control A (or Command A) again, to capture the data below any blank row as well.
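The same kind of gap can bite when you load data into code instead of a spreadsheet. As a rough sketch, assuming your export is a CSV file (the file name below is a placeholder), a quick pandas check can flag fully blank rows before any analysis.

```python
import pandas as pd

# Placeholder file name: use your own export.
df = pd.read_csv("building_violations.csv")

# Rows where every column is empty are the "blank rows" that can silently cut off a selection.
blank_rows = df[df.isna().all(axis=1)]
print(f"{len(blank_rows)} fully blank rows out of {len(df)} total rows")

# Drop them explicitly so nothing downstream stops at the gap.
df = df.dropna(how="all")
```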

2. Failing to check whether government nomenclature or coding has changed.

Janet Roberts, data journalism editor at Reuters, says government and municipal agencies often change their codes for functions, and that this could happen while you’re collecting their data — and that it is crucial for reporters to check whether all the data in your dataset applies to the same thing ahead of publication.

“In St. Paul (Minnesota), we were doing an investigation into slumlords, and we got the building code violation data, and we were going to find the landlord with the most of a certain kind of offense,” Roberts recalls. “We did all our crunching — but it turned out that, at some point, the buildings department had changed the codes, so maybe an “02” used to mean rat infestation, but it now means you didn’t sweep your sidewalk, or whatever. Luckily, we found this out — albeit, very deep into the process — because, had we not found it out, the entire story would have been wrong.”

She adds: “The potential error here is failing to understand the data — failing to talk to the people who keep the data. Ask how the data evolves over time.”

3. Confusing percentages with percentage points.

This simple mistake is nevertheless a perennial problem — and can end up accidentally misleading audiences. “If something jumps from 20 to 30%, that’s actually a 50% increase, not a 10% increase — that can be tricky, and important to pay attention to,” explains Sunne. Data experts stress that percent change refers to a rate, but percentage point change means an amount. To avoid confusion, it’s better to describe a 100% increase by saying something “doubled.” “A lot of people don’t understand the difference between percentage points and percentages,” Sahejpal says. “Same with ‘per capita’ — using rates and per capita in the same sentence often doesn’t make sense, because per capita is per person.”
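A tiny worked example, using the 20% and 30% figures from the quote above, shows the two calculations side by side.

```python
old_rate = 20  # percent
new_rate = 30  # percent

point_change = new_rate - old_rate                       # 10 percentage points
percent_change = (new_rate - old_rate) / old_rate * 100  # a 50% increase

print(f"Change: {point_change} percentage points")
print(f"Change: {percent_change:.0f} percent")
```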

4. Accepting round numbers without double-checking.

Large round numbers, or round numbers of data rows, like 7,000 or 2,000, can often mean some limit on a records search or a data transfer, rather than a true total, according to Roberts.

“We had data that suggested that only 5,000 companies had filed their required reports on something, and we thought: ‘Exactly 5,000?’” Roberts recalls. “That seemed unusual, and also a low number. What the reporter hadn’t noticed was that the website limited search results to 5,000 records, and the true results turned out to be about three times that.”

“If you have a dataset of perfectly 1,000 or 10,000 rows, I would bet money something is off,” Sahejpal says. “And I can’t tell you the number of students I’ve had who downloaded a file, and didn’t realize they’d downloaded a filtered version. Another mistake is if you don’t check that the range of your dataset is equal to the reported range on a government website.”
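One low-tech safeguard is to compare the row count and date range of your download against what the agency says it publishes. Here is a rough pandas sketch; the file and column names are placeholders.

```python
import pandas as pd

# Placeholder file and column names.
df = pd.read_csv("filings.csv", parse_dates=["filed_date"])

rows = len(df)
print(f"Rows in the file: {rows:,}")
if rows % 1_000 == 0:
    print("Suspiciously round row count: check for a search or export limit.")

# Does the file actually cover the period the agency claims to publish?
print("Earliest record:", df["filed_date"].min())
print("Latest record:", df["filed_date"].max())
```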

5. Forgetting that number formats are different in different countries.

“$1,753.00 in the US is written as ‘$1.753,00’ in Latin America — the commas and periods and apostrophes are in different places — but spreadsheets don’t account for the different punctuation,” says Emilia Diaz-Struck, Latin American coordinator at the International Consortium of Investigative Journalists (ICIJ). “It’s also possible to make really basic conceptual mistakes if you don’t think about the origin of the numbers.”
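If you load such a file with pandas rather than a spreadsheet, the import usually needs to be told which characters are the decimal and thousands separators. A hedged sketch, with a placeholder file name:

```python
import pandas as pd

# Placeholder file name. decimal/thousands tell pandas that "1.753,00" means 1753.00.
df = pd.read_csv("latin_america_budget.csv", decimal=",", thousands=".")

# Without those arguments, such columns often load as text or as the wrong numbers.
print(df.dtypes)
```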

6. Ignoring your gut when the data “just seems off.”

Even after the numbers have been checked in the spreadsheet, and double-checked with a human data source, experienced journalists sometimes find those figures jarring, or at odds with their knowledge of the topic. Dianna Hunt, senior editor at ICT (formerly Indian Country Today), says reporters should respect this feeling, and seek out alternate or historical data, or academic researchers, to check those numbers independently, or at least check if they’re in the “ballpark” for that topic. For instance, that feeling could indicate major errors by the original government data gatherers, or even just a decimal point typo at the input stage.

“You need to pay attention to your gut instinct when something seems wrong — that has certainly paid off in several investigations I’ve worked on,” says Hunt.

7. Failing to speak to the human behind the dataset.

“Before you use the data, you need to reach out to the source, and understand what every column means,” says Sahejpal. “Look, maybe you’re downloading from a website that has a perfect methodology set out — but I’d bet that a lot of the data you’re looking at is not clear in terms of what it actually means, and doesn’t mean. People in data journalism often don’t explain this, but, in fact, all of us talk to people way more than you think — we don’t just stare at computer screens.”

He adds: “Finding a way to reach the people inputting the data is a lot easier than figuring out what to do with their dataset.”

8. Assuming the dataset tells the whole story.

Having obtained a relevant dataset, Sahejpal suggests that reporters immediately compile — and prominently post — the set of relevant questions the dataset does not answer.

“The number one thing I do to avoid mistakes as an editor is to list what the data doesn’t tell you,” he says. “What we call the ‘limitations section’ on your dataset is your strongest ally, because if you know what it doesn’t tell you, you know to not say what you should not say, and what further questions to ask.”

Sahejpal adds: “If you have a dataset on, say, parking ticket violations in Washington, DC, you make a list of the regions and variables you do not have that could impact your analysis, and, right off the bat, you have a full picture of what you need. Then you get on the phone with the person in charge of the data and confirm what you do have.”

9. Using the wrong scale on graphs or charts.

Graphs published by media outlets — or even supplied to journalists — sometimes begin with an arbitrary number on the axes, like “1,500,” instead of zero, which can confuse audiences, or simply be wrong. “Be critical of the visualizations that you do put out,” says Sahejpal. “Make sure to check both the X and the Y axis, the variables compared, and the scale, to ensure accuracy. In any data visualization, it’s important to see if the scale starts wrong, or if the change increments don’t make sense. I see that kind of error all the time.”
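For journalists who build their own charts, one habit that helps is setting the baseline deliberately instead of letting the plotting library choose it. A minimal matplotlib sketch, with made-up numbers:

```python
import matplotlib.pyplot as plt

# Made-up values for illustration.
years = [2019, 2020, 2021, 2022]
complaints = [1520, 1560, 1540, 1600]

fig, ax = plt.subplots()
ax.bar(years, complaints)
ax.set_xticks(years)

# Start the y-axis at zero so a modest change doesn't look like a dramatic swing.
ax.set_ylim(bottom=0)
ax.set_ylabel("Complaints per year")
plt.show()
```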

10. Forgetting to tie columns together when sorting in Google Sheets.

Sorted data often provides easy angles, by arranging rows to show, for instance, worst-to-best: perhaps the highest death rates for some cause per town, at the top of a column, and the better-performing towns below.

Sorting in Google Sheets is surprisingly straightforward — and is even helped by pop-up suggestions from the program — but it requires a step-by-step sequence on the Sheet.

Tisha Thompson, a data reporter at ESPN, says reporters can play around with many of the functions, but warns that the one step reporters simply cannot forget is to click “the top left square” when sorting in Google Sheets: the blank box that selects both the column and row axes. This box ties a sorted column to the whole dataset. Forgetting this square, she says, can not only mangle your numbers but do so without you noticing the error prior to publication.

“Not paying attention to the top left corner is the easiest mistake you will make, and it can end careers,” warns Thompson. “You want to always keep your data tied to other lines and rows, so you need to highlight the whole kit-and-caboodle. Don’t ever sort only a single column; always use the top left hand corner — it should be like tying your shoes.”
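Another way to avoid the problem entirely is to sort outside the spreadsheet, where tools such as pandas move whole rows together by default. A small illustrative sketch, with invented data:

```python
import pandas as pd

# Invented data: each row must stay intact when the table is sorted.
df = pd.DataFrame({
    "town": ["Springfield", "Riverton", "Lakeside"],
    "death_rate_per_100k": [12.4, 31.9, 7.2],
})

# sort_values reorders entire rows, so a rate can never be separated from its town.
worst_first = df.sort_values("death_rate_per_100k", ascending=False)
print(worst_first)
```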

This article first appeared on GIJN and is republished here, with permission, under a Creative Commons license.

The post 10 simple data errors that can ruin an investigation appeared first on The Journalist's Resource.
