Deaths in police custody in the United States: Research review
https://journalistsresource.org/politics-and-government/deaths-police-custody-united-states/
June 7, 2020
We summarized studies that look at deaths in police custody from multiple angles, including restraint methods and police force demographics.


The deaths of black men at the hands of white police officers in recent years have raised a number of questions about the treatment of racial minorities within the criminal justice system, as well as about patterns of arrest-related deaths more generally. Some researchers have called for congressionally mandated government databases to be made more thorough so that patterns in violent interactions between police and civilians can be identified more reliably.

The recent death of George Floyd in Minneapolis and the mass protests that followed have pushed the issue back into the national spotlight. Floyd, a black man accused of buying cigarettes with a counterfeit $20 bill, died after a white police officer kneeled on his neck for nearly nine minutes to restrain him. That officer, Derek Chauvin, has been fired and charged with second-degree murder. The three other officers who were involved were fired and charged with aiding and abetting second-degree murder.

In response to the renewed media attention, we have significantly updated this research roundup — last updated in 2016 — with new information. We will continue to update this piece with new research in the weeks to come.

For years, legislators, community leaders and others have wanted to know: How many black Americans have died while being apprehended, arrested or transported by law enforcement officers? And how does that number compare with the number of men and women of other races and ethnicities killed each year in police custody?

No one knows the official answers to those questions. It has been five years since Congress passed the Death in Custody Reporting Act of 2013, which went into effect in December 2014, but federal officials have not yet gathered the data and made it public. Rep. Robert C. “Bobby” Scott of Virginia introduced the legislation, which was modeled on an earlier law that expired in 2006 and had required states to submit quarterly reports on deaths in police custody.

The new law requires the Attorney General to collect from each state as well as all federal law enforcement agencies “information regarding the death of any person who is detained, under arrest, or is in the process of being arrested, is en route to be incarcerated, or is incarcerated at a municipal or county jail, state prison, state-run boot camp prison, boot camp prison that is contracted out by the state, any state or local contract facility, or other local or state correctional facility (including any juvenile facility).”

Earlier this year, two members of Congress — the chairman of the House Judiciary Committee, Jerrold Nadler of New York, and the chairwoman of the House Subcommittee on Crime, Terrorism and Homeland Security, Karen Bass of California — wrote a letter to the Inspector General requesting an investigation into the Department of Justice’s failure to implement the Death in Custody Reporting Act of 2013.

“The United States continues to face a persistent crisis of deaths in custody, the true scope of which remains unknown,” Nadler and Bass wrote.

A 54-page report from the Office of the Inspector General chronicles problems the U.S. Department of Justice has had implementing the law. It notes that state-level data collection “will be delayed until at least FY 2020,” which ends Sept. 30, 2020.

The report, released in December 2018, also points out that the Department of Justice “does not have plans to submit a required report that details results of a study on DCRA [Death in Custody Reporting Act] data. DCRA required that such a report be submitted to Congress no later than 2 years after December 18, 2014.”

In the absence of a government database, several organizations have created their own. For example, Fatal Encounters, founded by D. Brian Burghart, former editor and publisher of the Reno News & Review, maintains a searchable database of people who died during interactions with police. The Washington Post’s national database cataloging fatal shootings by police was last updated June 1.

Academic research provides insights but offers an incomplete picture. A study published in The New England Journal of Medicine in 2016 finds there were 222 “legal intervention” deaths in 2013, or cases in which someone was killed by an on-duty law enforcement or other peace officer. The study is based on data from just 17 states, however, and none of the largest states — California, Florida, Illinois, New York, Pennsylvania and Texas — were included.

According to the paper, nearly everyone killed by on-duty officers in those states that year was male and between the ages of 20 and 54. It finds that black people were the most likely to die in police custody.

“The rates [of death] were higher among non-Hispanic blacks (0.6 per 100,000 population) and Hispanics (0.3 per 100,000) than among non-Hispanic whites (0.1 per 100,000),” the authors write. “Multiple factors were found to be associated with the circumstances of these events, such as a crisis (e.g., the victim had had a bad argument, had divorce papers served, was laid off, faced foreclosure on a house, or had a court date for a legal problem within 2 weeks before the death), a current mental health problem, and intimate partner violence.”

A newer study, published in the Proceedings of the National Academy of Sciences in 2019, estimates that black men have a 1 in 1,000 chance of being killed by police during their lifetimes. That’s 2.5 times the odds for a non-Hispanic white man, the authors find.

Arrest-related deaths

A string of high-profile deaths of black men in police custody in recent years has raised questions about how black Americans are treated during and immediately after their arrest. But a December 2016 report on arrest-related deaths from the U.S. Bureau of Justice Statistics notes the agency receives data on only a portion of deaths.

“The ARD [Arrest-Related Deaths] program captured about half of the estimated number of justifiable homicides in the United States from 2003 through 2011, excluding 2010,” according to the report. “In general, the incomplete coverage each year was due in part to the unstandardized data collection process across states. However, program coverage increased to a high of 69% in 2011, when the program began to rely more on open information sources to identify potentially eligible deaths.”

The news media and Fatal Encounters project are open information sources that have become key to the Bureau of Justice Statistics’ data collection effort. A technical report the agency released in July 2019 illustrates the importance of these sources. From 2003 to 2009, the report states, the agency’s Arrest-Related Death program identified 375 to 496 arrest-related deaths per year. In 2011, when program officials began collecting data from open information sources, they identified 689 deaths that year.

The new data collection method helped the agency find and track down information on 424 deaths in police custody during a three-month study period, June 1, 2015 through Aug. 31, 2015.  Officials used news media coverage to find the vast majority of deaths – 89%, according to the technical report. Fewer than 50 deaths — 11% — were reported by law enforcement agencies.

More than 42% of arrest-related deaths identified during the study period occurred in five states — California, Florida, Georgia, Ohio and Texas. Of the 424 deaths in total, 63% were categorized as homicides.

The federal government also tracks fatalities in local jails and state and federal prisons through its Deaths in Custody Reporting Program. Suicides accounted for 31% of deaths in local jails from 2000 to 2016, a federal report released earlier this year shows.  About half were due to illness, including heart and liver disease and cancer.

The report reveals white inmates are more likely to die in jail than black or Hispanic inmates. “The mortality rate for white jail inmates in 2016 (240 deaths per 100,000 white inmates) was more than double the rate for black inmates (118 deaths per 100,000 black inmates) and almost triple the rate for Hispanic inmates (87 deaths per 100,000 Hispanic inmates),” it concludes.

Reporting on incidents

Experts who analyze these incidents caution that the numbers can often hide meaningful context; reporters would be well served to go beneath the surface and ask how the data are collected, including any potential holes or weaknesses in the data. Overall, states have varied in how they report law enforcement-related incidents of many kinds to the federal government.

For an example of how data, or the lack of it, can matter — and mislead — see the series on prison rape written by David Kaiser for The New York Review of Books. Finally, for news reporters covering individual incidents, context can be crucially important, from the degree to which a neighborhood is a high-crime area, or where assaults on officers are common; to the level of police training to deal with, for example, violent and mentally ill persons; to the precise nature of the incident and whether it involved a suspect threatening public safety at the time of a violent intervention by authorities.

There’s also a substantial body of government and academic research on these issues. Below, we’ve rounded up studies that look at deaths in police custody, the use of different types of restraint and whether changing police force demographics might result in fewer black men dying during interactions with law enforcement.

______

Whose Death Matters? A Quantitative Analysis of Media Attention to Deaths of Black Americans in Police Confrontations, 2013–2016
Zuckerman, Ethan; et al. International Journal of Communication, September 2019.

Summary: The news media became more likely to cover black men’s fatal encounters with police following the August 2014 death of 18-year-old Michael Brown, fatally shot by a police officer after an altercation in Ferguson, Missouri, this study finds.

A team of researchers, led by media scholar Ethan Zuckerman of MIT, analyzed news coverage of black people who were killed by police or died in police custody in the U.S. from January 2013 through the end of June 2016. Researchers also queried Facebook to gauge the number of times those stories were liked and shared.

In total, the researchers examined news coverage of “343 deaths of unarmed Black Americans at the hands of police, as well as the number of media articles, the sharing of those articles, and the framings used in those articles to investigate differences in coverage before and after Michael Brown’s death.”

“Based on analysis of these data, we found evidence for a significant shift in attention to stories about police killings of unarmed Black men in the wake of Michael Brown’s death, which we considered a ‘key event’ in changing the media framing of these deaths,” Zuckerman and his colleagues write. “In addition to an increased volume of coverage, we saw evidence that police-involved deaths after Michael Brown’s death were significantly more likely to be reported on as part of a larger pattern of police violence against Black citizens, and some evidence that these stories were more likely to be shared on social media.”

 

“Will More Black Cops Matter? Officer Race and Police‐Involved Homicides of Black Citizens”
Nicholson-Crotty, Sean; Nicholson-Crotty, Jill; Fernandez, Sergio. Public Administration Review, March/April 2017.

Summary: In this study, researchers investigate whether adding more black officers to local police forces would reduce the number of black people dying in police custody. The key takeaway: It depends on the size of the police department and how many black officers it already employs.

For most cities, simply increasing the share of black officers would not be an effective policy solution, the researchers find after examining arrest-related deaths in the 100 most populous cities in the U.S. in 2014 and 2015 and the demographics of local police departments. The analysis is based in part on data collected by The Washington Post and the advocacy organization Mapping Police Violence.

The researchers, from Indiana University, Bloomington and the University of Johannesburg in South Africa, discovered that as the percentage of black officers working in a large city grows, so does the number of black men dying in police custody. “In 2015, we see a positive and significant relationship between the percentage of black officers and fatal encounters between police and black citizens in the vast majority of the largest U.S. cities,” they write. “The results also suggest that there is an inflection point at which black officers may become less likely to discriminate against black citizens and more inclined to assume a minority advocacy role or to become neutral enforcers of the law, though the limited number of observations makes it impossible to know for sure.”

The researchers explain that increasing the share of black officers “is only a solution for the small handful of large departments around the country that already have a high percentage of black officers.”

“In places like Ferguson, where blacks are only 11 percent of the police force, efforts to double or even triple the share of the police force that is black would not reach this threshold and may actually be associated with an increase in violent interactions between police and black citizens until the critical mass is achieved,” they write.

 

“Deaths in Police Custody”
Heide, Steffen; Chan, Theodore. The Journal of Forensic and Legal Medicine, February 2016.

Summary: The authors of this study found that it’s difficult to compare deaths in police custody across countries for three reasons: 1) the lack of a uniform definition of the term “custody”; 2) country-specific differences in the structures of police custody; and 3) variations in the way scholars have designed their studies of the issue.

For example, in Europe, people are generally considered to be in police custody for fewer than 48 hours, the authors explain. But in the U.S., someone is typically considered to be in police custody during the process of arrest through the duration of any incarceration, which can last days to decades. “Some studies only record deaths in the police cell, with suicide as an exclusion criteria in some cases,” write the authors, from the University of Halle-Wittenberg in Germany and the University of California, San Diego. “Other studies also include deaths linked to arrests by the police, up to and including deaths caused by the use of firearms or fatal traffic collisions of police vehicles.”

The authors analyzed academic studies of deaths in police custody in Europe, North America and Australia. Because of drastic differences in the data, the authors concluded they could only make a few broad generalizations. They found that men were more likely than women to die in police custody, for instance. “The male dominance is essentially due to the fact that around the world women are much less frequently taken into police custody than men,” they write.

They note the need for further investigation of “sudden death” incidents in police custody — unexpected deaths, often during or immediately after a confrontation with law enforcement, in which there is no clear cause of death, such as a gunshot wound.

 

“Homicides by Police: Comparing Counts From the National Violent Death Reporting System, Vital Statistics, and Supplementary Homicide Reports”
Barber, Catherine; et al. The American Journal of Public Health, May 2016, Vol.106, doi: 10.2105/AJPH.2016.303074.

Summary: The report compares how different government databases track homicides by law enforcement officers and finds the National Violent Death Reporting System (NVDRS), run by the Centers for Disease Control and Prevention, to be the best available because it draws from multiple sources and “captures detailed coded data and rich narratives that describe the precipitating circumstances and incident dynamics for all suicides and homicides occurring in participating states.” The authors recommend the NVDRS be expanded to all 50 states. It currently covers 32.

 

“Race, Crime, and the Micro-Ecology of Deadly Force”
Klinger, David; Rosenfeld, Richard; Isom, Daniel; Deckard, Michael. Criminology & Public Policy, February 2016, Volume 15, doi: 10.1111/1745-9133.12174.

Research Summary: “Limitations in data and research on the use of firearms by police officers in the United States preclude sound understanding of the determinants of deadly force in police work. The current study addresses these limitations with detailed case attributes and a microspatial analysis of police shootings in St. Louis, MO, between 2003 and 2012. The results indicate that neither the racial composition of neighborhoods nor their level of economic disadvantage directly increase the frequency of police shootings, whereas levels of violent crime do—but only to a point. Police shootings are less frequent in areas with the highest levels of criminal violence than in those with midlevels of violence. We offer a provisional interpretation of these results and call for replications in other settings.”

 

“Pattern of law enforcement–related injuries in the United States”
Chang, David C.; et al. Journal of Trauma and Acute Care Surgery, Volume 80(6), June 2016, doi: 10.1097/TA.0000000000001000

Conclusion: “The majority of law enforcement–related injuries are among white or black young men. Hispanic patients are more likely to be injured by a firearm than struck. When injured by firearm, white and black patients are more likely to die compared with Hispanic patients. Unfortunately, data about these injuries are scattered across multiple data systems. A uniform national system to aggregate these data sources is needed to better understand the scope of the problem, for both law enforcement personnel and civilians.”

 

“Arrest-Related Deaths, 2003-2009: Statistical Tables”
Burch, Andrea M. U.S. Department of Justice, Bureau of Justice Statistics, November 2011.

Introduction: “From 2003 through 2009, a total of 4,813 deaths were reported to the Bureau of Justice Statistics’ (BJS) Arrest-Related Deaths (ARD) program. Of these, about 6 in 10 deaths (2,931) were classified as homicide by law enforcement personnel, and 4 in 10 (1,882) were attributed to other manners of death. Suicide and death by intoxication each accounted for 11 percent of reported arrest-related deaths, accidental injury for 6 percent, and natural causes for 5 percent (figure 1). Deaths with manners classified as undetermined or those in which manners were unknown represented about 6 percent of reported arrest-related deaths.”

 

“Unexpected Arrest-Related Deaths in America: 12 Months of Open Source Surveillance”
Ho, Jeffrey D. Western Journal of Emergency Medicine, May 2009, Volume X, No. 2.

Abstract: “Introduction: Sudden, unexpected arrest-related death (ARD) has been associated with drug abuse, extreme delirium or certain police practices. There is insufficient surveillance and causation data available. We report 12 months of surveillance data using a novel data collection methodology. Methods: We used an open-source, prospective method to collect 12 consecutive months of data, including demographics, behavior, illicit substance use, control methods used, and time of collapse after law enforcement contact. Descriptive analysis and chi-square testing were applied. Results: There were 162 ARD events reported that met inclusion criteria. The majority were male with mean age 36 years, and involved bizarre, agitated behavior and reports of drug abuse just prior to death. Law enforcement control techniques included none (14 percent); empty-hand techniques (69 percent); intermediate weapons such as Taser device, impact weapon or chemical irritant spray (52 percent); and deadly force (12 percent). Time from contact to subject collapse included instantaneous (13 percent), within the first hour (53 percent) and 1-48 hours (35 percent). Significant collapse time associations occurred with the use of certain intermediate weapons.”

 

“On the Problems and Promise of Research on Lethal Police Violence”
Klinger, David A. Homicide Studies, 2012, 16(1) 78-96, doi: 10.1177/1088767911430861.

Abstract: “We presently have little information about how frequently police officers shoot citizens or are involved in any sort of interaction in which citizens die. Despite this, however, researchers persist in using the limited data available on fatal police violence in various sorts of analyses. The current article outlines the liabilities in available counts of fatal police action, describes some of the problems posed by using such data, discusses why counting citizens killed by police bullets is not a sound way to measure deadly force, and offers some ideas for improving measurement of the use of deadly force and other police actions that lead to the death of citizens.”

 

“Arrest-Related Deaths Program Assessment: Technical Report”
Banks, Duren; Couzens, Lance; Blanton, Caroline; Cribb, Devon. Bureau of Justice Statistics, U.S. Department of Justice, March 2015, NCJ 248543.

Executive summary: “The Bureau of Justice Statistics (BJS) designed the Arrest-Related Deaths (ARD) program to be a census of all deaths that occur during the process of arrest in the United States…. We found that over the study period from 2003 through 2009 and 2011, the ARD program captured, at best, 49 percent of all law enforcement homicides in the United States. The lower bound of ARD program coverage was estimated to be 36 percent. These findings indicate that the current ARD program methodology does not allow a census of all law enforcement homicides in the United States. The ARD program captured approximately 49 percent of law enforcement homicides, while the SHR captured 46 percent. An estimated 28 percent of the law enforcement homicides in the United States are not captured by either system. However, the methodology for identifying ARD cases has changed over the observation period. In 2011, the ARD program was estimated to cover between 59 percent and 69 percent of all law enforcement homicides in the United States, depending on the estimation method used. While this coverage estimate still does not result in a census, it does suggest improvements over time in the overall approach to identifying law enforcement homicides and reporting them to the ARD program.”

 

“Can TASER Electronic Control Devices Cause Cardiac Arrest?”
Kroll, Mark W.; et al. Circulation, 2014. 10.1161/circulationaha.113.004401.

Introduction: “The electronic control device (ECD) has gained widespread acceptance as the force option for law enforcement because of its dramatic reduction in both suspect and officer injury. At the same time, advocacy groups post statements on the Internet listing the hundreds of arrest-related deaths after ECD use with the implication that the ECD involvement was causal. Studies covering a total of >48,000 forceful arrests have consistently found suspect injury rate reductions of ≈65 percent. Of the 250,000 annual ECD field uses in the United States, only 1 in 4,000 is involved in an arrest-related death.” The research looks at 12 cases of cardiac arrest after ECD application. Results: “These data suggest that the threshold of factual evidence for blaming a cardiac arrest on an ECD should be set very high. The published case reports have not met that threshold.”

 

“The Effect of the Prone Maximal Restraint Position with and without Weight Force on Cardiac Output and other Hemodynamic Measures”

Savaser, Davut J.; et al. Journal of Forensic and Legal Medicine, August 2013, Vol. 30

Abstract: “Background: The prone maximal restraint (PMR) position has been used by law enforcement and emergency care personnel to restrain acutely combative or agitated individuals. The position places the subject prone with wrists handcuffed behind the back and secured to the ankles. Prior work has indicated a reduction in inferior vena cava (IVC) diameter associated with this position when weight force is applied to the back. It is therefore possible that this position can negatively impact hemodynamic stability. Objectives: We sought to measure the impact of PMR with and without weight force on measures of cardiac function including vital signs, oxygenation, stroke volume (SV), IVC diameter, cardiac output (CO) and cardiac index (CI). Conclusions: PMR with and without weight force did not result in any changes in CO or other evidence of cardiovascular or hemodynamic compromise.”

 

“Effect of Position and Weight Force on Inferior Vena Cava Diameter: Implications for Arrest-related Death”
Ho, Jeffrey D.; et al. Forensic Science International, 2011. doi: 10.1016/j.forsciint.2011.07.001.

Abstract: “Introduction: The physiology of many sudden, unexpected arrest-related deaths (ARDs) proximate to restraint has not been elucidated. A sudden decrease in central venous return during restraint procedures could be physiologically detrimental. The impact of body position and applied weight force on central venous return has not been previously studied. In this study, we use ultrasound to measure the size of the inferior vena cava (IVC) as a surrogate of central venous return in the standing position, prone position, and with weight force applied to the thorax in the prone position…. Conclusions: The physiology involved in many sudden, unexpected ARDs has not been elucidated. However, in our study, we found a significant decrease in IVC diameter with weight force compression to the upper thorax when the subject was in the prone position. This may have implications for the tactics of restraint to aid in the prevention of sudden, unexpected ARD cases.”

 

“Evaluation of the Ventilatory Effects of the Prone Maximum Restraint (PMR) Position on Obese Human Subjects”
Sloane, Christian; et al. Forensic Science International, April 2014, 237. doi: 10.1016/j.forsciint.2014.01.017.

Abstract: “The study sought to determine the physiologic effects of the prone maximum restraint (PMR) position in obese subjects after intense exercise. We designed an experimental, randomized, cross-over trial in human subjects conducted at a university exercise physiology laboratory. Ten otherwise healthy, obese (BMI > 30) subjects performed a period of heavy exertion on a cycling ergometer to 85 percent of maximum heart rate, and then were placed in one of three positions in random order for 15 min: (1) seated with hands behind the back, (2) prone with arms to the sides, (3) PMR position. While in each position, mean arterial blood pressure (MAP), heart rate (HR), minute ventilation (VE), oxygen saturation (SaO2), and end tidal CO2 (etCO2) were measured every 5 min. There were no significant differences identified between the three positions in MAP, HR, VE, or SaO2 at any time period. There was a slight increase in heart rate at 15 min in the PMR position over the prone position (95 vs. 87). There was a decrease in end tidal CO2 at 15 min in the PMR over the prone position (32 mmHg vs. 35 mmHg). In addition, there was no evidence of hypoxia or hypoventilation during any of the monitored 15 min position periods. Conclusion: In this small study of obese subjects, there were no clinically significant differences in the cardiovascular and respiratory measures comparing seated, prone, and PMR position following exertion.”

 

“Excited Delirium Syndrome (EXDS): Defining Based on a Review of the Literature”
Vilke, Gary M.; et al. Clinical Reviews, February 2011.

Abstract: “Patients present to police, emergency medical services, and the emergency department with aggressive behavior, altered sensorium, and a host of other signs that may include hyperthermia, ‘superhuman’ strength, diaphoresis, and lack of willingness to yield to overwhelming force. A certain percentage of these individuals will go on to expire from a sudden cardiac arrest and death, despite optimal therapy. Traditionally, the forensic community would often classify these as ‘excited delirium’ deaths. Objectives: This article will review selected examples of the literature on this topic to determine if it is definable as a discrete medical entity, has a recognizable history, epidemiology, clinical presentation, pathophysiology, and treatment recommendations. Discussion: Excited delirium syndrome is characterized by delirium, agitation, acidosis, and hyperadrenergic autonomic dysfunction, typically in the setting of acute-on-chronic drug abuse or serious mental illness or a combination of both. Conclusions: Based upon available evidence, it is the consensus of an American College of Emergency Physicians Task Force that Excited Delirium Syndrome is a real syndrome with uncertain, likely multiple, etiologies.”

 


Amtrak safety, rail transit and Positive Train Control: Research roundup
https://journalistsresource.org/economics/amtrak-safety-rail-transit-and-infrastructure-issues-research-roundup/
December 19, 2017
An updated collection of recent reports, research and analysis relating to Amtrak, rail transit and Positive Train Control.


In December 2017, an Amtrak train derailed after traveling too fast on a curve in Washington, killing or injuring dozens of people. In May 2015, a train derailed under similar circumstances in Philadelphia, killing eight and sending 185 others to area hospitals.

Investigators are still trying to figure out what happened in Washington. But the National Transportation Safety Board (NTSB) learned that the engineer on the Philadelphia train was distracted by another train and, according to a report the agency released in 2016, the “most likely reason the engineer failed to slow for the curve was … because of his loss of situational awareness.”

The December crash has raised new questions about whether Positive Train Control (PTC) — technology designed to slow trains nearing dangerous conditions — would have prevented these disasters and whether the federal government erred by extending the deadline for railroads to install it.

As the U.S. Government Accountability Office noted in a December 2013 report, PTC was supposed to be implemented by the end of 2015, precisely to prevent accidents caused by human factors. This system and its accompanying 2015 deadline were mandated under the Rail Safety Improvement Act of 2008, which was passed in response to several fatal rail accidents between 2002 and 2008. PTC was described as a “groundbreaking” wireless communications system comprised of “integrated technologies capable of preventing collisions, over-speed derailments and unintended train movements.”

The Federal Railroad Administration (FRA), charged with overseeing PTC’s development and rollout, had not implemented it on the stretch of rail going through Philadelphia prior to the crash there, as Vox first noted. News reports indicate that while the technology has been installed on tracks in Washington, it is not yet operational.

In September 2015, the federal government released a report saying nearly 70 percent of railroads were not going to implement PTC until after the Dec. 31, 2015 deadline — an estimated one to five years afterward. Congress extended the deadline to Dec. 31, 2018, although railroads may qualify for more time if they meet certain criteria.

Below is a selection of research and reports relating to Amtrak, rail transit and Positive Train Control that can help inform reporting on these issues:

_______

 

“Technical and Safety Evaluation of the Southern California Regional Rail Authority Positive Train Control Deployment Project: Challenges and Lessons Learned”
Placencia, Greg; Franklin, John; Moore, James E. Report sponsored by the Federal Transit Administration, Report No. 0112, July 2017.

Abstract: “Positive Train Control (PTC), often referred to as Communication Based Train Control (CBTC), has been on the National Transportation Safety Board’s (NTSB) ‘Most Wanted List of Transportation Safety Improvements’ for several decades as a safety-enabling system. The Rail Safety Improvement Act of 2008 mandated its implementation after the September 12, 2008, Chatsworth, California, collision between trains from the Southern California Regional Rail Authority (SCRRA or Metrolink) and Union Pacific. SCRRA has undergone substantial challenges to integrate PTC into its operations. This report investigates the multilevel challenges—technological, human, organizational, and systematic—that SCRRA faced implementing the new technology as well as many of the lessons the railroad industry can learn from these challenges. Technology alone cannot ensure safety, but a properly-implemented PTC system can develop and promote high reliability practices that enable safe operations throughout an organization. The report examines interactions among the numerous Systems of Systems for their impact on successful PTC implementation.”

 

“Amtrak Five Year Service Line Plans: Fiscal Years 2017-2021”
National Railroad Passenger Corporation, 2017.

Summary: This 140-page report offers a broad overview of Amtrak operations, including ridership, ticket revenue, financial forecasts and Amtrak’s five-year capital plan. The report briefly mentions Positive Train Control. According to the report: “After six straight years of annual ridership exceeding 30 million passengers, Amtrak’s business is the strongest it has ever been in its 46-year history. Each day more than 20,000 employees nationwide commit to providing superior customer service. In FY2016 we achieved record revenues of $3.2 billion, proving people are recognizing Amtrak is simply the smarter way to travel.”

 

“A New Alignment: Strengthening America’s Commitment to Passenger Rail”
Robert Puentes; Adie Tomer; Joseph Kan. Metropolitan Policy Program, Brookings Institution, March 2013.

Excerpt: “Amtrak experienced a significant increase in national ridership after 1997. Using Amtrak’s fiscal period of October to September, Amtrak’s total boardings and alightings jumped 55.1 percent from 1997 to 2012. To put this increase in perspective, it outstrips population growth (17.1 percent) more than threefold over the same period and exceeds the growth in real gross domestic product (37.2 percent). With Amtrak setting ridership records for nine of the past ten years, including the new all-time high in 2012, there is a great chance Amtrak’s passenger growth will continue to far outpace growth in population and GDP. In addition, Amtrak’s passenger growth also exceeds all other domestic transportation modes. The most appropriate modal comparison is domestic aviation, since Amtrak and major airlines compete along certain corridors. In this case, Amtrak more than doubled the growth in domestic aviation passengers (20.0 percent) over the same 16-year period. Similarly, Amtrak also far exceeded the growth in driving (measured by vehicle miles traveled per year; 16.5 percent) and transit trips (26.4 percent). All three modes do carry larger aggregate quantities of people, but these growth trends serve as evidence of changing attitudes toward train travel.

In addition to route length, having a direct connection between major metropolitan areas is another driver of higher Amtrak ridership. Across the past 15 years, a consistent group of 10 corridors, all less than 400 miles long, generate around 70 percent of total system ridership. Each of these routes involves many of the country’s 100 largest metropolitan areas and benefit from the higher job and population densities present in those metropolitan cores. The Northeast Corridor is particularly notable in this respect, connected by the metropolitan anchors of Boston, New York, Philadelphia and Washington. These four metropolitan areas house over 35 million people, generate $2.3 trillion in annual output, and share historic and modern relationships. Similarly, all four metro areas suffer from high traffic volumes between them as well as the country’s most congested airspace (New York-Philadelphia), making the rails an attractive alternative to some of the country’s most delayed airports. Indeed, Amtrak boasts 75 percent of the share of the passenger rail/aviation market between New York and Washington.”

 

“State of the Northeast Corridor Region Transportation System”
Northeast Corridor Infrastructure and Operations Advisory Commission, Summary Report, February 2014.

Excerpt: “Available capacity on the highway, rail, and aviation networks is limited such that all three modes experience serious congestion levels with negative consequences for productivity and quality of life. Aging infrastructure, especially on the highway and rail networks, threatens to reduce the capacity we enjoy today. Existing plans and identified funding sources fail to fully address the capital needs for bringing our transportation system into a state of good repair or building new infrastructure to support growth in the economy. Despite these challenges, advances in technology and new types of intermodal and interjurisdictional coordination offer opportunities for modernizing our transportation system.”

 

“Rail Safety: Improved Human Capital Planning Could Address Emerging Safety Oversight Challenges”
U.S. Government Accountability Office, GAO-14-85, December 2013.

Excerpt: “[The Federal Railroad Administration] (FRA) has developed a risk-based approach to direct its inspection efforts, but the agency has been slow to implement broader risk reduction planning. FRA has two tools to help direct its inspection efforts — the National Inspection Plan (NIP) and the Staffing Allocation Model (SAM). The NIP process uses past accident and other data to target FRA’s inspection activities, and the SAM estimates the best allocation of the different types of inspectors across FRA regions in order to minimize damage and casualties from rail accidents. However, all eight FRA regional administrators expressed concerns about FRA’s staffing process that relies primarily on the SAM to predict appropriate regional inspector needs, and that does not allow the flexibility needed to accommodate the regions’ changing resource needs. In addition, the Railroad Safety Improvement Act of 2008 mandated safety risk reduction plans primarily for large freight and passenger railroads. FRA has not yet issued the final rule directing railroads to develop the plans, which was mandated to be issued by October 2012. According to FRA, the rulemaking was delayed due to concerns by railroads over their potential liability. Although FRA anticipates completing approval of railroad’s plans by 2016, the agency has not developed an interim plan with specific timeframes to ensure that there are no further delays in issuing regulations and that timely evaluation and approval of the railroads’ risk reduction plans occurs.”

 

“Rail Safety: Preliminary Observations on Federal Rail Safety Oversight and Positive Train Control Implementation”
Susan A. Fleming. U.S. Government Accountability Office, Testimony before the Committee on Commerce, Science, and Transportation, U.S. Senate, June 2013.

Excerpt: “According to FRA officials, in the next 5 years, about 32 percent of FRA inspectors will be eligible to retire. Although FRA officials said that they anticipate being able to replace inspectors, it can take 1 to 2 years to find, hire, train, and certify a new inspector. Finally, FRA faces other ongoing and emerging safety challenges like addressing adverse weather conditions and their impact on railroad operations and equipment, educating the public on the potential hazards of rail-highway crossings, accommodating changes in rail safety risks including new freight flows that affect the need for inspections, and hiring and training a specialized inspector workforce to provide adequate safety oversight for emerging technologies including positive train control (PTC), a communications-based system designed to prevent train accidents caused by human factors…. FRA is a small agency relative to the railroad industry, making the railroads themselves the primary guarantors of railroad safety. Based on our work to date, FRA has about 470 inspectors in its headquarters and regional offices, in addition to about 170 state inspectors. In contrast, the U.S. railroad system consists of about 760 railroads with about 230,000 employees and 200,000 miles of track in operation.”

 

“Individual Freight Effects, Capacity Utilization and Amtrak Service Quality”
Betty Krier; Chia-Mei Liu; Brian McNamara; Jerrod Sharpe. Transportation Research Part A: Policy and Practice, Vol. 64, June 2014, 163-175. doi: 10.1016/j.tra.2014.03.009

Excerpt: “We hypothesized the existence of a link between individual freight effects and Amtrak’s service quality in the post-deregulation rail industry. We tested the hypothesis with train delay models for long- and short-distance routes. Based on monthly panel data on 1117 directional Amtrak station-pairs for FY 2002 to 2007, we found that individual freight railroads had important effects on Amtrak delays. We also identified other significant delay causes. Among these, the capacity utilization rate, maintenance-related slow orders, and turn points are particularly important because of their potential to be improved upon through stakeholder actions. These findings also have policy implications. Despite the statute giving Amtrak trains priority on freight infrastructure, delays still differ significantly depending on the host railroad. It could be beneficial to look for solutions that further address the potential conflict of interest between freight railroads and Amtrak. For example, there is probably room to increase the effectiveness of the structures for incentive payments by Amtrak to the freights to reduce Amtrak’s train delays.”

 

“Railroad Safety: Amtrak Is Not Adequately Addressing Rising Drug and Alcohol Use by Employees in Safety-Sensitive Positions”
Office of the Inspector General, Amtrak. Report No. OIG-E-2012-023, September 2012.

Excerpt: “Amtrak’s HOS employees are testing positive for drugs and alcohol more frequently than their peers in the railroad industry. Our analysis of Amtrak’s random drug and alcohol test results shows that these employees have been testing positive for drugs and alcohol at a rate that has been generally trending upward since 2006, and this rate has exceeded the industry average for the past 5 years. The majority of Amtrak’s positive tests since 2006 were for drugs, primarily cocaine and marijuana. In 2011, Amtrak had 17 positive tests for drugs or alcohol, which resulted in a combined positive test rate that was about 51 percent above the industry average, its worst rate since 2007. The 2011 rate was driven by a relatively large number of positive tests by signals and mechanical employees that were both over four times the rate of their peers in the industry. Based on the random test data, we calculated, with 95 percent confidence, that if all 4,454 HOS employees had been tested in 2011, between 21 and 65 of these employees would have tested positive for drug use, with a best estimate of 43 employees. We also calculated that between 4 and 32 of Amtrak’s HOS employees would have tested positive for alcohol use, with a best estimate of 18 employees.

Amtrak is not exercising due diligence to control the use of drugs and alcohol by these employees. Until we presented Amtrak’s key senior management with our preliminary results, they were unaware of the extent of drug and alcohol use by these employees. Further, senior management is not actively engaged in the program, nor have they demonstrated that controlling drugs and alcohol is a clear priority at Amtrak, thereby making it difficult to manage the risk that drug and alcohol use poses to its employees, passengers and the public. Amtrak also did not adequately address, for several years, FRA’s concerns about Amtrak’s program to physically observe HOS employees for signs and symptoms of drug and alcohol use. Consequently, FRA has stated that it may elevate enforcement actions against Amtrak up to and including fining Amtrak in the future if the number of observations is not improved. This may become more challenging because the number of HOS employees requiring observation may increase by 2,260 in 2013 due to potential changes in the regulation.”

 

Data journalism lesson with crime stats: Parsing close-call numbers
https://journalistsresource.org/home/basic-data-analysis-making-the-call-on-statistics-and-story-focus/
January 30, 2017
Tip sheet explaining a few basic statistical techniques that can help reporters and editors make decisions when there is some ambiguity -- and a borderline “call” -- inherent in the numbers.


Was Oakland the nation’s most dangerous city in 2013? Or was it Oakland and Flint? What is a valid distinction, statistically speaking? We show the uses of a “confidence interval.”

 

Journalists love rankings and lists, especially when they involve public data that show how certain states, cities, zip codes or neighborhoods compare against one another. But when journalists select angles, write leads and craft headlines, inevitably some amount of nuance — and potentially truth — gets left behind in the act of compression.

When the weight of the data is overwhelmingly in one direction or another, the “story” can almost write itself, and accurately. For example, let’s say a researcher finds that 74 percent of U.S. transportation fatalities take place on highways, and of those, 10 percent are motorcycle riders. Here, writing a lead and headline are relatively direct, and likely to be reflective of what the data are saying.

But what about cases where there’s less clarity in the data — how do we weigh significance and make close calls? Below is a simple example of data journalism with a few straightforward statistical techniques that can help reporters and editors make more accurate decisions when there is some ambiguity — and a borderline “call” — inherent in the numbers.

The attached Excel spreadsheet contains 2013 crime data from 269 cities across the United States. This nicely cleaned-up table is courtesy of Investigative Reporters & Editors (IRE), and comes from its Data Coursepack. As IRE notes, there are all sorts of fundamental issues with using this dataset:

The Federal Bureau of Investigation has been collecting crime data from law enforcement agencies in the United States since the 1930s. The FBI discourages against ranking cities based on this data for many factors. First of all, reporting is voluntary and while the FBI provides guidelines on data collection they are not rules. Just a fraction of law-enforcement agencies report to the FBI and they may collect information differently or have different definitions for offenses. Additionally, cities such as Detroit and St. Louis often float to the top of rankings because the cities are their own counties and don’t include any suburban areas in the offense totals. While journalists need to be aware of these caveats, FBI crime data remain the best tool we have for analyzing this information across the country.

With these caveats in mind, let’s look at the IRE’s dataset and try to use it to answer a classic question: Which city is the most dangerous? If you eyeball the list and the highest rates (see the Media/Analysis tab on this post for the formulas), you’ll see it’s a close call near the top:

[Screenshot: crime data table]

If you’re a reporter in Michigan, you’d note that both Flint and Detroit are in the top 10, so the story could be their ongoing troubles with public safety. But what if you’re writing a national story? Going just by the numbers, the most dangerous city (based on this dataset) is Oakland, California, with 10.27 violent crimes per 1,000 residents. But is it fair to run a story that says, in effect, “Oakland Is Nation’s Most Dangerous City”? Oakland is definitely in the unenviable top position, but the margin between it and Flint, which has 10.21 instances of violent crime per 1,000 persons, seems pretty slender. And what about Detroit, at 9.69? How meaningful are the distinctions between these cities, and how should they be interpreted in order to focus the story?

When you’re working on deadline, you don’t often have time to do a full statistical analysis, so rules of thumb can be helpful, keeping in mind that they’re only that. In this case, what’s the percentage difference between crime rates for Oakland and the cities lower on the list? Here’s how to calculate:

  • 1) In cell S3, type in the formula “=P3/$P$2” — here you’re dividing the crime rate in Flint by that in Oakland. (The dollar signs in the formula mean “keep this reference point fixed”; it’ll be important later on.) After the formula is in place, right-click on the cell, select “format cells,” choose “percentage,” set the number of decimal places to two, and press OK. You should see 99.42 percent. So, Flint’s violence rate is very close to Oakland’s, something you could see just from the raw numbers, but here it’s in percentage terms.
  • 2) To find out how much lower Flint’s violence rate is than Oakland’s, type the formula “=1-(P3/$P$2)” in cell R3. The result should be 0.58 percent. (You could have derived this with a calculator by subtracting 99.42 from 100, but it’s easier to let Excel do the work.)
  • 3) Copy the formula in cell R3 and paste it into R4 through R270. You’ll now get percentages expressing the difference between Oakland’s rate as compared to each other city.

Just looking quickly, you can see that the rates for Oakland and Flint are very close, just 0.58 percent apart — not even 1 percent. Detroit’s is 5.66 percent lower than Oakland’s rate; Memphis is 19.77 percent lower; St. Louis is 26.11 percent lower, and so on, all the way down to Irvine, Calif. Its violent-crime rate is 0.27 per 1,000 — 97.36 percent lower than that for Oakland.
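
For readers who want to reproduce the percentage-difference step outside a spreadsheet, here is a minimal Python sketch of the same arithmetic. The handful of city rates below are rounded, illustrative values taken from the example above, not the full IRE table, so the printed percentages may differ slightly from the spreadsheet’s.

```python
# A minimal sketch of the "=1-(P3/$P$2)" step: how far below the top-ranked city's
# violent-crime rate each other city falls. The rates (per 1,000 residents) are
# rounded values from the worked example above, not the full IRE dataset.
cities = {
    "Oakland, Calif.": 10.27,
    "Flint, Mich.": 10.21,
    "Detroit, Mich.": 9.69,
    "Irvine, Calif.": 0.27,
}

top_rate = max(cities.values())  # Oakland's rate, the fixed reference point ($P$2)

for city, rate in cities.items():
    pct_lower = 1 - rate / top_rate  # same arithmetic as the spreadsheet formula
    print(f"{city}: {pct_lower:.2%} lower than the top rate")
```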

So in the column of results that we have, when does the difference in the rate of violent crime become “significant,” statistically speaking? In the broadest possible terms, the figure of 5 percent can be a helpful guideline. A smaller difference is questionable, though there are a lot of potential subtleties — for example, how bunched up or spread out are the values? Hard to say, just looking at the data quickly. Consequently, we need to derive our answer in a more rigorous way, based on all the observations in the dataset. Only this will allow us to better understand the numbers and, based on this knowledge, write with more authority.

All that’s required is a few more of the built-in formulas that Excel and other spreadsheet programs make available (a Python sketch of the same steps follows this list):

  • 1) At the bottom of your spreadsheet, in cell N272, type the word “Count.” In P272, type “=COUNT(P2:P270)”. This gives you the number of values that we have in this dataset. (Note that you could just type in 269 here, but using the COUNT function would allow you to, say, easily exclude some outlying data points to ask slightly different questions.)
  • 2) In cell N273, type the word “Mean” or “Average” — it’s just a label. Then in P273, type “=AVERAGE(P2:P270)”. Once the formula is in place, right-click on the cell and format it as a number with three decimal places. This gives you the average violent-crime rate per 1,000 people for our dataset: 2.686 per 1,000.
  • 3) In cell N274, type the label “Standard Dev.” and in P274, “=STDEV(P2:P270)”. This uses a built-in Excel formula to calculate the standard deviation, which tells us how much variation the dataset contains — are the numbers tightly bunched or spread out? The answer is 1.818.
  • 4) In cell N275, type “Standard Error” and in P275, “=(P274)/(SQRT(P272))”. Here we’re dividing the standard deviation in cell P274 by the square root of the number of data points. The more data points we have, the lower our standard error, the fewer data points, the greater the standard error. In this case, it’s 0.111.
  • 5) In cell N276, type “Confidence Int.” and in P276, “=CONFIDENCE(0.05,P274,P272)”. Another built-in function, with the level of significance we’re choosing (0.05, or 5 percent, in this case), the standard deviation we calculated in cell P274, and the number of data points shown in cell P272. The result is 0.217.
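
The five spreadsheet steps above can also be checked in a few lines of Python using only the standard library. This is a minimal sketch, with a short stand-in list in place of the full column of 269 rates; Excel’s STDEV is the sample standard deviation and CONFIDENCE(0.05, sd, n) uses the normal distribution, which is what statistics.stdev and NormalDist reproduce here.

```python
# Minimal sketch of the COUNT / AVERAGE / STDEV / standard error / CONFIDENCE steps.
# `rates` stands in for column P of the spreadsheet (violent crimes per 1,000 residents);
# only a few values are shown here for illustration.
from math import sqrt
from statistics import mean, stdev, NormalDist

rates = [10.27, 10.21, 9.69]  # ...plus the remaining cities in the dataset

n = len(rates)                          # =COUNT(P2:P270)
avg = mean(rates)                       # =AVERAGE(P2:P270)
sd = stdev(rates)                       # =STDEV(P2:P270), the sample standard deviation
std_err = sd / sqrt(n)                  # standard deviation divided by the square root of the count
z = NormalDist().inv_cdf(1 - 0.05 / 2)  # about 1.96 for a 95% confidence level
conf_int = z * std_err                  # =CONFIDENCE(0.05, sd, n)

print(f"n={n}  mean={avg:.3f}  sd={sd:.3f}  se={std_err:.3f}  95% CI half-width={conf_int:.3f}")
```

Run against the full 269-value column, these lines should reproduce the figures in the steps above (mean 2.686, standard deviation 1.818, standard error 0.111 and confidence interval 0.217) up to rounding.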

What the confidence interval means is that for any one value in the list of violence rates, from Oakland to Irvine, those that are 0.217 greater or lower have a 95 percent probability of being statistically different. This being the case, let’s find out how far around the average of 2.686 the 95 percent confidence interval extends — what’s the range of “average” in our sample.

  1. In cell N278, type “Minimum” and in cell P278, “=P273-P276.” This is the average minus the confidence interval, and the result is 2.469.
  2. In cell N279, type “Maximum” and in cell P279, “=P273+P276.” This is the average plus the confidence interval, and the result is 2.903.

So for our story, it turns out that the cities with rates of violent crime closest to the national average, 2.686, are Seattle, Washington (2.702), and Lowell, Massachusetts (2.681). Twenty-nine of our cities, from Manchester, New Hampshire (2.899) to Reno, Nevada (2.473), are within the 95 percent confidence interval for our mean — their rates are statistically indistinguishable from the average crime rate, meaning they’re effectively the same, at least based on the available data. Given that we have 269 cities, they’re actually fairly tightly bunched: 10.78 percent are within the confidence interval for the mean.

And back at the top of the scale, our more-involved calculations tell us that the rates for Oakland and Flint are effectively identical, and significant outliers. Only when you get to Detroit, whose rate of 9.69 is 0.581 less than Oakland’s — more than the 95 percent confidence interval of 0.217 — does the difference become significant, statistically speaking.
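
The interpretation step can be sketched the same way. The sketch below uses the rounded mean, confidence interval and city rates quoted above; a real analysis would plug in the full dataset and the values computed in the previous sketch.

```python
# Minimal sketch of the interpretation step: which cities fall inside the
# mean ± confidence-interval band, and which pairwise gaps exceed the interval.
# The numbers are the rounded values quoted in the text above.
rates = {"Oakland": 10.27, "Flint": 10.21, "Detroit": 9.69, "Seattle": 2.702, "Lowell": 2.681}
avg, conf_int = 2.686, 0.217

low, high = avg - conf_int, avg + conf_int
average_band = [city for city, rate in rates.items() if low <= rate <= high]
print(f"Statistically indistinguishable from the mean ({low:.3f} to {high:.3f}): {average_band}")

# Treat two cities as meaningfully different only if their rates differ by more
# than the confidence interval -- the article's rule of thumb, not a formal test.
print("Oakland vs. Flint differ?", abs(rates["Oakland"] - rates["Flint"]) > conf_int)      # False
print("Oakland vs. Detroit differ?", abs(rates["Oakland"] - rates["Detroit"]) > conf_int)  # True
```

As the tip sheet itself notes about rules of thumb, this comparison is a quick deadline check rather than a formal two-sample test.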

So then the question becomes, what is the story that the data is telling us? Here are possible headlines, all of them statistically accurate — and defensible:

  • “Oakland, California and Flint, Michigan lead nation in violent crime rates.” The rate for these two cities is nearly four times the average for all cities examined.
  • “Philadelphia and Houston the most dangerous U.S. cities over 1 million; San Diego and Phoenix the safest.” To arrive at this, we sorted by population, then sorted just the cities over 1 million in population on their violent-crime rates.
  • “New York: Big city, but violent crimes close to U.S. average.” The city’s rate is 2.929, just beyond the 95 percent confidence limit of 2.903. But looking above New York in the list, there are a lot of cities thought of as “safe” whose violence rates are significantly higher than New York’s (meaning, at least 0.217 higher, our confidence interval), including West Palm Beach, Florida (3.163), Tucson, Arizona (3.178), Peoria, Illinois (3.227) and Spokane, Washington (3.398).

The bottom line: Quick calculations are handy, and can help you in a deadline situation, but it’s always better to really dig into the numbers, even when you have a small amount of time. There are often a lot of great stories there, and far more worth telling than the simplistic read of a column of values.

Related resources: Two other Journalist’s Resource tip sheets can provide more information on data analysis: “Statistical Terms Used in Research Studies; a Primer for Media” and “Regression Analysis: A Quick Primer for Media on a Fundamental Form of Data Crunching.”

The post Data journalism lesson with crime stats: Parsing close-call numbers appeared first on The Journalist's Resource.

Polling fundamentals and concepts: An overview for journalists https://journalistsresource.org/politics-and-government/polling-fundamentals-journalists/ Thu, 10 Nov 2016 12:59:31 +0000 http://live-journalists-resource.pantheonsite.io/?p=12586 Basic polling concepts for journalists, including how polls are conducted, polling organizations, and things to watch out for when reporting on polling results.

The post Polling fundamentals and concepts: An overview for journalists appeared first on The Journalist's Resource.


The 2016 presidential election surprised many because Donald Trump’s win defied the vast majority of polls. In the aftermath, some blamed journalists for rushing information out quickly without explaining basic polling caveats. Despite all the lavish attention they receive, polls are only as valid as their design, execution and analysis.

The best polls are produced by independent, nonpartisan polling organizations with no vested interest in the outcome of the findings. These include organizations like Gallup and the Pew Research Center, as well as media groups such as CBS News/New York Times, ABC News/Washington Post and NBC News/Wall Street Journal. Many surveys are conducted by partisan actors — political consulting firms, industry groups and candidates. In some cases, the findings are biased by factors such as respondent selection and question wording. Partisan-based polls need to be carefully scrutinized and, when possible, reported in comparison with nonpartisan poll results.

It’s important to remember that polls are a snapshot of opinion at a point in time. Despite nearly seven decades of experience since Truman defied the polls and defeated Dewey in the 1948 presidential election, pollsters can still miss big: In the 2008 Democratic primary in New Hampshire, Barack Obama was pegged to win, but Hillary Clinton came out on top. A study in Public Opinion Quarterly found that “polling problems in New Hampshire in 2008 were not the exception, but the rule.” In a fluid political environment, it is risky to assume that polls can predict the distribution of opinion even a short time later.

Here are some polling concepts that journalists and students should be familiar with:

  • In a public opinion poll, relatively few individuals — the sample — are interviewed to estimate the opinions of a larger population. The mathematical laws of probability dictate that if a sufficient number of individuals are chosen truly at random, their views will tend to be representative.
  • A key for any poll is the sample size: a general rule is that the larger the sample, the smaller the sampling error. A properly drawn sample of one thousand individuals has a sampling error of about plus or minus 3%, which means that the proportions of the various opinions expressed by the people in the sample are likely to be within plus or minus 3% of those of the whole population. (A short calculation sketch of this relationship follows this list.)
  • In all scientific polls, respondents are chosen at random. Surveys with self-selected respondents — for example, people interviewed on the street or who just happen to participate in a web-based survey — are intrinsically unscientific.
  • The form, wording and order of questions can significantly affect poll results. With some complex issues — the early debate over human embryonic stem cells, for example — pollsters have erroneously measured “nonopinions” or “nonattitudes,” as respondents had not thought through the issue and voiced an opinion only because a polling organization contacted them. Poll results in this case fluctuated wildly depending on the wording of the question.
  • Generic ballot questions test the mood of voters prior to the election. Rather than mentioning candidates’ names, they ask whether the respondent would vote for a Republican or a Democrat if the election were held that day. While such questions can give a sense of where things stand overall, they miss how respondents feel about specific candidates and issues.
  • Poll questions can be asked face-to-face or by telephone, with automated calls, or by email or mail. The rise of mobile-only households has complicated polling efforts, as has the increasing reluctance of Americans to participate in telephone polls. Nevertheless, telephone polls have a better record of accuracy than Internet-based polls. Whatever the technique used, it is important to understand how a poll was conducted and to be careful about reporting any poll that seems to have employed a questionable methodology.
  • Social desirability bias occurs when respondents provide answers they think are socially acceptable rather than their true opinions. Such bias often occurs with questions on difficult issues such as abortion, race, sexual orientation and religion.
  • Beware of push polls, which are thinly disguised attempts by partisan organizations to influence voters’ opinions rather than measure them.
  • Some survey results that get reported are based on a “poll of polls,” where multiple polls are averaged together. Prominent sites that engage in this practice are FiveThirtyEight, Real Clear Politics and the Cook Political Report. There are, however, any number of methodological arguments over how to do this accurately and some statisticians have objections to mixing polls at all.
  • When reporting on public-opinion surveys, include information on how they were conducted — who was polled, when and how. Report the sample size, margin of error, the organizations that commissioned and executed the poll, and whether they have any ideological biases. Avoid polling jargon, and report the findings in as clear a language as possible.
  • Compare and contrast multiple polls when appropriate. If the same question was asked at two different points in time, what changed? If two simultaneously conducted polls give different results, find out why. Talk to unbiased polling professionals or scholars to provide insight. If you’re having trouble finding experts to put findings in perspective, exercise caution.
  • When polls appear in news stories, they typically emphasize the “horse race” aspects of politics. This focus can obscure poll findings that are of equal or greater significance, such as how voters feel about the issues and how their candidate preferences are affected by the issues.
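
To make the sample-size rule of thumb concrete, here is a minimal Python sketch of the standard margin-of-error calculation at the 95 percent confidence level; it assumes simple random sampling and the worst-case 50/50 split that pollsters usually quote.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95 percent margin of error for a simple random sample.

    proportion=0.5 is the worst case (largest margin), which is why
    published margins usually assume it."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A properly drawn sample of 1,000 respondents gives roughly plus or minus 3 points.
print(round(margin_of_error(1000) * 100, 1))   # about 3.1
print(round(margin_of_error(500) * 100, 1))    # about 4.4
print(round(margin_of_error(2000) * 100, 1))   # about 2.2
```

Note that quadrupling the sample only halves the margin of error, which is why ever-larger samples quickly stop paying for themselves.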

For those interested in a deeper dive into polling, Journalist’s Resource has a number of academic studies on measuring public opinion: “I’m Not Voting for Her: Polling Discrepancies and Female Candidates,” “Measuring Americans’ Concerns about Climate Change,” “Dynamic Public Opinion: Communication Effects over Time” and “Exit Polls: Better or Worse Since the 2000 Election?” are just a few of those available.

____

This article is based on work by Thomas Patterson, Harvard’s Bradlee Professor of Government and the Press and research director of Journalist’s Resource; Charlotte Grimes, Knight Chair in Political Reporting at Syracuse University; and the Roper Center for Public Opinion Research at the University of Connecticut.

Keywords: polling, elections

The post Polling fundamentals and concepts: An overview for journalists appeared first on The Journalist's Resource.

Global warming, rising seas and coastal cities: Trends, impacts and adaptation strategies https://journalistsresource.org/environment/impact-global-warming-rising-seas-coastal-cities/ Wed, 07 Sep 2016 13:02:38 +0000 http://live-journalists-resource.pantheonsite.io/?p=41252 Updated roundup of research on climate-change risks and the regions and groups most threatened by them, attempts to mitigate these risks, and adaptive efforts for coastal regions.

The post Global warming, rising seas and coastal cities: Trends, impacts and adaptation strategies appeared first on The Journalist's Resource.


Rising seas are one of the central impacts of global warming, and they’re not some abstract challenge for a future day: Areas of the United States now routinely have “sunny-day flooding,” with salt water pushing up through drains even in the absence of storms. When London built the Thames Barrier in 1982, it was expected to be used two to three times a year at most, but has since been employed at twice that rate, a pace that is expected to accelerate. U.S. Army facilities in coastal Virginia already see “recurrent flooding,” according to the Department of Defense. And longer range, things get even more challenging: For example, because of a sea-level “hotspot” on the Northeastern U.S. coast, tides could rise as much as 7.5 feet by 2100 in cities such as Boston. Proposals for a “Venice by the Charles River” are anything but far-fetched.

Yet even as the seas are rising, coastal areas are booming: From 1970 to 2010, the population in the coastal United States grew 39 percent, according to the National Oceanic and Atmospheric Administration, which expects the population in these areas to increase another 8 percent between 2010 and 2020. As of 2010, 123 million Americans lived in coastal counties, at population densities more than four times higher than those of the country as a whole. The same pattern holds around the globe: 60 percent of cities with populations over 5 million are within 60 miles of the sea, and they’re growing rapidly. This rush to the shore puts more lives, wealth and infrastructure in harm’s way, increasing losses when storms inevitably hit. A 2013 study in Global Environmental Change estimates that by 2100 sea-level rise could put up to 7.4 million U.S. residents at risk — many of them already disadvantaged — and cut the country’s GDP by as much as $289 billion.

While climate change remains a politically charged issue in the U.S. despite the overwhelming evidence, efforts are underway to better understand the risks, prepare for the future and increase community resiliency. The Department of Defense, per its “Climate Change Adaptation Roadmap” of October 2014, has sought to adapt its facilities to a projected sea-level rise of 1.5 feet as early as 2034, though congressional Republicans have blocked efforts to fund the research. The landmark Paris Climate Accord, negotiated under the U.N.’s Framework Convention on Climate Change (UNFCCC), was endorsed by 174 nations in April 2016. That notably included both the United States and China, though the U.S. has not yet ratified it. The pact lays out ways to limit or reverse harmful trends in greenhouse-gas emissions, with many suggestions that are “actionable” by state and local governments, businesses and individuals. Resources like FloodTools and the National Flood Insurance Program’s FloodSmart website aim to educate citizens about flood risks and preparedness measures, while the Georgetown Climate Center has a page on state and local adaptation plans. And such adaptations can be effective: A 2014 study in the Proceedings of the National Academy of Sciences found that large-scale urban adaptation strategies have the potential to counteract some of the effects of long-term global climate change.

Below is a series of studies examining climate-change related risks and the regions and demographic groups most threatened by them; efficacy of attempts thus far to mitigate these risks; and adaptive solutions for coastal regions. Many recent studies focus on particular communities facing inundation around the world.

———————–

“Trapped in Place? Segmented Resilience to Hurricanes in the Gulf Coast, 1970–2005”
Logan, John R.; Issar, Sukriti; Xu, Zengwang. Demography, 2016. doi:10.1007/s13524-016-0496-4.

Abstract: “Hurricanes pose a continuing hazard to populations in coastal regions. This study estimates the impact of hurricanes on population change in the years 1970–2005 in the U.S. Gulf Coast region. Geophysical models are used to construct a unique data set that simulates the spatial extent and intensity of wind damage and storm surge from the 32 hurricanes that struck the region in this period. Multivariate spatial time-series models are used to estimate the impacts of hurricanes on population change. Population growth is found to be reduced significantly for up to three successive years after counties experience wind damage, particularly at higher levels of damage. Storm surge is associated with reduced population growth in the year after the hurricane. Model extensions show that change in the white and young adult population is more immediately and strongly affected than is change for blacks and elderly residents. Negative effects on population are stronger in counties with lower poverty rates. The differentiated impact of hurricanes on different population groups is interpreted as segmented withdrawal—a form of segmented resilience in which advantaged population groups are more likely to move out of or avoid moving into harm’s way while socially vulnerable groups have fewer choices.”

 

“A Comprehensive Review of Climate Adaptation in the United States: More Than Before, but Less than Needed”
Bierbaum, Rosina; et al. Mitigation and Adaptation Strategies for Global Change, March 2013, Vol. 18, Issue 3, 361-406. doi: 10.1007/s11027-012-9423-1.

Abstract: “We reviewed existing and planned adaptation activities of federal, tribal, state, and local governments and the private sector in the United States to understand what types of adaptation activities are underway across different sectors and scales throughout the country. Primary sources of review included material officially submitted for consideration in the upcoming 2013 U.S. National Climate Assessment and supplemental peer-reviewed and grey literature [working papers]. Although substantial adaptation planning is occurring in various sectors, levels of government, and the private sector, few measures have been implemented and even fewer have been evaluated. Most adaptation actions to date appear to be incremental changes, not the transformational changes that may be needed in certain cases to adapt to significant changes in climate. While there appear to be no one-size-fits-all adaptations, there are similarities in approaches across scales and sectors, including mainstreaming climate considerations into existing policies and plans, and pursuing no- and low-regrets strategies. Despite the positive momentum in recent years, barriers to implementation still impede action in all sectors and across scales. The most significant barriers include lack of funding, policy and institutional constraints, and difficulty in anticipating climate change given the current state of information on change. However, the practice of adaptation can advance through learning by doing, stakeholder engagements (including “listening sessions”), and sharing of best practices.”

 

“Future Flood Losses in Major Coastal Cities”
Hallegatte, Stephane; Green, Colin; Nicholls, Robert J.; Corfee-Morlot, Jan. Nature Climate Change, August 2013, 3:802-806. doi: 10.1038/nclimate1979.

Abstract: “Flood exposure is increasing in coastal cities owing to growing populations and assets, the changing climate, and subsidence. Here we provide a quantification of present and future flood losses in the 136 largest coastal cities. Using a new database of urban protection and different assumptions on adaptation, we account for existing and future flood defenses. Average global flood losses in 2005 are estimated to be approximately U.S. $6 billion per year, increasing to U.S. $52 billion by 2050 with projected socio-economic change alone. With climate change and subsidence, present protection will need to be upgraded to avoid unacceptable losses of U.S.$1 trillion or more per year. Even if adaptation investments maintain constant flood probability, subsidence and sea-level rise will increase global flood losses to U.S.$60–63 billion per year in 2050. To maintain present flood risk, adaptation will need to reduce flood probabilities below present values. In this case, the magnitude of losses when floods do occur would increase, often by more than 50%, making it critical to also prepare for larger disasters than we experience today. The analysis identifies the cities that seem most vulnerable to these trends, that is, where the largest increase in losses can be expected.”

 

“Increasing risk of compound flooding from storm surge and rainfall for major U.S. cities”
Wahl, Thomas; et al. Nature Climate Change, 2015. doi:10.1038/nclimate2736.

Abstract: “When storm surge and heavy precipitation co-occur, the potential for flooding in low-lying coastal areas is often much greater than from either in isolation. Knowing the probability of these compound events and understanding the processes driving them is essential to mitigate the associated high-impact risks. Here we determine the likelihood of joint occurrence of these two phenomena for the contiguous United States (US) and show that the risk of compound flooding is higher for the Atlantic/Gulf coast relative to the Pacific coast. We also provide evidence that the number of compound events has increased significantly over the past century at many of the major coastal cities. Long-term sea-level rise is the main driver for accelerated flooding along the US coastline; however, under otherwise stationary conditions (no trends in individual records), changes in the joint distributions of storm surge and precipitation associated with climate variability and change also augment flood potential. For New York City (NYC)—as an example—the observed increase in compound events is attributed to a shift towards storm surge weather patterns that also favour high precipitation. Our results demonstrate the importance of assessing compound flooding in a non-stationary framework and its linkages to weather and climate.”

 

“Relative Sea-level Rise and the Conterminous United States: Consequences of Potential Land Inundation in Terms of Population at Risk and GDP Loss”
Haer, Toon; Kalnay, Eugenia; Kearney, Michael; Moll, Henk. Global Environmental Change, September 2013, 23:1627-1636. doi: 10.1016/j.gloenvcha.2013.09.005

Abstract: “Global sea-level rise poses a significant threat not only for coastal communities as development continues but also for national economies. This paper presents estimates of how future changes in relative sea-level rise puts coastal populations at risk, as well as affect overall GDP in the conterminous United States. We use four different sea-level rise scenarios for 2010–2100: a low-end scenario (Extended Linear Trend) a second low-end scenario based on a strong mitigative global warming pathway (Global Warming Coupling 2.6), a high-end scenario based on rising radiative forcing (Global Warming Coupling 8.5) and a plausible very high-end scenario, including accelerated ice cap melting (Global Warming Coupling 8.5+). Relative sea-level rise trends for each U.S. state are employed to obtain more reasonable rates for these areas, as long-term rates vary considerably between the U.S. Atlantic, Gulf and Pacific coasts because of the Glacial Isostatic Adjustment, local subsidence and sediment compaction, and other vertical land movement. Using these trends for the four scenarios reveals that the relative sea levels predicted by century’s end could range — averaged over all states — from 0.2 to 2.0 m above present levels. The estimates for the amount of land inundated vary from 26,000 to 76,000 km2. Upwards of 1.8 to 7.4 million people could be at risk, and GDP could potentially decline by USD 70–289 billion…. Even the most conservative scenario shows a significant impact for the U.S., emphasizing the importance of adaptation and mitigation.”

 

“Risks of Sea Level Rise to Disadvantaged Communities in the United States”
Martinich, Jeremy; Neumann, James; Ludwig, Lindsay; Jantarasami, Lesley. Mitigation and Adaptation Strategies for Global Change, February 2013, Vol. 18, Issue 2, 169-185. doi: 10.1007/s11027-011-9356-0.

Abstract: “Climate change and sea level rise (SLR) pose risks to coastal communities around the world, but societal understanding of the distributional and equity implications of SLR impacts and adaptation actions remains limited. Here, we apply a new analytic tool to identify geographic areas in the contiguous United States that may be more likely to experience disproportionate impacts of SLR, and to determine if and where socially vulnerable populations would bear disproportionate costs of adaptation. We use the Social Vulnerability Index (SoVI) to identify socially vulnerable coastal communities, and combine this with output from a SLR coastal property model that evaluates threats of inundation and the economic efficiency of adaptation approaches to respond to those threats. Results show that under the mid-SLR scenario (66.9 cm by 2100), approximately 1,630,000 people are potentially affected by SLR. Of these, 332,000 (∼20%) are among the most socially vulnerable. The analysis also finds that areas of higher social vulnerability are much more likely to be abandoned than protected in response to SLR. This finding is particularly true in the Gulf region of the United States, where over 99% of the most socially vulnerable people live in areas unlikely to be protected from inundation, in stark contrast to the least socially vulnerable group, where only 8% live in areas unlikely to be protected. Our results demonstrate the importance of considering the equity and environmental justice implications of SLR in climate change policy analysis and coastal adaptation planning.”

 

“Coastal Flood Damage and Adaptation Costs under 21st Century Sea-level Rise”
Hinkel, Jochen; et al. Proceedings of the National Academy of Sciences, March 2014, Vol. 111, No. 9, 3292-3297. doi: 10.1073/pnas.1222469111.

Abstract: “Coastal flood damage and adaptation costs under 21st century sea-level rise are assessed on a global scale taking into account a wide range of uncertainties in continental topography data, population data, protection strategies, socioeconomic development and sea-level rise. Uncertainty in global mean and regional sea level was derived from four different climate models from the Coupled Model Intercomparison Project Phase 5, each combined with three land-ice scenarios based on the published range of contributions from ice sheets and glaciers. Without adaptation, 0.2–4.6% of global population is expected to be flooded annually in 2100 under 25–123 cm of global mean sea-level rise, with expected annual losses of 0.3–9.3% of global gross domestic product. Damages of this magnitude are very unlikely to be tolerated by society and adaptation will be widespread. The global costs of protecting the coast with dikes are significant with annual investment and maintenance costs of US$12–71 billion in 2100, but much smaller than the global cost of avoided damages even without accounting for indirect costs of damage to regional production supply. Flood damages by the end of this century are much more sensitive to the applied protection strategy than to variations in climate and socioeconomic scenarios as well as in physical data sources (topography and climate model). Our results emphasize the central role of long-term coastal adaptation strategies. These should also take into account that protecting large parts of the developed coast increases the risk of catastrophic consequences in the case of defense failure.”

 

“Climate Change Risks to U.S. Infrastructure: Impacts on Roads, Bridges, Coastal Development and Urban Drainage”
Neumann, James E.; et al. Climatic Change, January 2014. doi: 10.1007/s10584-013-1037-4.

Abstract: “Changes in temperature, precipitation, sea level, and coastal storms will likely increase the vulnerability of infrastructure across the United States. Using four models that analyze vulnerability, impacts, and adaptation, this paper estimates impacts to roads, bridges, coastal properties, and urban drainage infrastructure and investigates sensitivity to varying greenhouse gas emission scenarios, climate sensitivities, and global climate models. The results suggest that the impacts of climate change in this sector could be large, especially in the second half of the 21st century as sea-level rises, temperature increases, and precipitation patterns become more extreme and affect the sustainability of long-lived infrastructure. Further, when considering sea-level rise, scenarios which incorporate dynamic ice sheet melting yield impact model results in coastal areas that are roughly 70% to 80% higher than results that do not incorporate dynamic ice sheet melting. The potential for substantial economic impacts across all infrastructure sectors modeled, however, can be reduced by cost-effective adaptation measures. Mitigation policies also show potential to reduce impacts in the infrastructure sector — a more aggressive mitigation policy reduces impacts by 25% to 35%, and a somewhat less aggressive policy reduces impacts by 19% to 30%. The existing suite of models suitable for estimating these damages nonetheless covers only a small portion of expected infrastructure sector effects from climate change, so much work remains to better understand impacts on electric and telecommunications networks, rail, and air transportation systems.”

 

“Increased threat of tropical cyclones and coastal flooding to New York City during the anthropogenic era”
Reed, A.J.; et al., Proceedings of the National Academy of Sciences, 2015. doi: 10.1073/pnas.1513127112.

Abstract: “In a changing climate, future inundation of the United States’ Atlantic coast will depend on both storm surges during tropical cyclones and the rising relative sea levels on which those surges occur. However, the observational record of tropical cyclones in the North Atlantic basin is too short (A.D. 1851 to present) to accurately assess long-term trends in storm activity. To overcome this limitation, we use proxy sea level records, and downscale three CMIP5 models to generate large synthetic tropical cyclone data sets for the North Atlantic basin; driving climate conditions span from A.D. 850 to A.D. 2005. We compare pre-anthropogenic era (A.D. 850–1800) and anthropogenic era (A.D.1970–2005) storm surge model results for New York City, exposing links between increased rates of sea level rise and storm flood heights. We find that mean flood heights increased by ∼1.24 m (due mainly to sea level rise) from ∼A.D. 850 to the anthropogenic era, a result that is significant at the 99% confidence level. Additionally, changes in tropical cyclone characteristics have led to increases in the extremes of the types of storms that create the largest storm surges for New York City. As a result, flood risk has greatly increased for the region; for example, the 500-y return period for a ∼2.25-m flood height during the preanthropogenic era has decreased to ∼24.4 y in the anthropogenic era. Our results indicate the impacts of climate change on coastal inundation, and call for advanced risk management strategies.”

 

“Reducing Coastal Risks on the East and Gulf Coasts”
Committee on U.S. Army Corps of Engineers Water Resources Science, Engineering, and Planning: Coastal Risk Reduction; Water Science and Technology Board; Ocean Studies Board; Division on Earth and Life Studies; National Research Council. 2014, the National Academies Press.

Summary: “Hurricane- and coastal-storm-related economic losses have increased substantially over the past century, largely due to expanding population and development in the most susceptible coastal areas… This report calls for the development of a national vision for managing risks from coastal storms (hereafter, termed “coastal risk”) that includes a long-term view, regional solutions, and recognition of the full array of economic, social, environmental, and life-safety benefits that come from risk reduction efforts. To support this vision, a national coastal risk assessment is needed to identify those areas with the greatest risks that are high priorities for risk reduction efforts. Benefit-cost analysis, constrained by other important environmental, social, and life- safety factors, provides a reasonable framework for evaluating national investments in coastal risk reduction. However, extensive collaboration and additional policy changes will be necessary to fully embrace this vision and move from a nation that is primarily reactive to coastal disasters to one that invests wisely in coastal risk reduction and builds resilience among coastal communities.”

 

“The Role of Ecosystems in Coastal Protection: Adapting to Climate Change and Coastal Hazards”
Spalding, Mark D.; Ruffo, Susan; Lacambra, Carmen; Meliane, Imen; Hale, Lynne Zeitlin; Shephard, Christine C.; Beck, Michael W. Ocean and Coastal Management, March 2014, 90:50-57.

Abstract: “Coastal ecosystems, particularly intertidal wetlands and reefs (coral and shellfish), can play a critical role in reducing the vulnerability of coastal communities to rising seas and coastal hazards, through their multiple roles in wave attenuation, sediment capture, vertical accretion, erosion reduction and the mitigation of storm surge and debris movement. There is growing understanding of the array of factors that affect the strength or efficacy of these ecosystem services in different locations, as well as management interventions which may restore or enhance such values. Improved understanding and application of such knowledge will form a critical part of coastal adaptation planning, likely reducing the need for expensive engineering options in some locations, and providing a complementary tool in hybrid engineering design. Irrespective of future climate change, coastal hazards already impact countless communities and the appropriate use of ecosystem-based adaptation strategies offers a valuable and effective tool for present-day management. Maintaining and enhancing coastal systems will also support the continued provision of other coastal services, including the provision of food and maintenance of coastal resource dependent livelihoods.”

 

“Sea Level and Global Ice Volumes from the Last Glacial Maximum to the Holocene”
Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm. Proceedings of the National Academy of Sciences, September 2014. doi: 10.1073/pnas.1411762111.

Abstract: “Several areas of earth science require knowledge of the fluctuations in sea level and ice volume through glacial cycles. These include understanding past ice sheets and providing boundary conditions for paleoclimate models, calibrating marine-sediment isotopic records, and providing the background signal for evaluating anthropogenic contributions to sea level. From ~1,000 observations of sea level, allowing for isostatic and tectonic contributions, we have quantified the rise and fall in global ocean and ice volumes for the past 35,000 years. Of particular note is that during the ~6,000 years up to the start of the recent rise ~100−150 years ago, there is no evidence for global oscillations in sea level on time scales exceeding ~200-year duration or 15−20 cm amplitude.”

 

“Sea-level rise due to polar ice-sheet mass loss during past warm periods”
Dutton, A; et al. Science, 2015. doi: 10.1126/science.aaa4019.

Abstract: “Interdisciplinary studies of geologic archives have ushered in a new era of deciphering magnitudes, rates, and sources of sea-level rise from polar ice-sheet loss during past warm periods. Accounting for glacial isostatic processes helps to reconcile spatial variability in peak sea level during marine isotope stages 5e and 11, when the global mean reached 6 to 9 meters and 6 to 13 meters higher than present, respectively. Dynamic topography introduces large uncertainties on longer time scales, precluding robust sea-level estimates for intervals such as the Pliocene. Present climate is warming to a level associated with significant polar ice-sheet loss in the past. Here, we outline advances and challenges involved in constraining ice-sheet sensitivity to climate change with use of paleo–sea level records.”

 

“The Multimillennial Sea-level Commitment of Global Warming”
Levermann, Anders; et al. Proceedings of the National Academy of Sciences, June 2013, Vol. 110, No. 34. doi: 10.1073/pnas.1219414110.

Abstract: “Global mean sea level has been steadily rising over the last century, is projected to increase by the end of this century, and will continue to rise beyond the year 2100 unless the current global mean temperature trend is reversed. Inertia in the climate and global carbon system, however, causes the global mean temperature to decline slowly even after greenhouse gas emissions have ceased, raising the question of how much sea-level commitment is expected for different levels of global mean temperature increase above preindustrial levels. Although sea-level rise over the last century has been dominated by ocean warming and loss of glaciers, the sensitivity suggested from records of past sea levels indicates important contributions should also be expected from the Greenland and Antarctic Ice Sheets…. Oceanic thermal expansion and the Antarctic Ice Sheet contribute quasi-linearly, with 0.4 m °C−1 and 1.2 m °C−1 of warming, respectively. The saturation of the contribution from glaciers is overcompensated by the nonlinear response of the Greenland Ice Sheet. As a consequence we are committed to a sea-level rise of approximately 2.3 m °C−1 within the next 2,000 years. Considering the lifetime of anthropogenic greenhouse gases, this imposes the need for fundamental adaptation strategies on multicentennial time scales.”

 

“From the extreme to the mean: Acceleration and tipping points of coastal inundation from sea level rise”
Sweet, William V.; Park, Joseph. Earth’s Future, 2014. doi: 10.1002/2014EF000272.

Abstract: “Relative sea level rise (RSLR) has driven large increases in annual water level exceedances (duration and frequency) above minor (nuisance level) coastal flooding elevation thresholds established by the National Weather Service (NWS) at U.S. tide gauges over the last half-century. For threshold levels below 0.5 m above high tide, the rates of annual exceedances are accelerating along the U.S. East and Gulf Coasts, primarily from evolution of tidal water level distributions to higher elevations impinging on the flood threshold. These accelerations are quantified in terms of the local RSLR rate and tidal range through multiple regression analysis. Along the U.S. West Coast, annual exceedance rates are linearly increasing, complicated by sharp punctuations in RSLR anomalies during El Niño Southern Oscillation (ENSO) phases, and we account for annual exceedance variability along the U.S. West and East Coasts from ENSO forcing. Projections of annual exceedances above local NWS nuisance levels at U.S. tide gauges are estimated by shifting probability estimates of daily maximum water levels over a contemporary 5-year period following probabilistic RSLR projections of Kopp et al. (2014) for representative concentration pathways (RCP) 2.6, 4.5, and 8.5. We suggest a tipping point for coastal inundation (30 days/per year with a threshold exceedance) based on the evolution of exceedance probabilities. Under forcing associated with the local-median projections of RSLR, the majority of locations surpass the tipping point over the next several decades regardless of specific RCP.”

Keywords: research roundup, Katrina, Sandy, preparedness, global warming, water, oceans, sea level rise

The post Global warming, rising seas and coastal cities: Trends, impacts and adaptation strategies appeared first on The Journalist's Resource.

Excessive or reasonable force by police? Research on law enforcement and racial conflict https://journalistsresource.org/criminal-justice/police-reasonable-force-brutality-race-research-review-statistics/ Thu, 28 Jul 2016 14:20:52 +0000 http://live-journalists-resource.pantheonsite.io/?p=40130 Updated review of studies and reports that provide insights into law enforcement actions and recent patterns in America.

The post Excessive or reasonable force by police? Research on law enforcement and racial conflict appeared first on The Journalist's Resource.


Allegations of the use of excessive force by U.S. police departments continue to generate headlines more than two decades after the 1992 Los Angeles riots brought the issue to mass public attention and spurred some law enforcement reforms. Deaths at the hands of police in recent years have fueled a lively debate across the nation.

In a number of closely watched cases involving the deaths of young black men, police have been acquitted, generating uproar and concerns about equal justice for all. On Staten Island, N.Y., the July 2014 death of Eric Garner because of the apparent use of a “chokehold” by an officer sparked outrage. A month later in Ferguson, Mo., the fatal shooting of teenager Michael Brown by officer Darren Wilson ignited protests, and a grand jury’s decision not to indict Wilson triggered further unrest. In November, Tamir Rice was shot by police in Cleveland, Ohio. He was 12 years old and playing with a toy pistol. On April 4, 2015, Walter L. Scott was shot by a police officer after a routine traffic stop in North Charleston, S.C. The same month, Freddie Gray died while in police custody in Baltimore, setting off widespread unrest. The policeman in the South Carolina case, Michael T. Slager, was charged with murder based on a cellphone video. In Baltimore, the driver of the police van in which Gray died, Caesar Goodson, was charged with second-degree murder, with lesser charges for five other officers. There have been no indictments in the earlier cases.

These follow other recent incidents and controversies, including an April 2014 finding by the U.S. Department of Justice (DOJ), following a two-year investigation, that the Albuquerque, N.M., police department “engages in a pattern or practice of use of excessive force, including deadly force, in violation of the Fourth Amendment,” and a similar DOJ finding in December 2014 with regard to the Cleveland police department. In March 2015, the DOJ also issued a report detailing a pattern of “clear racial disparities” and “discriminatory intent” on the part of the Ferguson, Mo., police department.

As the Washington Post reported in July 2015, a pervasive problem that is only now beginning to be recognized is the lack of training for officers dealing with mentally ill persons, a situation that can often escalate to violent confrontations.

The events of 2014-2016 have prompted further calls by some police officials, politicians and scholars for another round of national reforms, in order to better orient “police culture” toward democratic ideals.

Two sides, disparate views

Surveys in recent years with minority groups — Latinos and African-Americans, in particular — suggest that confidence in law enforcement is relatively low, and large portions of these communities believe police are likely to use excessive force on suspects. A 2014 Pew Research Center survey confirms stark racial divisions in response to the Ferguson police shooting, as well, while Gallup provides insights on historical patterns of distrust. According to a Pew/USA Today poll conducted in August 2014, Americans of all races collectively “give relatively low marks to police departments around the country for holding officers accountable for misconduct, using the appropriate amount of force, and treating racial and ethnic groups equally.” Social scientists who have done extensive field research and interviews note the deep sense of mistrust embedded in many communities.

Numerous efforts have been made by members of the law enforcement community to ameliorate these situations, including promising strategies such as “community policing.” Still, from a police perspective, law enforcement in the United States continues to be dangerous work — America has a higher homicide rate than other developed nations, and many more guns per capita. Citizens seldom learn of the countless incidents where officers choose to hold fire and display restraint under extreme stress. Some research has shown that even well-trained officers are not consistently able to fire their weapon in time before a suspect holding a gun can raise it and fire first; this makes split-second judgments, even under “ideal” circumstances, exceptionally difficult. But as the FBI points out, police departments and officers sometimes do not handle the aftermath of incidents well in terms of transparency and clarity, even when force was reasonably applied, fueling public confusion and anger.

In 2013, 49,851 officers were assaulted in the line of duty, with an injury rate of 29.2 percent, according to the FBI. Twenty-seven were murdered that year.

FBI Director: No “reliable grasp” of problem

How common are such incidents of police use of force, both lethal and non-lethal, in the United States? Has there been progress in America? The indisputable reality is that we do not fully know. FBI Director James B. Comey stated the following in a remarkable February 2015 speech:

Not long after riots broke out in Ferguson late last summer, I asked my staff to tell me how many people shot by police were African-American in this country. I wanted to see trends. I wanted to see information. They couldn’t give it to me, and it wasn’t their fault. Demographic data regarding officer-involved shootings is not consistently reported to us through our Uniform Crime Reporting Program. Because reporting is voluntary, our data is incomplete and therefore, in the aggregate, unreliable.

I recently listened to a thoughtful big city police chief express his frustration with that lack of reliable data. He said he didn’t know whether the Ferguson police shot one person a week, one a year, or one a century, and that in the absence of good data, “all we get are ideological thunderbolts, when what we need are ideological agnostics who use information to try to solve problems.” He’s right.

The first step to understanding what is really going on in our communities and in our country is to gather more and better data related to those we arrest, those we confront for breaking the law and jeopardizing public safety, and those who confront us. “Data” seems a dry and boring word but, without it, we cannot understand our world and make it better.

How can we address concerns about “use of force,” how can we address concerns about officer-involved shootings if we do not have a reliable grasp on the demographics and circumstances of those incidents? We simply must improve the way we collect and analyze data to see the true nature of what’s happening in all of our communities.

The FBI tracks and publishes the number of “justifiable homicides” reported by police departments. But, again, reporting by police departments is voluntary and not all departments participate. That means we cannot fully track the number of incidents in which force is used by police, or against police, including non-fatal encounters, which are not reported at all.

Without a doubt, training for police has become more standardized and professionalized in recent decades. A 2008 paper in the Northwestern University Law Review provides useful background on the evolving legal and policy history relating to the use of force by police and the “reasonableness” standard by which officers are judged. Related jurisprudence is still being defined, most recently in the 2007 Scott v. Harris decision by the U.S. Supreme Court. But inadequate data and reporting — and the challenge of uniformly defining excessive versus justified force — make objective understanding of trends difficult.

A 2015 report conducted for the Justice Department analyzed 394 incidents involving deadly police force in Philadelphia from 2007-2014. It found that “officers do not receive regular, consistent training on the department’s deadly force policy”; that early training among recruits is sometimes inadequate in regard to these issues; that investigations into such incidents are not consistent; and that officers “need more less-lethal options.”

For perhaps the best overall summary of police use-of-force issues, see “A Multi-method Evaluation of Police Use of Force Outcomes: Final Report to the National Institute of Justice,” a 2010 study conducted by some of the nation’s leading criminal justice scholars.

Available statistics, background on use of force

The federal Justice Department releases statistics on this and related issues, although these datasets are only periodically updated: It found that in 2015, 985,300 of the 53.5 million U.S. residents aged 16 or older who had any contact with police — 1.8 percent — experienced threats or use of force. Law enforcement officials were more likely to threaten or use force on black people and Hispanics than white people, according to an October 2018 report. “When police initiated the contact, blacks (5.2 percent) and Hispanics (5.1 percent) were more likely to experience the threat or use of physical force than whites (2.4 percent), and males (4.4 percent) were more likely to experience the threat or use of physical force than females (1.8 percent).” Of those who experienced a threat or use of force, 84 percent considered it to be excessive. In terms of the volume of citizen complaints, the Justice Department also found that there were 26,556 complaints lodged in 2002; this translates to “33 complaints per agency and 6.6 complaints per 100 full-time sworn officers.” However, “overall rates were higher among large municipal police departments, with 45 complaints per agency, and 9.5 complaints per 100 full-time sworn officers.” In 2011, about 62.9 million people had contact with the police.

In terms of the use of lethal force, aggregate statistics on incidents of all types are difficult to obtain from official sources. Some journalists are trying to rectify this; and some data journalists question what few official national statistics are available. The Sunlight Foundation explains some of the data problems, while also highlighting databases maintained by the Centers for Disease Control (CDC). The available data, which does not paint a complete national picture, nevertheless raise serious questions, Sunlight notes:

[A]ccording to the CDC, in Oklahoma the rate at which black people are killed per capita by law enforcement is greater than anywhere else in the country. That statistic is taken from data collected for the years 1999-2011. During that same time period, Oklahoma’s rate for all people killed by law enforcement, including all races, is second only to New Mexico. However, Oklahoma, the District of Columbia, Nevada and Oregon are all tied for the rate at which people are killed. (The CDC treats the District of Columbia as a state when collecting and displaying statistics.) In Missouri, where Mike Brown lived and died, black people are killed by law enforcement twice as frequently as white people. Nationwide, the rate at which black people are killed by law enforcement is 3 times higher than that of white people.

As mentioned, the FBI does publish statistics on “justifiable homicide” by law enforcement officers: The data show that there have been about 400 such incidents nationwide each year. However, FiveThirtyEight, among other journalism outlets, has examined the potential problems with these figures. News investigations suggest that the rates of deadly force usage are far from uniform. For example, Los Angeles saw an increase in such incidents in 2011, while Massachusetts saw more officers firing their weapon over the period 2009-2013.

The Bureau of Justice Statistics did publish a report in 2016 that found that about 1,900 people had died while in police custody during the prior year. That report, which offered details about the cause of death during a three-month period, found that nearly two-thirds of deaths in police custody between June and August of 2015 were homicides — including justifiable homicides by a law enforcement officer — while nearly one-fifth were suicides and just over one-tenth were accidental deaths.

The academic community has also provided some insights in this area. A 2008 study from Matthew J. Hickman of Seattle University, Alex R. Piquero of the University of Maryland and Joel H. Garner of the Joint Centers for Justice Studies reviewed some of the best studies and data sources available to come up with a more precise national estimate for incidents of non-lethal force. They note that among 36 different studies published since the 1980s, the rates of force asserted vary wildly, from a high of more than 30 percent to rates in the low single digits. The researchers analyze Police-Public Contact Survey (PPCS) data and Bureau of Justice Statistics Survey of Inmates in Local Jails (SILJ) data and conclude that an estimated 1.7 percent of all contacts result in police threats or use of force, while 20 percent of arrests do.

A 2012 study in the Criminal Justice Policy Review analyzed the patterns of behavior of one large police department — more than 1,000 officers — and found that a “small proportion of officers are responsible for a large proportion of force incidents, and that officers who frequently use force differ in important and significant ways from officers who use force less often (or not at all).” A 2007 study in Criminal Justice and Behavior, “Police Education, Experience and the Use of Force,” found that officers with more experience and education may be less likely to use force, while a review of case studies suggests that specific training programs and accountability structures can lower the use of violence by police departments.

A 2016 working paper from the National Bureau of Economic Research (NBER) came to a conclusion that surprised some observers. Across the U.S., though blacks are 21.3 percent more likely to be involved in an altercation with police where a weapon is drawn, the researchers found no racial differences in police shootings: “Partitioning the data in myriad ways, we find no evidence of racial discrimination in officer-involved shootings. Investigating the intensive margin – the timing of shootings or how many bullets were discharged in the endeavor – there are no detectable racial differences.”

Researchers continue to refine analytical procedures in order to make more accurate estimates based on police reports and other data. 

Characteristics of suspects

A widely publicized report in October 2014 by ProPublica concluded that young black males are 21 times more likely to be shot by police than their white counterparts: “The 1,217 deadly police shootings from 2010 to 2012 captured in the federal data show that blacks, age 15 to 19, were killed at a rate of 31.17 per million, while just 1.47 per million white males in that age range died at the hands of police.”

Research has definitively established that “racial profiling” by law enforcement exists — that persons of color are more likely to be stopped by police. FBI Director James Comey’s 2015 comments are again relevant here:

[P]olice officers on patrol in our nation’s cities often work in environments where a hugely disproportionate percentage of street crime is committed by young men of color. Something happens to people of good will working in that environment. After years of police work, officers often can’t help but be influenced by the cynicism they feel.

A mental shortcut becomes almost irresistible and maybe even rational by some lights. The two young black men on one side of the street look like so many others the officer has locked up. Two white men on the other side of the street—even in the same clothes—do not. The officer does not make the same association about the two white guys, whether that officer is white or black. And that drives different behavior. The officer turns toward one side of the street and not the other. We need to come to grips with the fact that this behavior complicates the relationship between police and the communities they serve.

While the cases of Rodney King in 1991 and Amadou Diallo in 1999 heightened the country’s awareness of race and policing, research has not uniformly corroborated the contention that minorities are more likely, on average, to be subject to acts of police force than are whites. A 2010 paper published in the Southwestern Journal of Criminal Justice reviewed more than a decade’s worth of peer-reviewed studies and found that while many studies established a correlation between minority status and police use of force, many other studies did not — and some showed mixed results.

Of note in this research literature is a 2003 paper, “Neighborhood Context and Police Use of Force,” that suggests police are more likely to employ force in higher-crime neighborhoods generally, complicating any easy interpretation of race as the decisive factor in explaining police forcefulness. The researchers, William Terrill of Northeastern University and Michael D. Reisig of Michigan State University, found that “officers are significantly more likely to use higher levels of force when encountering criminal suspects in high crime areas and neighborhoods with high levels of concentrated disadvantage independent of suspect behavior and other statistical controls.” Terrill and Reisig explore several hypothetical explanations and ultimately conclude:

Embedded within each of these potential explanations is the influence of key sociodemographic variables such as race, class, gender, and age. As the results show, when these factors are considered at the encounter level, they are significant. However, the race (i.e., minority) effect is mediated by neighborhood context. Perhaps officers do not simply label minority suspects according to what Skolnick (1994) termed “symbolic assailants,” as much as they label distressed socioeconomic neighborhoods as potential sources of conflict.

In studying the Seattle and Miami police departments, the authors of the 2010 National Institute of Justice report also conclude that “non-white suspects were less likely to be injured than white suspects … where suspect race was available as a variable for analysis. Although we cannot speculate as to the cause of this finding, or whether it is merely spurious, it is encouraging that minority suspects were not more likely to be injured than whites.”

Use of Tasers and other “less lethal” weapons

A 2011 report from the National Institute of Justice, “Police Use of Force, Tasers and Other Less-Lethal Weapons,” examines the effectiveness and health outcomes of incidents involving CEDs (conducted energy devices), the most common of which is the Taser. The report finds that: (1) Injury rates vary widely when officers use force in general, ranging from 17 percent to 64 percent for citizens and 10 percent to 20 percent for officers; (2) Use of Tasers and other CEDs can reduce the statistical rate of injury to suspects and officers who might otherwise be involved in more direct, physical conflict — an analysis of 12 agencies and more than 24,000 use-of-force cases “showed the odds of suspect injury decreased by almost 60 percent when a CED was used”; and (3) A review of fatal Taser incidents found that many involved multiple uses of the device against the suspect in question.

A 2011 study, “Changes in Officer Use of Force Over Time: A Descriptive Analysis of a National Survey,” documents trends in the use of non-lethal force by law enforcement agencies (LEAs). The results indicate that CED use has risen significantly (to about 70 percent of LEAs), while baton use is down to 25 percent in 2008. “CED use was ranked among the most-used tactics from 2005 to 2008,” the scholars conclude. “Excessive-force complaints against LEAs, internally generated, have more than doubled from 2003 to 2008. Officer injuries varied little from 2003 to 2008, but they are still only about half as common as suspect injuries. Also, only 20 percent of LEAs collect injury data in a database, complicating future research.”

Meanwhile, a 2018 study published in the Security Journal, “Smart Use of Smart Weapons: Jail Officer Liability for the Inappropriate Use of Tasers and Stun Guns on Pretrial Detainees,” offers insights on how some these weapons are used on individuals who are incarcerated while awaiting trial. The paper demonstrates that although the U.S. Supreme Court has ruled it inappropriate to use tasers on pretrial detainees who do not follow verbal commands, the practice is common. It suggests that correctional officers should be reminded “of the distinction between convicted inmates and pretrial detainees who cannot be punished, so that they can abide by the recent Supreme Court decision and avoid liability in the future.”

Potential impact of body cameras

Video recordings of interactions between the police and the public have increased significantly in recent years as technology has improved and the number of distribution channels has expanded. Any standard smartphone can now make a video — as was the case in the Walter L. Scott shooting — and dash-mounted cameras in police cars have become increasingly common.

Mandatory adoption of body cameras by police has been suggested as a way to increase transparency in interactions between law-enforcement officials and the public. A 2014 study from the U.S. Department of Justice, “Police Officer Body-Worn Cameras: Assessing the Evidence,” reviews available research on the costs and benefits of body-worn camera technology. The author, Michael D. White of Arizona State University, identifies five empirical studies on body cameras and assesses their conclusions. In particular, a year after the Rialto, Calif., police department began requiring all officers to wear body cameras, use of force by officers fell by 60 percent and citizen complaints dropped by nearly 90 percent. The researcher notes:

The decline in complaints and use of force may be tied to improved citizen behavior, improved police officer behavior, or a combination of the two. It may also be due to changes in citizen complaint reporting patterns (rather than a civilizing effect), as there is evidence that citizens are less likely to file frivolous complaints against officers wearing cameras. Available research cannot disentangle these effects; thus, more research is needed.

The studies also noted concerns about the cost of the required devices, training and systems for storing video footage; potential health and safety effects; and especially privacy concerns, both for citizens and the police. In April 2015, the Michigan Legislature was considering a bill that would exempt some body-camera footage from the state’s Freedom of Information (FOI) laws. Those who spoke in favor of the bill included a conservative Republican legislator and an ACLU representative.

Public opinion and media

Researchers have studied how the mass media cover such incidents, and some have concluded that the press has often distorted these events and helped justify questionable uses of force. Finally, survey data continue to confirm the existence of undercurrents of racism and bias in America, despite demonstrable social progress; a 2014 Stanford study shows how awareness of higher levels of black incarceration can prompt greater support among whites for tougher policing and prison programs.

 

Keywords: crime, local reporting, racism, violence, police enforcement, police brutality, body cameras, technology, policing

The post Excessive or reasonable force by police? Research on law enforcement and racial conflict appeared first on The Journalist's Resource.

]]>
Research on what “global warming” and “climate change” mean, and when to use the terms https://journalistsresource.org/environment/research-global-warming-meaning-use-terms/ Sun, 24 Jan 2016 17:15:12 +0000 http://live-journalists-resource.pantheonsite.io/?p=39194 2014 study from Yale University and George Mason University based on two nationally representative surveys on how Americans react to the use of the two terms.

The post Research on what “global warming” and “climate change” mean, and when to use the terms appeared first on The Journalist's Resource.

]]>

The growing threat of rising levels of greenhouse gases has been in the news for a good 30 years now, and a range of terms have been used to describe the consequences of inaction: “climate change,” “global warming,” “climate disruption,” and more. The scientific community tends to use “climate change” in peer-reviewed literature, and many large international organizations, first and foremost the United Nations’ Intergovernmental Panel on Climate Change, tie their identity to that term. Journalists often use the two terms interchangeably, the implicit assumption being that readers understand them to mean the same thing and that they have the same connotations.

A May 2014 study reveals that people can react to them differently, however, with significant implications for science journalism and communications. The study, “What’s in a Name? Global Warming Versus Climate Change,” is based on two nationally representative surveys, one conducted by the Yale Project on Climate Change and the other by the George Mason University Center for Climate Change Communication. (The research team consisted of Anthony Leiserowitz, Geoff Feinberg and Seth Rosenthal of Yale; Nicholas Smith of University College London; Ashley Anderson of Colorado State University; and Connie Roser-Renouf and Edward Maibach of George Mason University.)

In their introduction, the authors note that despite being used widely to describe the same set of phenomena, “climate change” and “global warming” are different:

Global warming refers to the increase in the Earth’s average surface temperature since the Industrial Revolution, primarily due to the emission of greenhouse gases from the burning of fossil fuels and land use change, whereas climate change refers to the long-term change of the Earth’s climate including changes in temperature, precipitation and wind patterns over a period of several decades or longer.

Use of the terms has gone back and forth, the researchers indicate, sometimes for ideological reasons: In 2001 President George W. Bush used “global warming” in several environmental addresses, but his administration was advised by Republican strategist Frank Luntz to employ “climate change” instead: “While ‘global warming’ has catastrophic connotations attached to it, ‘climate change’ suggests a more controllable and less emotional challenge,” Luntz wrote. The White House followed his advice, and began to use “climate change” consistently. Internet data indicate that the public searches for the term “global warming” far more frequently, but that searches decreased significantly with the onset of the economic crisis.

To better understand how people react to the two terms and to explore their use, the researchers conducted two surveys in late 2013 and early 2014. The first asked 1,027 respondents about their familiarity with the terms, how often they heard them and which of the two they used more often. The second survey asked a nationally representative sample of 1,657 people a series of questions about the phenomena. For half of those surveyed, the questions used the term “global warming” and the other half, “climate change.” Respondents were classified by age (Generation Y, 18-30; Generation X, 31-48; Baby Boom, 49-67; and Greatest Generation, 68 or older) as well as gender, race, political views, and religious convictions.

The surveys’ results included:

  • Based on the survey data, Americans are equally familiar with the two terms, but are four times more likely to report that they heard “global warming” in the public discourse. They are also twice as likely to say that they use the term “global warming” rather than “climate change” in conversations.
  • For Republicans, the terms “global warming” and “climate change” are regarded as synonyms — neither is more engaging than the other, though “global warming” did result in stronger perceptions of personal or familial threats.
  • Use of “climate change” appears to reduce engagement on the issue for a range of subgroups across age, political and gender lines. These included Democrats, independents, liberals and moderates; men, women and minorities; and different generations.
  • In some cases, the difference in people’s reactions could be significant: “African-Americans (+20 percentage points) and Hispanics (+22) are much more likely to rate global warming as a ‘very bad thing’ than climate change. Generation X (+21) and liberals (+19) are much more likely to be certain global warming is happening. African-Americans (+22) and Hispanics (+30) are much more likely to perceive global warming as a personal threat, or that it will harm their own family (+19 and +31, respectively). Hispanics (+28) are much more likely to say global warming is already harming people in the United States right now. And Generation X (+19) is more likely to be willing to join a campaign to convince elected officials to take action to reduce global warming than climate change.”
  • Use of “global warming” caused more intense worry about the issue, particularly among men, Generation Y, Generation X, Democrats, liberals and moderates. For men, Generation X, and liberals, use of “global warming” produced greater certainty that it was happening; for independents, greater understanding that human activities are the primary cause; and for independents and liberals, a greater understanding of the scientific consensus.
  • “Global warming” was also associated with events such as melting glaciers, world catastrophe and other extreme phenomena. “Climate change” was associated more with general weather patterns.

“These diverse results strongly suggest that global warming and climate change are used differently and mean different things in the minds of many Americans,” the researchers conclude. “Scientists often prefer the term ‘climate change’ for technical reasons, but should be aware that the two terms generate different interpretations among the general public and specific subgroups.”
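
For journalists who want to see the mechanics behind these split-sample comparisons, the sketch below works through a hypothetical version: half of a sample is asked about “global warming,” the other half about “climate change,” and the gap is reported in percentage points alongside a simple two-proportion test. The counts are invented for illustration; the study’s own analysis also involved survey weighting and subgroup breakdowns.

```python
# Minimal sketch of a split-sample comparison with hypothetical counts.
from math import sqrt

def split_sample_gap(yes_a, n_a, yes_b, n_b):
    """Return the percentage-point gap between two groups and a two-proportion z statistic."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) * 100, (p_a - p_b) / se

# Hypothetical: 60% of the "global warming" half vs. 40% of the "climate change"
# half call the problem a personal threat.
gap, z = split_sample_gap(yes_a=497, n_a=828, yes_b=331, n_b=829)
print(f"Gap: {gap:.0f} percentage points (z = {z:.1f})")
```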

Related research: The National Oceanic and Atmospheric Administration offers a number of reports related to climate change, including one titled “What’s the Difference Between Global Warming and Climate Change?” that includes several data graphics. The Donald W. Reynolds Journalism Institute at the University of Missouri offers a guide to covering and defining what it calls “climate adaptation.”

 

Keywords: climate change, global warming, greenhouse gases, fossil fuels, science communication, climate politics, adaptation

The post Research on what “global warming” and “climate change” mean, and when to use the terms appeared first on The Journalist's Resource.

]]>
Trends in the frequency and intensity of floods across the central U.S. https://journalistsresource.org/environment/frequency-intensity-floods-central-united-states/ Wed, 30 Dec 2015 00:47:01 +0000 http://live-journalists-resource.pantheonsite.io/?p=43670 2015 study from the University of Iowa published in Nature Climate Change examining long-term trends in the frequency and intensity of flood events in the central United States.

The post Trends in the frequency and intensity of floods across the central U.S. appeared first on The Journalist's Resource.

]]>

In North America the winter of 2014-2015 was one for the record books — in particular, 108.6 inches of snow fell in Boston, breaking a longstanding record. But melting snow can bring rising waters, as indicated by a March 2015 report from the National Oceanic and Atmospheric Administration (NOAA), which highlighted the approaching flood risks for New England and Upstate New York as well as for southern Missouri, Illinois and Indiana.

The role of human-induced climate change in the increasing frequency and severity of extreme weather events is well established. The 2014 Vermont Climate Assessment found that since 1960 average temperatures in that state have climbed 1.3 degrees and annual precipitation has risen by nearly 6 inches, with the majority of the increase occurring after 1990. The 2011 Vermont floods and those in North Dakota in 2009 show how devastating such events can be. The projected rise in sea levels will also create profound challenges for coastal communities large and small.

According to a 2013 report from the Congressional Research Service (CRS), just 18% of Americans living in flood zones have the required insurance — not comforting news for them or for federal and state governments, charged with providing material and financial disaster relief after the fact. Data from the Federal Emergency Management Agency (FEMA) indicates that between 2006 and 2010, the average flood claim was nearly $34,000, and large events can impose substantially higher costs. According to the CRS, the National Flood Insurance Program (NFIP) paid out between $12 billion and $15 billion after Hurricane Sandy — more than triple the $4 billion in cash and borrowing authority it initially had. FEMA announced in March 2015 that, starting in 2016, states seeking disaster-preparedness funds must have plans in place to mitigate the effects of climate change or risk losing that funding.

A 2015 study published in Nature Climate Change, “The Changing Nature of Flooding across the Central United States,” examines long-term trends in the frequency and intensity of flood events. The researchers, Iman Mallakpour and Gabriele Villarini of the University of Iowa, used data from 774 stream gauges over the period 1962 to 2011. Gauges had at least 50 years of data with no gaps of more than two continuous years. The 14 states examined were Illinois, Indiana, Iowa, Kansas, Kentucky, Michigan, Minnesota, Missouri, Nebraska, North and South Dakota, Ohio, West Virginia and Wisconsin.

The study’s key findings include:

  • In the Central United States (CUS) the frequency of flooding events has been increasing, while the magnitude of historic events has been decreasing. Overall, 34% of the stations (264) showed an increasing trend in the number of flood events, 9% (66) a decrease, and 57% (444) no significant change.

[Figure: Central U.S. flooding intensity and frequency (Nature Climate Change)]

  • The largest proportion of flood events occurred in the spring and summer; 6% of the stations (46) showed increasing trends in the spring and 30% (227) in the summer. “Most of the flood peaks in the northern part of the CUS tend to occur in the spring and are associated with snow melt, rain falling on frozen ground and rain-on-snow events.”
  • Increased flood frequency was concentrated from North Dakota down to Iowa and Missouri, and east to Illinois, Indiana and Ohio. Areas with decreasing flood frequencies were to the southwest (Kansas and Nebraska) and to the northeast (northern Minnesota, Wisconsin and Michigan).
  • “Trends of rising temperature yield an increase in available energy for snow melting, and the observed trends in increasing flood frequency over the Dakotas, Minnesota, Iowa and Wisconsin can, consequently, be related to both increasing temperature and rainfall.”

The researchers note that “a direct attribution of these changes in discharge, precipitation and temperature to human impacts on climate represents a much more complex problem that is very challenging to address using only observational records.” At the same time, “changes in flood behavior along rivers across the CUS can be largely attributed to concomitant changes in rainfall and temperature, with changes in the land surface potentially amplifying this signal.”
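
As a rough illustration of the station-level trend counts reported above, the sketch below applies a common nonparametric trend test (Kendall’s tau, the core of the Mann-Kendall test) to a synthetic 50-year series of annual flood counts at a single gauge. It is an illustration under stated assumptions only; the paper’s actual flood definitions and statistical treatment are more involved.

```python
# Minimal sketch: trend test on synthetic annual flood counts at one stream gauge.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
years = np.arange(1962, 2012)                         # 50 years of record, as in the study period
# Synthetic gauge: flood events per year with a gentle upward drift
counts = rng.poisson(lam=1.0 + 0.02 * (years - 1962))

tau, p_value = kendalltau(years, counts)
trend = "increasing" if tau > 0 else "decreasing"
print(f"Kendall's tau = {tau:.2f} ({trend}), p = {p_value:.3f}")
```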

Related research: A 2011 metastudy from the Institute for Environmental Studies at Vrije Universiteit in the Netherlands, “Have Disaster Losses Increased Due to Anthropogenic Climate Change?” analyzes the results of 22 peer-reviewed studies on economic losses from weather disasters and the potential connection to human-caused global warming. It found that while economic losses from weather-related natural hazards — including storms, cyclones, floods and wildfires — have increased around the globe, the exposure of costly assets is by far the most important driver.

 

Keywords: global warming, climate change, flooding, snowfall, rain, precipitation, disasters, water

The post Trends in the frequency and intensity of floods across the central U.S. appeared first on The Journalist's Resource.

]]>
The impact of big-box retailers on communities, jobs, crime, wages and more: Research roundup https://journalistsresource.org/politics-and-government/impact-big-box-retailers-employment-wages-crime-health/ Wed, 16 Dec 2015 06:58:20 +0000 http://live-journalists-resource.pantheonsite.io/?p=38473 2015 updated selection of research on the wide range of impacts of retail chain stores, including on small local business, county-level employment, and more.

The post The impact of big-box retailers on communities, jobs, crime, wages and more: Research roundup appeared first on The Journalist's Resource.

]]>

Year zero in the history of U.S. big-box stores was 1962: In that one year, the first Walmart, Target and Kmart stores opened. While the firms’ origins varied, their common focus was on deep discounts and suburban locations. Shoppers would arrive by car, not foot, so what mattered was highway access, acres of parking and massive scale.

In the five decades since, the American retail landscape and built environment have been profoundly altered. At the end of 2015, Wal-Mart had 4,614 stores and Supercenters in the United States, while Target operated 1,805 stores and Best Buy had 1,050. Then there are smaller chains — still huge by any measure — as well as “category killers” and all the diverse residents of the shopping-mall ecosystem. While some big-box retailers have stumbled in recent years, the rise of Internet commerce and the increasing appeal of cities has helped them remain a powerful force: Wal-Mart alone is estimated to employ approximately 1 percent of the American workforce and reported nearly $486 billion in revenue for fiscal year 2015.

All that retail and economic muscle hasn’t come without significant controversy. A 2008 study from the Massachusetts Institute of Technology indicates that Wal-Mart’s rapid expansion in the 1980s and 1990s was responsible for 40 percent to 50 percent of the decline in the number of small discount stores. According to 2013 research in Social Science Quarterly, a similar effect continues: On average, within 15 months of a new Wal-Mart store’s opening, as many as 14 existing retail establishments close. Other research has found that the arrival of Wal-Mart stores was associated with increased obesity among area residents, higher crime rates relative to comparable communities without such stores, lower overall employment at the county level, and lower per-acre tax revenues than mixed-use development.

Despite such well-documented effects, big-box retailers are often courted by cities and regions, as suggested by a 2014 paper from the Harvard Kennedy School. A 2011 report by a Missouri metropolitan planning organization found that over 20 years, more than $5.8 billion had been given to private developers in the St. Louis region, with a substantial portion going to retail-oriented projects. And because big-box stores dominate the malls in which they operate, subsidies continue long after opening day: A study of more than 2,500 stores found that 73 percent of mall anchors paid no rent. Instead, mall owners use their presence to attract smaller retailers that pay elevated rates in the hope of benefiting from the big stores. Some research suggests that small retailers in such malls indeed see more patrons, and municipalities that do attract big box stores can see increased tax revenue, although there may be revenue lost when smaller businesses fail.

Still, a 2014 study from researchers at Stanford and the University of Michigan finds positive effects for wages, relative to pay levels traditionally available through small stores and firms: Indeed, the “spread of these chains has been accompanied by higher wages. Large chains and large establishments pay considerably more than small mom-and-pop establishments. Moreover, large firms and large establishments give access to managerial ranks and hierarchy, and managers, most of whom are first-line supervisors, are a large fraction of the retail labor force, and earn about 20 percent more than other workers.”

Given the outsized role that chain retailers play in the U.S. economy, media coverage often focuses on business issues, such as the wave of closures hitting J.C. Penney and other firms, or the rise of “small-box” urban stores. Walmart’s broader operations also get a lot of attention, including its pledge to sell more U.S.-made goods and an investigation into its use of bribery in Mexico. Workplace issues are also important, such as why openings at the company often attract hundreds of applicants and the company’s February 2015 announcement that it would raise the entry wage to at least $10 an hour by February 2016.

For state and local reporters, particularly those on a municipal beat, the challenge comes in understanding the positive and negative effects that the potential arrival or departure of a big-box retailer can have. For example, if politicians propose tax-increment financing or other tax-based incentives for a retail project, is that an appropriate use of public funds? What are the potential effects — long and short term — on other retailers and employers in the area? Could an expansion of low-wage jobs increase use of taxpayer-funded assistance programs?
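
Tax-increment financing, one of the incentives mentioned above, is easier to scrutinize with the basic arithmetic in hand: the district’s assessed value is frozen at a base level, and property tax revenue on growth above that base (the “increment”) is diverted to repay project costs for a set period. The sketch below uses entirely hypothetical figures; the definition of the increment, the diversion period and which taxing bodies share in it vary by state.

```python
# Minimal sketch of tax-increment financing (TIF) arithmetic. All figures are hypothetical.

base_assessed_value = 10_000_000   # district's assessed value when the TIF is created
new_assessed_value  = 35_000_000   # assessed value after the big-box project is built
tax_rate            = 0.02         # combined local property tax rate (2%)

base_revenue      = base_assessed_value * tax_rate                         # keeps flowing to schools, city, county
increment_revenue = (new_assessed_value - base_assessed_value) * tax_rate  # diverted to repay project costs

print(f"Revenue on the frozen base: ${base_revenue:,.0f} per year")
print(f"Increment diverted to the project: ${increment_revenue:,.0f} per year")
```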

Below are a series of studies that shed light on the effects of big-box retailers on other businesses, employment, wages, crime and health. Beat reporters also can find industry statistics and related resources from organizations such as the National Retail Federation, Retail Industry Leaders Association and the International Council of Shopping Centers.

———————-

“The Evolution of National Retail Chains: How We Got Here”
Foster, Lucia; Haltiwanger, John; Klimek, Shawn D.; Krizan, C.J.; Ohlmacher, Scott. U.S. Census Bureau Center for Economic Studies paper, March 2015.

Abstract: “The growth and dominance of large, national chains is a ubiquitous feature of the U.S. retail sector. The recent literature has documented the rise of these chains and the contribution of this structural change to productivity growth in the retail trade sector. Recent studies have also shown that the establishments of large, national chains are both more productive and more stable than the establishments of single-unit firms they are displacing. We build on this literature by following the paths of retail firms and establishments from 1977 to 2007 using establishment- and firm-level data from the Census of Retail Trade and the Longitudinal Business Database. We dissect the shift towards large, national chains on several margins. We explore the differences in entry and exit as well as job creation and destruction patterns at the establishment and firm level. We find that over this period there are consistently high rates of entry and job creation by the establishments of single-unit firms and large, national firms, but net growth is much higher for the large, national firms. Underlying this difference is far lower exit and job destruction rates of establishments from national chains. Thus, the story of the increased dominance of national chains is not so much due to a declining entry rate of new single-unit firms but rather the much greater stability of the new establishments belonging to national chains relative to their single-unit counterparts. Given the increasing dominant role of these chains, we dissect the paths to success of national chains, including an analysis of four key industries in retail trade. We find dramatically different patterns across industries. In General Merchandise, the rise in national chains is dominated by slow but gradual growth of firms into national chain status. In contrast, in Apparel, which has become much more dominated by national chains in recent years, firms that quickly became national chains play a much greater role.”

 

“Do Large Modern Retailers Pay Premium Wages?”

Cardiff-Hicks, Brianna; Lafontaine, Francine; Shaw, Kathryn. NBER working paper, July 2014.

Abstract: “With malls, franchise strips and big-box retailers increasingly dotting the landscape, there is concern that middle-class jobs in manufacturing in the U.S. are being replaced by minimum wage jobs in retail. Retail jobs have spread, while manufacturing jobs have shrunk in number. In this paper, we characterize the wages that have accompanied the growth in retail. We show that wage rates in the retail sector rise markedly with firm size and with establishment size. These increases are halved when we control for worker fixed effects, suggesting that there is sorting of better workers into larger firms. Also, higher ability workers get promoted to the position of manager, which is associated with higher pay. We conclude that the growth in modern retail, characterized by larger chains of larger establishments with more levels of hierarchy, is raising wage rates relative to traditional mom-and-pop retail stores.”

 

“Business Churn and the Retail Giant: Establishment Birth and Death from Wal-Mart’s Entry”

Ficano, Carlena Cochi. Social Science Quarterly, March 2013, Vol. 94, Issue 1, 263-291. doi: 10.1111/j.1540-6237.2012.00857.x.

Abstract: “This analysis examines separately the establishment birth and death rate implications of Wal-Mart’s entry into a community as a means of reconciling the inconsistency between sizable documented poverty effects and more limited employment and payroll changes from Wal-Mart. I estimate instrumented county and year fixed effects regressions of county establishment birth rate and establishment death rate on the predicted number of Wal-Mart stores and years of operation in a county for the retail sector, the aggregate local economy, and the manufacturing sector, where the latter serves as a falsification test. I find that within 15 months of a new Wal-Mart store entry, between 4.4 and 14.2 existing retail establishments close while at most 3.5 new retail establishments open. The article provides new and strong evidence that, through its effect on establishment births and deaths, Wal-Mart’s expansion has had a larger impact on the employment situations of those working in retail than net employment and payroll numbers would indicate.”

 

“The Impact of an Urban WalMart Store on Area Businesses: The Chicago Case”
Merriman, David; Persky, Joseph; Davis, Julie; Baiman, Ron. Economic Development Quarterly, November 2012, Vol. 26, No. 4, 321-333. doi: 10.1177/0891242412457985.

Abstract: “This study, the first on the impact of a WalMart in a large city, draws on three annual surveys of enterprises within a four-mile radius of a new Chicago WalMart. It shows that the probability of going out of business was significantly higher for establishments close to that store. This probability fell off at a rate of 6% per mile in all directions. Using this relationship, we estimate that WalMart’s opening resulted in the loss of approximately 300 full-time equivalent jobs in nearby neighborhoods. This loss about equals WalMart’s own employment in the area. Our analysis of separate data on sales tax receipts shows that after its opening there was no net increase in retail sales in WalMart’s own and surrounding zip codes. Overall, these results support the contention that large-city WalMarts, like those in small towns, absorb retail sales from nearby stores without significantly expanding the market.”

 

“Shops and the City: Evidence on Local Externalities and Local Government Policy from Big Box Bankruptcies”
Shoag, Daniel; Stan Veuger. Harvard Kennedy School, Faculty Research Working Paper Series RWP14-019, April 2014.

Abstract: “Large retailers have significant positive spillovers on nearby businesses, and both private and public mechanisms exist to attract them. We estimate these externalities using detailed geographic establishment data and exogenous variation from national chain bankruptcies. We show that local government policy responds to the size of these spillovers. When political boundaries allow local governments to capture more of the gains from these large stores, governments are more likely to provide retail subsidies. However, these public incentives also crowd out private mechanisms that subsidize these stores and internalize their benefits. On net, we find no evidence that government subsidies affect the efficiency of these large retailers’ location choice as measured by the size of the externalities at a given distance, rather than within a certain border.”

 

“Rolling Back Prices and Raising Crime Rates? The Walmart Effect on Crime in the United States”
Wolfe, Scott E.; Pyrooz, David C. British Journal of Criminology, January 2014. doi: 10.1093/bjc/azt071.

Abstract: “Concentrating on the 1990s, results [of this study] reveal that Wal-Mart is located in United States counties with higher crime rates, net of robust macro-level correlates of crime. Wal-Mart selected into counties primed for the 1990s crime decline, but, after accounting for endogeneity, growth of the company stunted crime declines when compared to matched counties. A Wal-Mart–crime relationship exists. If Wal-Mart did not build in a county, property crime rates fell by an additional 18 units per capita from the 1990s to the 2000s. A marginally statistically significant, yet stable, effect for violent crime was also observed, falling by two units per capita. These findings provide important theoretical implications regarding the influence of specific economic forces on aggregate crime trends and offer important implications for local governments faced with the prospect of Wal-Mart entering their communities.”

 

“When Walmart Comes to Town: Always Low Housing Prices? Always?”
Pope, Devin G.; Pope, Jaren C. National Bureau of Economic Research, Working Paper No. 18111, May 2012.

Abstract: “Walmart often faces strong local opposition when trying to build a new store. Opponents often claim that Walmart lowers nearby housing prices. In this study we use over one million housing transactions located near 159 Walmarts that opened between 2000 and 2006 to test if the opening of a Walmart does indeed lower housing prices. Using a difference-in-differences specification, our estimates suggest that a new Walmart store actually increases housing prices by between 2 and 3 percent for houses located within 0.5 miles of the store and by 1 to 2 percent for houses located between 0.5 and 1 mile.”
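
For readers unfamiliar with the difference-in-differences design named in the abstract above, the toy calculation below shows the basic logic: compare the price change near a new store with the price change in a comparison area over the same period, so that market-wide trends cancel out. The numbers are hypothetical and are not the study’s estimates.

```python
# Minimal sketch of a difference-in-differences calculation with hypothetical averages.

# Average log home prices before and after a store opening (hypothetical)
near = {"before": 12.000, "after": 12.045}   # within 0.5 miles of the new store
far  = {"before": 12.100, "after": 12.120}   # comparison ring farther away

did = (near["after"] - near["before"]) - (far["after"] - far["before"])
print(f"Difference-in-differences estimate: {did:.3f} log points (~{did:.1%})")
```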

 

“Walmart and Other Food Retail Chains: Trends and Disparities in the Nutritional Profile of Packaged Food Purchases”
Taillie, Lindsey Smith; Ng, Shu Wen; Popkin, Barry M. American Journal of Preventive Medicine, October 2015. doi: 10.1016/j.amepre.2015.07.015.

Summary: This study looks at the nutritional profile of packaged food purchases (PFPs) from food retail chains. Little is known about the nutritional profile of PFPs from such retailers, which include Walmart, the nation’s largest grocer. The study finds that Walmart’s packaged food purchases in 2000 had a less “favorable” nutritional profile compared to those of other food retail chains. But Walmart purchases’ nutritional profile has improved and was similar to that of other food retail chains in 2013.

 

“Supersizing Supercenters? The Impact of Walmart Supercenters on Body Mass Index and Obesity”
Courtemanche, Charles; Carden, Art. Journal of Urban Economics, March 2011, Vol. 69, Issue 2, 165-181. doi: 10.1016/j.jue.2010.09.005.

Abstract: “Researchers have linked the rise in obesity to technological progress reducing the opportunity cost of food consumption and increasing the opportunity cost of physical activity. We examine this hypothesis in the context of Walmart Supercenters, whose advancements in retail logistics have translated to substantial reductions in the prices of food and other consumer goods. Using data from the Behavioral Risk Factor Surveillance System matched with Walmart Supercenter entry dates and locations, we examine the effects of Supercenters on body mass index (BMI) and obesity. We account for the endogeneity of Walmart Supercenter locations with an instrumental variables approach that exploits the unique geographical pattern of Supercenter expansion around Walmart’s headquarters in Bentonville, Arkansas. An additional Supercenter per 100,000 residents increases average BMI by 0.24 units and the obesity rate by 2.3 percentage points. These results imply that the proliferation of Walmart Supercenters explains 10.5% of the rise in obesity since the late 1980s, but the resulting increase in medical expenditures offsets only a small portion of consumers’ savings from shopping at Supercenters.”

 

“Does Local Firm Ownership Matter?”
Fleming, David A.; Goetz, Stephan J. Economic Development Quarterly, August 2011, Vol. 25, No. 3, 277-281. doi: 10.1177/0891242411407312.

Abstract: “A data set for U.S. counties that includes residence status of firm owners is used to assess whether per-capita density of locally owned businesses affects local economic growth, compared with nonlocal ownership. The database also permits stratification of firms across different employment size categories. Economic growth models that control for other relevant factors reveal a positive relationship between density of locally owned firms and per capita income growth but only for small (10-99 employees) firms, whereas the density of large (more than 500 workers) firms not owned locally has a negative effect. These results provide strong evidence that local ownership matters for economic growth but only in the small size category. Results are robust across rural and urban counties.”

 

“Mom-and-Pop Meet Big-Box: Complements or Substitutes?”
Haltiwanger, John C.; et al. National Bureau of Economic Research, September 2009, Working Paper No. 15348.

Findings: This study quantifies the effects of the entry and growth of multi-store retailers within the Washington, D.C., metropolitan area. Key findings include: Employment growth and survival of independent stores and smaller chains that operate in the same industry as a big-box chain are negatively affected by the entry and growth of big-box stores. Most of the negative effect is due to smaller stores being forced to close rather than reducing the scale of their operations. The negative effect is greatest for stores that are located within one to five miles of a big-box store. In terms of smaller chain stores in other sectors, the results are mixed.  The one positive big-box effect is on smaller chain restaurants.

 

“What Happens When Wal-Mart Comes to Town: An Empirical Analysis of the Discount Retailing Industry”
Jia, Panle. Econometrica, November 2008, Vol. 76, Issue 6, 1263-1316. doi: 10.3982/ECTA6649.

Abstract: “In the past few decades, multistore retailers, especially those with 100 or more stores, have experienced substantial growth. At the same time, there is widely reported public outcry over the impact of these chain stores on other retailers and local communities. This paper develops an empirical model to assess the impact of chain stores on other discount retailers and to quantify the size of the scale economies within a chain. The model has two key features. First, it allows for flexible competition patterns among all players. Second, for chains, it incorporates the scale economies that arise from operating multiple stores in nearby regions. In doing so, the model relaxes the commonly used assumption that entry in different markets is independent. The lattice theory is exploited to solve this complicated entry game among chains and other discount retailers in a large number of markets. It is found that the negative impact of Kmart’s presence on Wal-Mart’s profit was much stronger in 1988 than in 1997, while the opposite is true for the effect of Wal-Mart’s presence on Kmart’s profit. Having a chain store in a market makes roughly 50% of the discount stores unprofitable. Wal-Mart’s expansion from the late 1980s to the late 1990s explains about 40-50% of the net change in the number of small discount stores and 30-40% for all other discount stores. Scale economies were important for Wal-Mart, but less so for Kmart, and the magnitude did not grow proportionately with the chains’ sizes.”

 

“The Effects of Wal-Mart on Local Labor Markets”
Neumark, David; Zhang, Junfu; Ciccarella, Stephen. Journal of Urban Economics, March 2008, Vol. 63, Issue 2, 405-430. doi: 10.1016/j.jue.2007.07.004.

Abstract: “We estimate the effects of Wal-Mart stores on county-level retail employment and earnings, accounting for endogeneity of the location and timing of Wal-Mart openings that most likely biases the evidence against finding adverse effects of Wal-Mart stores…. The employment results indicate that a Wal-Mart store opening reduces county-level retail employment by about 150 workers, implying that each Wal-Mart worker replaces approximately 1.4 retail workers. This represents a 2.7% reduction in average retail employment. The payroll results indicate that Wal-Mart store openings lead to declines in county-level retail earnings of about $1.4 million, or 1.5%. Of course, these effects occurred against a backdrop of rising retail employment, and only imply lower retail employment growth than would have occurred absent the effects of Wal-Mart.”

 

Keywords: consumer affairs, development, crime, obesity, research roundup, retail, prices, economic impact, brand performance, minimum wage, Supercenter

The post The impact of big-box retailers on communities, jobs, crime, wages and more: Research roundup appeared first on The Journalist's Resource.

]]>
The health effects and costs of air pollution: Research roundup https://journalistsresource.org/economics/health-effects-costs-air-pollution-research-roundup/ Mon, 07 Dec 2015 20:48:49 +0000 http://live-journalists-resource.pantheonsite.io/?p=29915 2015 review of scholarship on impact of air pollution in the United States, including health effects, economic costs and automotive and transportation causes.

The post The health effects and costs of air pollution: Research roundup appeared first on The Journalist's Resource.

]]>

If you’d like to start a lively discussion at a politically diverse gathering, a good way is to bring up the potential benefits and costs of environmental regulations. The current political divide in the United States pretty much guarantees that soon you’ll be hearing supporters defending “common-sense laws” on one side while detractors denounce “job-killing taxes” on the other.

It’s hard to remember, but environmental regulations weren’t always an invitation to angry debate and Congressional gridlock. The Clean Air Act of 1970 was signed into law by President Richard Nixon and gave the Environmental Protection Agency (created under the same administration) the authority to regulate emissions of hazardous air pollutants from fixed and mobile sources. Two decades later President George H.W. Bush spearheaded the expansion of the Clean Air Act, giving the EPA authority to tackle ozone depletion and establishing a market-based approach to reduce emissions of sulfur dioxide, a contributor to acid rain. The 1990 amendments also created incentives to promote the use of natural gas and the development of biofuels.

According to the EPA, these laws have produced significant progress across a wide range of pollutants: Between 1990 and 2002, emissions of carbon monoxide, volatile organic compounds, particulate matter, sulfur dioxide and nitrogen oxides dropped by 17% to 44%, and mercury emissions dropped 52%. Between 1970 and 2002, lead emissions were cut by 99%. Emissions across all 188 air toxics also declined between 1990 and 2002.

Addressing newer threats such as climate change has proved more elusive, however, in part because of increasing partisan polarization and in part because of changes in the environmental movement. The 2007 Climate Security Act, which would have established a carbon cap-and-trade program, never made it out of Congress. But in October 2015, the EPA strengthened the National Ambient Air Quality Standards for ground-level ozone to 70 parts per billion, a change the federal agency considered especially important for children and people with asthma. The EPA also aims to reduce air pollution from passenger cars and trucks by setting new vehicle emissions standards and lowering the sulfur content of gasoline beginning in 2017. Despite significant human costs — a 2013 study calculated that approximately 200,000 early deaths occur every year in the United States because of air pollution — every change is difficult and the fate of such regulations is often decided in the courts.

Below is a selection of studies on a range of issues related to air pollution. It has sections on the health effects, economic costs and automotive causes of air pollution. For journalists who write about pollution regularly, the EPA has compiled a collection of online information, including glossaries, about specific air pollutants, including asbestos, lead and chlorofluorocarbons.

—————————-

Health effects

The Contribution of Outdoor Air Pollution Sources to Premature Mortality on a Global Scale
Lelieveld, J; Evans, J.S.; Fnais, M.; Giannadaki, D.; Pozzer, A. Nature, September 2015, Vol. 525. doi: 10.1038/nature15371.

Summary: Researchers investigated the link between premature death and seven sources of air pollution in urban and rural environments. They estimate that outdoor air pollution led to 3.3 million premature deaths worldwide in 2010, mostly in Asia. Model projections suggest that premature deaths caused by outdoor air pollution could double by 2050. The researchers predict “moderate though significant increases of premature mortality will occur in Europe and the Americas, to a large degree in urban areas.”

 

“Effect of Air Pollution Control on Life Expectancy in the United States: An Analysis of 545 U.S. Counties for the Period from 2000 to 2007”
Correia, Andrew W.; et al. Epidemiology, January 2013, Vol. 24, Issue 1, 23-31. doi: 10.1097/EDE.0b013e3182770237

Abstract:  “In recent years (2000-2007), ambient levels of fine particulate matter (PM2.5) have continued to decline as a result of interventions, but the decline has been at a slower rate than previous years (1980-2000). Whether these more recent and slower declines of PM2.5 levels continue to improve life expectancy and whether they benefit all populations equally is unknown. Methods: We assembled a data set for 545 U.S. counties consisting of yearly county-specific average PM2.5, yearly county-specific life expectancy, and several potentially confounding variables measuring socioeconomic status, smoking prevalence, and demographic characteristics for the years 2000 and 2007. We used regression models to estimate the association between reductions in PM2.5 and changes in life expectancy for the period from 2000 to 2007. Results: A decrease of 10 μg/m3 in the concentration of PM2.5 was associated with an increase in mean life expectancy of 0.35 years (SD = 0.16 years, P = 0.033). This association was stronger in more urban and densely populated counties. Conclusions: Reductions in PM2.5 were associated with improvements in life expectancy for the period from 2000 to 2007. Air pollution control in the last decade has continued to have a positive impact on public health.”
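
To make the headline association concrete (0.35 additional years of life expectancy per 10 μg/m3 reduction in PM2.5), the sketch below fits a simple slope to synthetic county-level data calibrated to that figure. It is an illustration only; the study’s regression models also adjust for socioeconomic status, smoking prevalence and demographic characteristics.

```python
# Minimal sketch: slope of life-expectancy change on PM2.5 change across synthetic counties.
import numpy as np

rng = np.random.default_rng(1)
n_counties = 545
delta_pm25 = rng.normal(loc=-3.0, scale=1.5, size=n_counties)   # change in PM2.5 (ug/m3), 2000-2007
true_slope = -0.035                                              # calibrated to 0.35 years per 10 ug/m3
delta_life = true_slope * delta_pm25 + rng.normal(scale=0.2, size=n_counties)  # change in life expectancy (years)

slope, intercept = np.polyfit(delta_pm25, delta_life, deg=1)
print(f"Estimated gain: {-10 * slope:.2f} years of life expectancy per 10 ug/m3 reduction")
```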

 

Ambient Air Pollution in China Poses a Multifaceted Health Threat to Outdoor Physical Activity
Li, Fuzhong; Liu, Yu; Lü, Jiaojiao; Liang, Leichao; Harmer, Peter.  Journal of Epidemiology & Community Health, 2015. doi: 10.1136/jech-2014-203892.

Introduction: “While outdoor physical activity has been shown to promote health and wellbeing, exercising in environments with high levels of air pollution can increase the risk of health problems ranging from asthma attacks to heart or lung pathologies. The interaction of these two phenomena is of specific significance in China, where outdoor physical activity has been a traditional practice but where rapid industrialization has led to major degradation of the environment. This situation raises the specter of an emergent major public health crisis in the most populous country in the world.”

 

“Prenatal Polycyclic Aromatic Hydrocarbon (PAH) Exposure and Child Behavior at Age 6-7”
Perera, F.P.; Tang, D.; Wang, S; Vishnevetsky, J.; Zhang, B.; Diaz, D.; Camann, D.; Rauh, V. Environmental Health Perspectives, March 2012. doi: 10.1289/ehp.1104315.

Abstract: “Airborne polycyclic aromatic hydrocarbons (PAH) are widespread urban air pollutants from fossil fuel burning and other combustion sources. Methods: Children of nonsmoking African-American and Dominican women in New York City were followed from in utero to 6-7 years. Prenatal PAH exposure was estimated by personal air monitoring of the mothers during pregnancy as well as … in maternal and cord blood…. Results: High prenatal PAH exposure, whether characterized by personal air monitoring (greater than the median of 2.27 ng/m³) or maternal and cord adducts (detectable or higher), was positively associated with symptoms of Anxious/Depressed and Attention Problems.”

 

“Persistent Environmental Pollutants and Couple Fecundity”
Buck Louis, Germaine M.; et al. Environmental Health Perspectives, November 2012. doi: 10.1289/ehp.1205301.

Abstract: “Evidence suggesting that persistent environmental pollutants may be reproductive toxicants underscores the need for prospective studies of couples for whom exposures are measured.
Methods: A cohort of 501 couples who discontinued contraception to become pregnant was prospectively followed for 12 months of trying to conceive or until a human chorionic gonadotrophin (hCG) test confirmed pregnancy. Couples completed daily journals on lifestyle and provided biospecimens for the quantification of 9 organochlorine pesticides, 1 polybrominated biphenyl, 10 polybrominated diphenyl ethers, 36 polychlorinated biphenyls (PCBs), and 7 perfluorochemicals (PFCs) in serum. Conclusions: We observed that a subset of persistent environmental chemicals were associated with reduced fecundity.”

 

“Multiple Environmental Chemical Exposures to Lead, Mercury and PCBs Among Childbearing-Aged Women”
Thompson, Marcella Remer; Boekelheide, Kim. Environmental Research, February 2013, doi: 10.1016/j.envres.2012.10.005.

Findings: The researchers based their analysis on the National Health and Nutrition Examination Survey (1999-2004). In all, 3,173 women were included in the study. More than 20% of the women examined had levels of all three xenobiotics (lead, mercury and PCBs) at or above the median. “Among the 33% of women who had two xenobiotic levels at or above the median, it was as likely to be PCBs-lead (36%), mercury-lead (34%) or mercury-PCBs (29%). Among the 27% of women having only one xenobiotic level at or above the median, it was as likely to be lead (43%), mercury (36%) or PCBs (21%).”

 

Costs

“The Benefits and Costs of the Clean Air Act from 1990 to 2020”
U.S. Environmental Protection Agency, Office of Air and Radiation, March 2011

Findings: “Based on the scenarios analyzed in this study, the costs of public and private efforts to meet 1990 Clean Air Act Amendment requirements rise throughout the 1990 to 2020 period of the study, and are expected to reach an annual value of about $65 billion by 2020…. Though costly, these efforts are projected to yield substantial air quality improvements which lead to significant reductions in air pollution-related premature death and illness, improved economic welfare of Americans, and better environmental conditions. The economic value of these improvements is estimated to reach almost $2 trillion for the year 2020, a value which vastly exceeds the cost of efforts to comply with the requirements of the 1990 Clean Air Act Amendments.”

 

The Economics of Household Air Pollution
Jeuland, Marc; Pattanayak, Subhrendu K.; Bluffstone, Randall. Annual Review of Resource Economics, October 2015, Vol. 7. doi: 10.1146/annurev-resource-100814-125048.

Abstract: “Traditional energy technologies and consumer products contribute to household well-being in diverse ways but also often harm household air quality. We review the problem of household air pollution at a global scale, focusing particularly on the harmful effects of traditional cooking and heating. Drawing on the theory of household production, we illustrate the ambiguous relationship between household well-being and adoption of behaviors and technologies that reduce air pollution. We then review how the theory relates to the seemingly contradictory findings emerging from the literature on developing country household demand for clean fuels and stoves. In conclusion, we describe an economics research agenda to close the knowledge gaps so that policies and programs can be designed and evaluated to solve the global household air pollution problem.”

 

“Impact of Air Quality on Hospital Spending”
Romley, John A.; Hackbarth, Andrew; Goldman, Dana P. Rand Corporation, 2010.

Findings: Between 2005 and 2007, nearly 30,000 hospital admissions and emergency-room visits could have been avoided throughout California if federal clean-air standards had been met. These cases led to hospital-care costs of approximately $193 million. Medicare and Medicaid spent about $132 million on such hospital care, while the rest was incurred by private third-party purchasers. Five case studies of individual hospitals in Riverside, Fresno, Lynwood, Stanford and Sacramento show that the costs incurred by the different types of payers vary by region.

  

“The Impact of Pollution on Worker Productivity”
Graff Zivin, Joshua S.; Neidell, Matthew J. National Bureau of Economic Research, April 2011.

Abstract: “Environmental protection is typically cast as a tax on the labor market and the economy in general. Since a large body of evidence links pollution with poor health, and health is an important part of human capital, efforts to reduce pollution could plausibly be viewed as an investment in human capital and thus a tool for promoting economic growth. While a handful of studies have documented the impacts of pollution on labor supply, this paper is the first to rigorously assess the less visible but likely more pervasive impacts on worker productivity. In particular, we exploit a novel panel dataset of daily farm worker output as recorded under piece rate contracts merged with data on environmental conditions to relate the plausibly exogenous daily variations in ozone with worker productivity. We find robust evidence that ozone levels well below federal air quality standards have a significant impact on productivity: a 10 ppb decrease in ozone concentrations increases worker productivity by 4.2 percent.”

 

The Effect of Pollution on Labor Supply: Evidence From a Natural Experiment in Mexico City
Hanna, Rema; Oliva, Paulina. Journal of Public Economics, February 2015, Vol. 122. doi: 10.1016/j.jpubeco.2014.10.004.

Abstract:  “Moderate effects of pollution on health may exert important influences on work. We exploit exogenous variation in pollution due to the closure of a large refinery in Mexico City to understand how pollution impacts labor supply. The closure led to a 19.7 percent decline in pollution, as measured by SO2, in the surrounding neighborhoods. The closure led to a 1.3 h (or 3.5 percent) increase in work hours per week. The effects do not appear to be driven by differential labor demand shocks nor selective migration.”

 

“The Effects of Environmental Regulation on the Competitiveness of U.S. Manufacturing”
Greenstone, Michael; List, John A.; Syverson, Chad. MIT Department of Economics, September 2012.

Abstract: “The economic costs of environmental regulations have been widely debated since the U.S. began to restrict pollution emissions more than four decades ago. Using detailed production data from nearly 1.2 million plant observations drawn from the 1972-1993 Annual Survey of Manufactures, we estimate the effects of air quality regulations on manufacturing plants’ total factor productivity (TFP) levels. We find that among surviving polluting plants, stricter air quality regulations are associated with a roughly 2.6 percent decline in TFP. The regulations governing ozone have particularly large negative effects on productivity, though effects are also evident among particulates and sulfur dioxide emitters. Carbon monoxide regulations, on the other hand, appear to increase measured TFP, especially among refineries. The application of corrections for the confounding of price increases and output declines and sample selection on survival produce a 4.8% estimated decline in TFP for polluting plants in regulated areas. This corresponds to an annual economic cost from the regulation of manufacturing plants of roughly $21 billion, which is about 8.8% of manufacturing sector profits in this period.”

 

Transportation

“Residential Traffic Exposure and Childhood Leukemia: A Systematic Review and Meta-analysis”
Boothe, Vickie L.; Boehmer, Tegan K.; Wendel, Arthur M.; Yip, Fuyuen Y. American Journal of Preventive Medicine, April 2014, vol. 46, issue 4, 413-422. doi: 10.1016/j.amepre.2013.11.004.

Abstract: “Exposure to elevated concentrations of traffic-related air pollutants in the near-road environment is associated with numerous adverse human health effects, including childhood cancer, which has been increasing since 1975. Results of individual epidemiologic studies have been inconsistent. Therefore, a meta-analysis was performed to examine the association between residential traffic exposure and childhood cancer…. Current evidence suggests that childhood leukemia is associated with residential traffic exposure during the postnatal period, but not during the prenatal period. Additional well-designed epidemiologic studies that use complete residential history to estimate traffic exposure, examine leukemia subtypes, and control for potential confounding factors are needed to confirm these findings.”

   

“Measurement of Airborne Concentrations of Tire and Road Wear Particles in Urban and Rural areas of France, Japan and the United States”
Panko, Julie M.; Chu, Jennifer; Kreider, Marisa L.; Unice, Ken M. Atmospheric Environment, June 2013, Vol. 72.

Abstract: “Exhaust and non-exhaust vehicle emissions are an important source of ambient air respirable particulate matter (PM10). Non-exhaust vehicle emissions are formed from wear particles of vehicle components such as brakes, clutches, chassis and tires…. In this study, a global sampling program was conducted to quantify tire and road wear particles (TRWP) in the ambient air in order to understand potential human exposures and the overall contribution of these particles to the PM10. The sampling was conducted in Europe, the United States and Japan and the sampling locations were selected to represent a variety of settings including both rural and urban core; and within each residential, commercial and recreational receptors…. Results indicated that TRWP concentrations in the PM10 fraction were low with averages ranging from 0.05 to 0.70 μg m−3, representing an average PM10 contribution of 0.84%.”

 

The Effect of Beijing’s Driving Restrictions on Pollution and Economic Activity
Viarda, V. Brian; Fub, Shihe. Journal of Public Economics, May 2015. Vol. 125. doi:10.1016/j.jpubeco.2015.02.003.

Abstract: “We evaluate the pollution and labor supply reductions from Beijing’s driving restrictions. Causal effects are identified from both time-series and spatial variation in air quality and intra-day variation in television viewership. Based on daily data from multiple monitoring stations, air pollution falls 21 percent during one-day-per-week restrictions. Based on hourly television viewership data, viewership during the restrictions increases by 9 to 17 percent for workers with discretionary work time but is unaffected for workers without, consistent with the restrictions’ higher per-day commute costs reducing daily labor supply. We provide possible reasons for the policy’s success, including evidence of high compliance based on parking garage entrance records.”

 

“Traffic-Related Air Pollution, Particulate Matter and Autism”
Volk, Heather E.; Lurmann, Fred; Penfold, Bryan; Hertz-Picciotto, Irva; McConnell, Rob. JAMA Psychiatry. January 2013. doi:10.1001/jamapsychiatry.2013.266.

Abstract: “This population-based case-control study includes data obtained from children with autism and control children with typical development who were enrolled in the Childhood Autism Risks from Genetics and the Environment study in California. [The study found that] children with autism were more likely to live at residences that had the highest quartile of exposure to traffic-related air pollution, during gestation and during the first year of life, compared with control children. Regional exposure measures of nitrogen dioxide and particulate matter less than 2.5 and 10 μm in diameter (PM2.5 and PM10) were also associated with autism during gestation and during the first year of life.”

 

“Filtration Effectiveness of HVAC Systems at Near-roadway Schools”
McCarthy, M.C.; Ludwig, J.F.; Brown, S.G.; Vaughn, D.L.; Roberts, P.T. Indoor Air, January 2013, doi: 10.1111/ina.12015.

Abstract: “Concern for the exposure of children attending schools located near busy roadways to toxic, traffic-related air pollutants has raised questions regarding the environmental benefits of advanced heating, ventilation, and air-conditioning (HVAC) filtration systems for near-road pollution. Levels of black carbon and gaseous pollutants were measured at three indoor classroom sites and at seven outdoor monitoring sites at Las Vegas schools. Initial HVAC filtration systems effected a 31% to 66% reduction in black carbon particle concentrations inside three schools compared with ambient air concentrations. After improved filtration systems were installed, black carbon particle concentrations were reduced by 74% to 97% inside three classrooms relative to ambient air concentrations. Average black carbon particle concentrations inside the schools with improved filtration systems were lower than typical ambient Las Vegas concentrations by 49% to 96%. Gaseous pollutants were higher indoors than outdoors. The higher indoor concentrations most likely originated at least partially from indoor sources, which were not targeted as part of this intervention.”

 

Keywords: pollution, cars, cancer, coal, fossil fuels, research roundup, driving, China, premature death

The post The health effects and costs of air pollution: Research roundup appeared first on The Journalist's Resource.

Lifecycle greenhouse gas emissions from solar and wind energy: A critical meta-survey https://journalistsresource.org/environment/lifecycle-greenhouse-gas-emissions-solar-wind-energy/ Sat, 28 Nov 2015 16:59:29 +0000 http://live-journalists-resource.pantheonsite.io/?p=45214 2014 meta-analysis in Energy Policy that identifies robust studies in the current literature to better understand CO2 emissions from renewable energy facilities over their lifetimes.


In August 2015, the U.S. Environmental Protection Agency announced its new Clean Power Plan, which includes the first national standards for reducing carbon pollution from power plants. The plan requires states to reduce carbon-dioxide emission rates, while allowing flexibility in how the goals are achieved. For example, states could employ a mix of improving the efficiency of existing generation capacity, reducing demand through conservation methods, and, of course, increasing the proportion of electricity obtained from less carbon-intensive fuel sources.

Shortly before the plan was finalized, the U.S. Energy Information Administration released a detailed analysis of the plan’s probable impacts. The agency found that switching generation capacity from coal to natural gas was likely to be the dominant strategy employed by states, with a significant reduction in annual emissions — up to 625 million fewer metric tons of CO2 in 2030, depending on the scenario. Wind and solar are expected to play a “growing role” by the mid-2020s, and by 2030 would generate more than 400 billion kilowatt hours of electricity annually.

While renewables have long faced the challenge of providing backup capacity — fossil-fuel and nuclear plants currently play that role — a 2015 study in Nature Climate Change indicates that battery technology is advancing more rapidly than anticipated. In May the electric car manufacturer Tesla announced it was developing a battery system intended to store power generated from household solar during the day for nighttime use.

A key question with respect to renewable energy growth is the greenhouse gas emissions associated with specific technologies. While renewable power sources are themselves carbon-free — it’s just sunlight, wind and water, after all — the components and facilities have to be manufactured, built, and maintained, and at the end of their lives, plants must be retired or replaced, and their components disposed of or recycled. A landmark 2008 study in Energy Policy examined nuclear power from this perspective and found that the mean value of CO2 emissions over a reactor’s lifetime was 66 grams per kilowatt-hour of electricity — less than the best fossil fuel (natural gas), but more than the most carbon-intensive renewable (biomass).

A 2014 research review and meta-analysis published in Energy Policy, “Assessing the Lifecycle Greenhouse Gas Emissions from Solar PV and Wind Energy: A Critical Meta-Survey,” tackles this question for renewables. The authors were Daniel Nugent and Benjamin K. Sovacool of Vermont Law School; Sovacool is also at Aarhus University in Denmark and authored the 2008 nuclear-power study. In their research, they examined 153 studies on the life-cycle CO2 emissions of a range of wind and solar photovoltaic (PV) technologies, and selected 41 for deeper analysis, allowing the scholars to better understand the emissions of current technologies as well as pinpoint where emissions occur and under what circumstances, and thus how they might be reduced. All the studies chosen for inclusion were peer-reviewed, and more than 70% had been published in the five years before the review.

The metastudy’s key findings include:

  • Based on the studies examined, wind energy emits an average of 34.11 grams of CO2 per kWh over its lifetime, with a low estimate of 0.4 grams and a high estimate of 364.8 grams. The mean value for solar PV is 49.91 grams of CO2 per kWh, with a low estimate of 1 gram and a high estimate of 218 grams. The large ranges in the estimates were due to factors such as resource inputs, technology, location, sizing, capacity and longevity, as well as different calculation methods used by the source studies.
  • The sources of energy used to manufacture components can be critical: “The same manufacturing process in Germany would result in less than half of the total emissions that such a process would entail in China. This was primarily due to China’s significantly greater dependence on black coal for electricity production in comparison with Germany’s much greater reliance on natural gas and nuclear power.” (The same issue plays into the lifetime emissions of electric cars.)
  • The “material cultivation and fabrication stage” of renewable-energy facilities was responsible for the greatest proportion of emissions — just over 71% for both solar PV and wind. Facility construction and related transportation were responsible for 24% of wind’s lifetime CO2 emissions and 19% for solar PV, while operation contributed 19.4% of wind farms’ lifetime emissions and 13% for solar.

Sources of renewable-energy greenhouse-gas emissions

  • Decommissioning or reuse was a net gain for both solar and wind, offsetting the equivalent of 19.4% of a wind farm’s lifetime emissions and 3.3% of a solar PV facility’s. This is because “reclamation is not a standard practice for wind energy (the pads are often left or reused), and a majority of the steel towers, plastics, and fiberglass blades are recyclable.” These practices allow future emissions to be avoided.
  • On average, larger wind turbines were found to have lower lifetime emissions per kWh than smaller ones: “Higher capacity wind turbines, both with taller hub heights and larger rotor diameters, correspond to lower GHG intensities. Tremeac and Meunier (2009) compared a 4.5 MW turbine to a 250W version and found the smaller to have a GHG intensity approximately three times greater than that of the larger turbine.”
  • Solar GHG intensity also fell with increasing size, despite the fact that panels are modular and should theoretically have the same efficiency at all sizes. This was possibly due to gains in transportation and logistics.
  • Lifetime emissions decreased substantially as lifespan increased: Studies that assumed a 20-year turbine life resulted in an average of 40.69 grams per kWh, falling to 28.53 grams for 25 years and 25.33 grams for 30 years. Solar followed a similar pattern, with an even sharper drop over time, from 106.25 grams per kWh for five years to 17.5 grams per kWh over 20 years. A simplified amortization sketch after this list illustrates why longer lifespans dilute the per-kWh figure.
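The lifespan effect is largely a matter of amortization: most lifecycle emissions are incurred before a facility generates its first kilowatt-hour, so a longer operating life spreads that fixed carbon cost over more output. The sketch below is illustrative only; the turbine size, embodied emissions and capacity factor are hypothetical assumptions, not figures from the meta-survey.

```python
# Illustrative only: why grams of CO2 per kWh fall as the assumed lifespan grows.
# The turbine size, embodied emissions and capacity factor below are hypothetical,
# not figures taken from the Nugent and Sovacool meta-survey.

HOURS_PER_YEAR = 8760

def lifetime_intensity(embodied_tonnes_co2, capacity_mw, capacity_factor, years):
    """Grams of CO2 per kWh when fixed embodied emissions are spread over lifetime output."""
    lifetime_kwh = capacity_mw * 1000 * capacity_factor * HOURS_PER_YEAR * years
    return embodied_tonnes_co2 * 1_000_000 / lifetime_kwh  # tonnes -> grams

# Hypothetical 2 MW turbine, 2,500 tonnes of embodied CO2, 30% capacity factor
for years in (20, 25, 30):
    print(f"{years}-year life: {lifetime_intensity(2500, 2.0, 0.30, years):.1f} g CO2/kWh")
```

The meta-survey’s figures do not fall strictly in proportion to lifespan, because the underlying studies also differ in technology, siting and calculation methods, but the amortization logic explains the overall direction of the trend.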

“By spotlighting the lifecycle stages and physical characteristics of these technologies that are most responsible for emissions, improvements can be made to lower their carbon footprint,” the authors state. Looking forward, they recommend that future studies should be more methodologically rigorous, and that key questions such as the impact of energy storage on lifetime emissions be examined.

Related research: A 2015 report in the Annual Review of Environment and Resources, “Opportunities for and Alternatives to Global Climate Regimes Post-Kyoto,” compares international policies for mitigation of climate change to the Kyoto Protocol. A 2014 study in Renewable Energy, “How Does Wind Farm Performance Decline with Age?” examined the long-term performance of 282 wind farms in the United Kingdom.

 

Keywords: renewable energy, wind power, solar power, solar photovoltaics, lifecycle assessment, metastudy, greenhouse gases, climate change

The post Lifecycle greenhouse gas emissions from solar and wind energy: A critical meta-survey appeared first on The Journalist's Resource.

France, Islam, terrorism and the challenges of integration: Research roundup https://journalistsresource.org/politics-and-government/france-muslims-terrorism-integration-research-roundup/ Mon, 16 Nov 2015 13:00:58 +0000 http://live-journalists-resource.pantheonsite.io/?p=42417 2015 review of research related to Muslims in France, and the terrorist attacks on the French satirical news outlet Charlie Hebdo and a chemical plant near Lyon.


A Nov. 13, 2015, string of terrorist attacks across Paris that killed 129 people has again raised concerns across French society about jihadist violence and ISIS-inspired domestic terrorism. The tragedy comes in the wake of several other attacks in France earlier in 2015, including an assault on an American-owned chemical factory near Lyon in June and two incidents in January, when 12 people were murdered at the satirical news outlet Charlie Hebdo and, days later, four hostages were killed at a kosher supermarket.

Like other European nations, France has a long and complicated relationship with the Muslim world and its own immigrant population, many of whom have been in the country for generations. French Muslims are highly diverse, and some are secular while others are observant. One of the policemen killed in the Charlie Hebdo attacks, Ahmed Merabet, was Muslim. Some are at the center of society — soccer player Zinedine Zidane, born in Marseille to Algerian parents, led France to a World Cup victory in 1998 — but large segments of the population remain excluded. Research from INSEE, France’s national statistical agency, indicates that in 2013, the unemployment rate for all immigrants was approximately 17.3%, nearly 80% higher than the non-immigrant rate of 9.7%, and descendants of immigrants from Africa have a significantly more difficult time finding work. The report found that education and skill levels explained only 61% of the difference in employment rates between descendants of African immigrants and those whose parents were born in France.

According to a 2012 report from INSEE, approximately 11% of the population was born outside the country, primarily in Algeria, Morocco and Portugal. For comparison, 12.9% of the U.S. population was foreign born in 2010, while the figure for Canada was 20.6% in 2011. In 2013 the number of immigrants living in France was 5.8 million. With arrivals and departures, the increase per year is approximately 90,000 (0.14% of France’s population of 66 million in 2013), with most growth in recent years in immigrants from within Europe — Portugal, the U.K., Spain, Italy and Germany. The average education level of immigrants has been rising over time.

By law the French government is prohibited from asking about or keeping data on its citizens’ race and religion. A 2015 report from the Pew Research Center indicates that 7.5% of French residents are of Muslim descent, but does not indicate their degree of religiosity. However, a 2007 Brookings Institution book, Integrating Islam, estimated there were 5 million French residents of Muslim heritage, approximately 7.8% of the country’s population at that time (64.1 million). The authors, Jonathan Laurence and Justin Vaisse, estimate that the rate of self-affiliation of French residents of Muslim descent with Islam was approximately the same as for French people of Catholic heritage with Catholicism, 66%. This would indicate that 3.3 million French residents were to some degree observant Muslims in that year, or 5.1% of the population. Any such figures should be used with great caution, as they’re necessarily imperfect.

Reactions to the attacks

Examining the January 2015 incident may provide clues about the direction of French society in the wake of another attack. Immediately after the Charlie Hebdo killings there were hundreds of spontaneous mass demonstrations across Europe condemning the senseless violence, defending the liberty of the press and urging tolerance. A January 11 march calling for unity brought together over 1.3 million people, including more than 40 present and former heads of state. In May, France passed a wide-reaching surveillance law intended to improve the ability of the country’s intelligence services to identify potential terrorists. While the law was strongly supported by the government, some condemned it as paving the way for mass surveillance on the order of that undertaken by the National Security Agency (NSA) in the United States.

In general, French society is more tolerant of religious mockery and satire than some other Western nations. Charlie Hebdo’s fierce independence has long attracted admiration and criticism, as has its relentless pursuit of politicians and public figures who abuse the public trust. Nothing was sacred, least of all religion: Child abuse by Catholic priests and violence by self-proclaimed protectors of Islam were both considered fair game. After a Danish newspaper published cartoons of the Prophet Muhammad in 2005, Charlie Hebdo printed them again to demonstrate the importance of the free press in an open society. Its offices were firebombed in 2011 after an issue that was supposedly “guest edited” by Muhammad, and the publication had since regularly received threats of violence.

The gunmen who attacked Charlie Hebdo, Chérif and Saïd Kouachi, were killed two days later after a standoff with French security forces. Born in France to Algerian immigrant parents, the brothers were not raised religious, but Chérif, the younger, converted to Islam around 2003 and was arrested in 2005 while preparing to join the war in Iraq; Saïd reportedly trained in Yemen in 2011. A 2014 report from the French National Assembly found that, as of that July, 899 French residents had been implicated in Islamist networks.

Secularism and security

France is a secular republic, with a strict separation of church and state, as epitomized by a 2004 law that reasserted the right of the government to exclude “conspicuous” religious symbols such as crosses, skullcaps and headscarves from public schools. In 2011, a separate law banned the wearing of full-face veils in public places. According to a 2010 Pew report, before the full-veil ban was passed in France there was widespread support for the measure across Western Europe, with 82% of the public in France supporting it. France’s right-wing party, the National Front, has worked to fan fears over Islam, immigration and terrorism — and to conflate them — part of a larger movement of populist political parties that have grown in influence. Nevertheless, 2014 data from Pew indicate that 72% of the French public had favorable views of Muslims.

At the same time, tracking and neutralizing potential terrorists has long been a concern for French security agencies. The country fought a brutal war in Algeria in the 1950s and 1960s, and during Algeria’s civil war in the 1990s the violence often spilled over into France; in 1995 an Islamic group carried out attacks that killed eight people and injured hundreds across the country. Incidents large and small have occurred since then, including the shooting spree by Mohamed Merah in 2012. In December 2014, 10 people suspected of being part of a jihadist network were arrested, and a charity accused of financing terrorism was shut down. Complicating matters is France’s assertive presence in anti-terrorism operations in the Middle East and Africa: Its air force has carried out raids against ISIS, and it leads operations in Mali against Islamist forces.

Free speech in the crosshairs

The 2015 Charlie Hebdo attack differed from earlier terrorist events not only in its scale (it was reportedly the deadliest attack in France since 1961) but also in its focus on the press and freedom of expression. Organizations such as the Committee to Protect Journalists (CPJ) have pointed out that 2014 was another particularly violent year, with at least 60 journalists killed worldwide (where the motive could be confirmed) and some 18 other cases still under review. The number killed in 2013 was 70, including Ghislaine Dupont and Claude Verlon of Radio France International. Leaders in the journalism community have spoken of a “new war on journalists.” CPJ continues to monitor countries where crimes against the press are met with a degree of impunity and murders of journalists typically go unsolved.

After the attack, the French government, Google and the Guardian donated the equivalent of $1.8 million to help Charlie Hebdo recover. The magazine published its next issue on schedule, with a press run of 7 million copies, nearly 120 times its regular circulation. The edition was published in at least five languages, including Arabic, with distribution in 25 countries. The cover of the “survivors’ edition” featured a caricature of Muhammad with a “Je Suis Charlie” sign, and the first print run reportedly sold out in hours. Some international news sites reproduced the cover, including the Guardian, while others, such as the New York Times, did not. The public editor of the Times criticized the newspaper’s decision not to show it.

Below is a selection of relevant research on France’s immigrant-origin communities, terrorism coverage in France and related issues.

—————–

“Developing Terrorism Coverage: Variances in News Framing of the January 2015 Attacks in Paris and Borno”
Nevalsky, Eric Chien. Critical Studies on Terrorism, November 2015. doi: 10.1080/17539153.2015.1096656.

Abstract: “Written news coverage of an event influences public perception and understanding of that event. Through agenda setting and news framing, journalists control the importance and substance of readers’ beliefs about the event. While existing research has been conducted on the relationship between media coverage and the geographic location of the country an event took place in, there is limited understanding of this relationship in terms of terrorist events. Utilizing an agenda-setting theory and news framing theory lens to compare news coverage of the January 2015 terrorist attacks in Paris, France, and Borno, Nigeria revealed significant variances in the overall coverage, headline style and discourse usage based on the event. In particular, the American news coverage positively framed France through detailed, sympathetic coverage and negatively framed Nigeria by overgeneralizing and placing blame. Determining the origin and impacts of these variances is integral to forming a more comprehensive understanding of international terrorism and the most effective ways to combat it.”

 

“Should I Stay or Should I Go? Explaining Variation in Western Jihadists’ Choice between Domestic and Foreign Fighting”
Hegghammer, Thomas. American Political Science Review, February 2013, 107(1), 1-15. doi: 10.1017/S0003055412000615.

Findings: “This article studies variation in conflict theater choice by Western jihadists in an effort to understand their motivations. Some militants attack at home, whereas others join insurgencies abroad, but few scholars have asked why they make these different choices. Using open-source data, I estimate recruit supply for each theater, foreign fighter return rates, and returnee impact on domestic terrorist activity. The tentative data indicate that jihadists prefer foreign fighting, but a minority attacks at home after being radicalized, most often through foreign fighting or contact with a veteran. Most foreign fighters do not return for domestic operations, but those who do return are more effective operatives than nonveterans. The findings have implications for our understanding of the motivations of jihadists, for assessments of the terrorist threat posed by foreign fighters, and for counterterrorism policy.”

 

“Identifying Barriers to Muslim Integration in France”
Adida, Claire L.; Laitin, David D.; Valfort, Marie-Anne. Proceedings of the National Academy of Sciences (PNAS), 2010, Vol. 107, No. 52, doi: 10.1073/pnas.1015550107.

Abstract: “Is there a Muslim disadvantage in economic integration for second-generation immigrants to Europe? Previous research has failed to isolate the effect that religion may have on an immigrant family’s labor market opportunities because other factors, such as country of origin or race, confound the result. This paper uses a correspondence test in the French labor market to identify and measure this religious effect. The results confirm that in the French labor market, anti-Muslim discrimination exists: a Muslim candidate is 2.5 times less likely to receive a job interview callback than is his or her Christian counterpart. A high-n survey reveals, consistent with expectations from the correspondence test, that second-generation Muslim households in France have lower income compared with matched Christian households. The paper thereby contributes to both substantive debates on the Muslim experience in Europe and methodological debates on how to measure discrimination. Following the National Academy of Sciences’ 2001 recommendations on combining a variety of methodologies and applying them to real-world situations, this research identifies, measures, and infers consequences of discrimination based on religious affiliation, controlling for potentially confounding factors, such as race and country of origin.”

 

“‘One Muslim Is Enough!’ Evidence from a Field Experiment in France”
Adida, Claire L.; Laitin, David D.; Valfort, Marie-Anne. Discussion Paper series, 2011, Forschungsinstitut zur Zukunft der Arbeit/Institute for the Study of Labor (IZA), No. 6122.

Abstract: “Anti-Muslim prejudice is widespread in Western countries. Yet, Muslims are expected to constitute a growing share of the total population in Western countries over the next decades. This paper predicts that this demographic trend will increase anti-Muslim prejudice. Relying on experimental games and a formal model, we show that the generosity of rooted French toward Muslims is significantly decreased with the increase of Muslims in their midst, and demonstrate that these results are driven by the activation of rooted French taste-based discrimination against Muslims when Muslim numbers increase. Our findings call for solutions to anti-Muslim prejudice in the West.”

 

The Republic Unsettled: Muslim French and the Contradictions of Secularism
Fernando, Mayanthi L. Duke University Press, 2014.

Description: “In 1989 three Muslim schoolgirls from a Paris suburb refused to remove their Islamic headscarves in class. The headscarf crisis signaled an Islamic revival among the children of North African immigrants; it also ignited an ongoing debate about the place of Muslims within the secular nation-state. Based on ten years of ethnographic research, The Republic Unsettled alternates between an analysis of Muslim French religiosity and the contradictions of French secularism that this emergent religiosity precipitated. Mayanthi L. Fernando explores how Muslim French draw on both Islamic and secular-republican traditions to create novel modes of ethical and political life, reconfiguring those traditions to imagine a new future for France. She also examines how the political discourses, institutions, and laws that constitute French secularism regulate Islam, transforming the Islamic tradition and what it means to be Muslim. Fernando traces how long-standing tensions within secularism and republican citizenship are displaced onto France’s Muslims, who, as a result, are rendered illegitimate as political citizens and moral subjects. She argues, ultimately, that the Muslim question is as much about secularism as it is about Islam.”

 

“Opposing Muslims and the Muslim Headscarf in Western Europe”
Helbling, Marc. European Sociological Review, 2014, pp. 1-16. doi: 10.1093/esr/jct038

Abstract: “While Muslims have a surprisingly good reputation in Western Europe, the wearing of the headscarf in schools is opposed by a large majority of people. Several arguments are developed in this article to explain why people make a distinction between Muslims as a group and legislation on their religious practices. While attitudes towards Muslims vary little across countries, there is a lot of variation in levels of opposition to the headscarf. It appears that the more state and church are separated in a country or the more a state discriminates against religious groups the more opposed people are to allowing new religious practices in schools. At the individual level this article will test the extent to which general xenophobic attitudes, liberal values, and religiosity help us understand why attitudes differ. The article will show, among other things, that religious people are opposed to Muslims but not the rules that allow them to practice their religion. On the other hand, people with liberal values are tolerant of Muslims as a group but feel torn when it comes to legislation on religious practices such as the wearing of the headscarf, which for some people stands for the illiberal values of Islam. Data from a survey in six Western European countries will be analyzed. Despite all the heated political debates this is one of the first studies that analyses attitudes towards Muslim immigrants across a number of countries, and for the first time attitudes towards Muslims as a group and legislation on the headscarf are compared.”

 

“Widespread Support for Banning Full Islamic Veil in Western Europe”
The Pew Global Attitudes Project, July 2010.

Excerpt: “A survey by the Pew Research Center’s Global Attitudes Project, conducted April 7 to May 8 [2010], finds that the French public overwhelmingly endorses this measure; 82% approve of a ban on Muslim women wearing full veils in public, including schools, hospitals and government offices, while just 17% disapprove…. Majorities in Germany (71%), Britain (62%) and Spain (59%) would also support a similar ban in their own countries. In contrast, most Americans would oppose such a measure; 65% say they would disapprove of a ban on Muslim women wearing full veils in public places compared with 28% who say they would approve.”

 

“Migrating Concepts: Immigrant Integration and the Regulation of Religious Dress in France and Canada”
Lépinard, Eléonore. Ethnicities, April 2014. doi: 10.1177/1468796814529939.

Abstract: “Religion in general, and Islam in particular, has become one of the main focal points of policy-making and constitutional politics in many Western liberal states. This article proposes to examine the legal and political dynamics behind new regulations targeting individual religious practices of Muslims. Although one could presuppose that church-state relations or the understanding of secularism is the main factor accounting for either accommodation or prohibition of Muslim religious practices, I make the case that the policy frame used to conceptualize the integration of immigrants in each national context is a more significant influence on how a liberal state approaches the legal regulation of individual practices such as veiling. However, this influence must be assessed carefully since it may have different effects on the different institutional actors in charge of regulating religion, such as the Courts and the legislature. To assess these hypotheses I compare two countries, France and Canada, which are solid examples of two contrasting national policy frames for the integration of immigrants.”

 

“Surveying the Landscape of Integration: Muslim Immigrants in the United Kingdom and France”
Jacobson, David; Deckard, Natalie Delia. Democracy and Security, May 2014, Vol. 10, Issue 2. doi: 10.1080/17419166.2014.908283.

Abstract: “This research explores how the beliefs of Muslim immigrants living in France compare to those of their counterparts in the United Kingdom. We conducted a survey of 400 Muslims in each nation and noted significant differences between them. We found that British Muslims felt less positively about the West and its influence in the Muslim world than did French Muslims. British Muslims were more likely to prioritize loyalty to the Ummah and to perceive hostility toward Islam. The findings are suggestive of a disparity in the immigrant experience in the two nations, as well as in the effectiveness of their government integration strategies.”

 

“Islamophobia: Understanding Anti-Muslim Sentiment in the West”
Gallup World report, 2012.

Excerpt: “Gallup collected data in 2008 from representative samples in Germany, France, and the U.K., focusing on several issues related to the social and cultural integration of Muslim communities in these three countries. And while majorities of the adults in these countries agree that people from minority groups enrich the cultural life of their nations, sizable minorities of these respondents express fear about certain aspects of Muslim culture. Only the general population in the U.K. includes a sizable minority — more than one-quarter — that says people with different religious practices than theirs threaten respondents’ way of life. The Muslim populations in France, Germany, and the U.K. are less likely than the general public in these countries to say those with differing religious practices threaten their way of life. Between 16% and 21% of people in France, Germany and the U.K. say they would not like Muslims as their neighbors, similar to the percentages of each country’s general population that say they would not like homosexuals as neighbors. Generally, people in these countries are more likely to say they would not like Muslims as neighbors than they are to say the same about Jews, Christians, atheists, blacks, and Asians. An exception exists in the U.K., though, where 22% of people say they would not like immigrants or foreign workers as neighbors…. Overall, however, large majorities of French (90%), British (90%) and German (95%) respondents say they have not experienced racial or religious discrimination in the past year. Among Muslims in each of these three countries, those in France and Germany are significantly more likely than the general population to say they experienced discrimination in the past year.”

 

“European Converts to Islam: Mechanisms of Radicalization”
Karagiannis, Emmanuel. Politics, Religion & Ideology, 2012, Volume 13, Issue 1, doi: 10.1080/21567689.2012.659495.

Abstract: “While European converts to Islam represent only a tiny percentage of Europe’s Muslim population, members of that group have participated in major Islamist terrorist plots and attacks on European soil. Although the radicalization process has not been the same for all individuals, it could be still possible to understand the circumstances under which some European converts turned to violence. Therefore, the article focuses on a number of mechanisms that may have contributed to the radicalization of European jihadi converts, including personal victimization, political grievance, the slippery-slope effect, the power of love and the inspirational preaching.”

 

 

Keywords: Islam, terrorism, religion, news framing, Amedy Coulibaly, Cherif Kouachi, Said Kouachi, Lassana Bathily, #jesuischarlie, #jesuisahmed, research roundup

The post France, Islam, terrorism and the challenges of integration: Research roundup appeared first on The Journalist's Resource.

The U.S. Veterans Affairs Department and challenges to providing care for service members: Research roundup https://journalistsresource.org/criminal-justice/veterans-affairs-department-health-care-hospitals/ Tue, 10 Nov 2015 12:32:46 +0000 http://live-journalists-resource.pantheonsite.io/?p=39113 2014 roundup of recent research on the difficulties that retired troops face when using V.A. services and the challenges to reforming the organization and improving its efficiency.


In September 2015, a third-party report prepared for the U.S. Department of Veterans Affairs was released detailing a “leadership crisis” within the health care delivery system that serves over 9.1 million veterans. This report was mandated by the Veterans Access, Choice, and Accountability Act of 2014, which Congress passed amid accusations that V.A. health care facilities were underserving their patients. In some cases, delayed care was blamed for the deaths of veterans, some of whom were put on “secret lists” meant to falsify documented patient wait times at V.A. facilities.

The report came on the heels of a tumultuous year and a half for the V.A. In May 2014, the department’s Inspector General launched an investigation after managers of a V.A. hospital in Phoenix, Ariz., were accused of concealing months-long wait times; the probe eventually widened to include 26 medical facilities. Dr. Robert Petzel, the V.A. undersecretary for health, quickly resigned, followed by Secretary Eric Shinseki. Two months later, Robert McDonald was appointed to the position with a mandate to address long-standing problems at the agency. He soon faced criticism that he wasn’t moving fast enough. In July 2015, McDonald appeared at a hearing of the House Veterans Affairs Committee, at which he asked for funds to close the $2.5 billion gap in his department’s 2015 budget. During the hearing, representatives questioned McDonald about why these concerns were not raised sooner and about his department’s lack of accountability.

The intense pressure on V.A. facilities is the consequence of a number of interlinked factors. Thanks to improved medical care, more service members survive battle — currently, 16 are wounded for every one killed, compared to 2.6 soldiers wounded for every one killed in Vietnam. When soldiers do come home, their injuries can be more profound and the care required more involved. Thousands suffer from post-traumatic stress disorder (PTSD), many have lost one or more limbs, and even injuries that seem to leave no external sign can have a severe impact, including traumatic brain injury (TBI). Meanwhile, many of the V.A.’s technical patient management systems are out of date, leading to considerable duplication, delays and errors.

The proposed budget for the V.A. — which enrolls 9.1 million veterans of the estimated 21.9 million living U.S. veterans — in fiscal year 2015 is $158.6 billion, according to a May 2014 Congressional Research Service (CRS) report, which also notes that in 2015 the V.A. “anticipates treating more than 757,000 Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn (OND) veterans.” Between fiscal years 2011 and 2014, the number of V.A. enrollees increased 6.3%.

The Department of Defense estimates that more than 50,000 service members have been wounded in action during the Global War on Terror conflicts, but that figure does not fully capture mental health needs or still-emerging disabilities. A 2013 CRS report provides a detailed look at all casualty statistics and spells out the extent of the mental health injuries. A 2014 survey by the Washington Post and Kaiser Family Foundation of Iraq and Afghanistan veterans found deep dissatisfaction with current levels of government care. The Iraq and Afghanistan Veterans of America (IAVA) issued a 2014 report exploring the troubles with the V.A. claims backlog and detailing the experiences of service members.

A 2013 report from the Harvard Kennedy School estimates that spent or accrued costs for post-9/11 veterans’ medical and disability care are already $134.3 billion, and may run as high as $970.4 billion by 2053. For more, see Brown University’s “Costs of War” project, which states as one of its goals “to identify less costly and more effective ways to prevent further terror attacks.”

As The New Republic noted in a May 21, 2014, article, the V.A.’s problems are anything but new: A 2001 report from the General Accounting Office (now the Government Accountability Office) warned that wait times were often excessive even then. In 2007 the Army general in charge of the Walter Reed medical center was fired after the Washington Post revealed poor living conditions and excessive red tape at the facility. A presidential commission recommended “fundamental changes” to the V.A. system, but change has been slow to come. In 2013, a whistleblower revealed chronic understaffing and life-threatening medical mistakes at a Mississippi V.A. hospital, and in July of that year the Department of Veterans Affairs released a “strategic plan” to eliminate the claims backlog.

Below is a roundup of other background research on the Veterans Affairs Department and the challenges it faces in providing care to former soldiers, now and in the future. For journalists covering veterans issues, the American Journal of Public Health publishes a wide range of studies, including new research on suicide risks, gender disparities and the challenges of providing care to homeless veterans.

 

——————————

“Access to the U.S. Department of Veterans Affairs Health System: Self-reported Barriers to Care Among Returnees of Operations Enduring Freedom and Iraqi Freedom”
Elnitsky, Christine A.; et al. BMC Health Services Research, December 2013, 13:498. doi: 10.1186/1472-6963-13-498.

Abstract: “The U.S. Department of Veterans Affairs (VA) implemented the Polytrauma System of Care to meet the health care needs of military and veterans with multiple injuries returning from combat operations in Afghanistan and Iraq…. We studied combat veterans (n = 359) from two polytrauma rehabilitation centers using structured clinical interviews and qualitative open-ended questions, augmented with data collected from electronic health records. Our outcomes included several measures of exclusive utilization of VA care with our primary exposure as reported access barriers to care. Results: Nearly two thirds of the veterans reported one or more barriers to their exclusive use of VA healthcare services. These barriers predicted differences in exclusive use of VA healthcare services. Experiencing any barriers doubled the returnees’ odds of not using VA exclusively, the geographic distance to VA barrier resulted in a seven-fold increase in the returnees’ odds of not using VA, and reporting a wait time barrier doubled the returnee’s odds of not using VA. There were no striking differences in access barriers for veterans with polytrauma compared to other returning veterans, suggesting the barriers may be uniform barriers that predict differences in using the VA exclusively for health care.”

 

“Health Care Spending and Efficiency in the U.S. Department of Veterans Affairs”
Auerbach, David I.; Weeks, William B.; Brantley, Ian. RAND Corporation, 2013.

Abstract: “In its 2013 budget request, the Obama administration sought $140 billion for the U.S. Department of Veterans Affairs (VA), 54% of which would provide mandatory benefits, such as direct compensation and pensions, and 40% of which is discretionary spending, earmarked for medical benefits under the Veterans Health Administration (VHA). Unlike Medicare, which provides financing for care when its beneficiaries use providers throughout the U.S. health care system, the VHA is a government-run, parallel system that is primarily intended for care provision of veterans. The VHA hires its own doctors and has its own hospital network infrastructure. Although the VHA provides quality services to veterans, it does not preclude veterans from utilizing other forms of care outside of the VHA network — in fact, the majority of veterans’ care is received external to the VHA because of location and other system limitations. Veterans typically use other private and public health insurance coverage (for example, Medicare, Medicaid) for external care, and many use both systems in a given year (dual use). Overlapping system use creates the potential for duplicative, uncoordinated, and inefficient use. The authors find some suggestive evidence of such inefficient use, particularly in the area of inpatient care. Coordination management and quality of care received by veterans across both VHA and private sector systems can be optimized (for example, in the area of mental illness, which benefits from an integrated approach across multiple providers and sectors), capitalizing on the best that each system has to offer, without increasing costs.”

 

“Recovering Servicemembers and Veterans: Sustained Leadership Attention and Systematic Oversight Needed to Resolve Persistent Problems Affecting Care and Benefits”
Government Accountability Office, November 2012, GAO-13-5.

Findings: “Deficiencies exposed at Walter Reed Army Medical Center in 2007 served as a catalyst compelling the Departments of Defense (DOD) and Veterans Affairs (VA) to address a host of problems for wounded, ill, and injured servicemembers and veterans as they navigate through the recovery care continuum. This continuum extends from acute medical treatment and stabilization, through rehabilitation to reintegration, either back to active duty or to the civilian community as a veteran. In spite of 5 years of departmental efforts, recovering servicemembers and veterans are still facing problems with this process and may not be getting the services they need. Key departmental efforts included the creation or modification of various care coordination and case management programs, including the military services’ wounded warrior programs. However, these programs are not always accessible to those who need them due to the inconsistent methods, such as referrals, used to identify potentially eligible servicemembers, as well as inconsistent eligibility criteria across the military services’ wounded warrior programs. The departments also jointly established an integrated disability evaluation system to expedite the delivery of benefits to servicemembers. However, processing times for disability determinations under the new system have increased since 2007, resulting in lengthy wait times that limit servicemembers’ ability to plan for their future. Finally, despite years of incremental efforts, DOD and VA have yet to develop sufficient capabilities for electronically sharing complete health records, which potentially delays servicemembers’ receipt of coordinated care and benefits as they transition from DOD’s to VA’s health care system.”

 

“Department of Veterans Affairs: Strategic Plan to Eliminate the Compensation Claims Backlog”
Department of Veterans Affairs, January 2013

Introduction: “The VBA completed a record-breaking 1 million claims per year in fiscal years 2010, 2011, and 2012. Yet the number of claims received continues to exceed the number processed. In 2010 VBA received 1.2M claims. In 2011, VBA received another 1.3M claims, including claims from veterans made eligible for benefits as a result of the Secretary’s decision to add three new presumptive conditions for Veterans exposed to Agent Orange. In 2012, VBA received 1.08M claims. Over the last three years, the claims backlog has grown from 180 thousand to 594 thousand claims…. But too many veterans have to wait too long to get the benefits they have earned and deserve. These delays are unacceptable. This report outlines VA’s robust plan to tackle this problem and build a paperless, digital disability claims system — a lasting solution that will transform how we operate and ensure we achieve the Secretary’s goal of eliminating the claims backlog and improving decision accuracy to 98 percent in 2015.”

 

“Departments of Defense and Veterans Affairs: Status of the Integrated Electronic Health Record (iEHR)”
Panangala, Sidath Viranga; Jansen, Don J. Congressional Research Service, 2013.

Introduction: “In December 2010, the Deputy Secretaries of [the Department of Defense and the Veterans Administration] directed the development of an integrated Electronic Health Record (iEHR), which would provide both Departments an opportunity to reduce costs and improve interoperability and connectivity. On March 17, 2011, the Secretaries of DOD and VA reached an agreement to work cooperatively on the development of a common electronic health record and to transition to the new iEHR by 2017. On February 5, 2013, the Secretary of Defense and the Secretary of Veterans Affairs announced that instead of building a single integrated electronic health record (iEHR), both DOD and VA will concentrate on integrating VA and DOD health data by focusing on interoperability and using existing technological solutions. This announcement was a departure from the previous commitments that both Departments had made to design and build a new single iEHR, rather than upgrading their current electronic health records and trying to develop interoperability solutions…. It is unclear at this time what the long-term implications of the most recent change in the program strategy will be.”

 

“Uninsured Veterans and Family Members: Who Are They and Where Do They Live?”
Haley, Jennifer; Kenney, Genevieve M. Urban Institute, May 2012.

Findings: Approximately 1 in 10 — 1.3 million — of the country’s 12.5 million nonelderly veterans did not have health insurance coverage or access to Veterans Affairs (VA) health care as of 2010. When family members of veterans are included, the uninsured total rises to 2.3 million. An additional 900,000 veterans use VA health care but have no other coverage. Nearly 50% of uninsured veterans have incomes at or below 138% of the Federal Poverty Line ($30,429 for a family of four in 2010). Under the Affordable Care Act (ACA), they would qualify for coverage as of January 2014. Another 40.1% of veterans and 49% of their families have incomes that qualify for new subsidies through the ACA’s health insurance exchanges. The uninsured rate among veterans is 12.3% in states that have made the least progress on exchange implementation, compared with 9.6% to 9.8% in states that have made the most.

 

“Improving Trends in Gender Disparities in the Department of Veterans Affairs: 2008–2013”
Whitehead, Alison M.; et al. American Journal of Public Health, September 2014, Vol. 104, No. S4, S529-S531, doi: 10.2105/AJPH.2014.302141.

Abstract: “Increasing numbers of women veterans using Department of Veterans Affairs (VA) services has contributed to the need for equitable, high-quality care for women. The VA has evaluated performance measure data by gender since 2006. In 2008, the VA launched a 5-year women’s health redesign, and, in 2011, gender disparity improvement was included on leadership performance plans. We examined data from VA Office of Analytics and Business Intelligence quarterly gender reports for trends in gender disparities in gender-neutral performance measures from 2008 to 2013. Through reporting of data by gender, leadership involvement, electronic reminders, and population management dashboards, VA has seen a decreasing trend in gender inequities on most Health Effectiveness Data and Information Set performance measures.”

 

“Racial Disparities in Cancer Care in the Veterans Affairs Health Care System and the Role of Site of Care”
Samuel, Cleo A.; et al. American Journal of Public Health, September 2014, Vol. 104, No. S4, S562-S571. doi: 10.2105/AJPH.2014.302079

Abstract: “We assessed cancer care disparities within the Veterans Affairs (VA) health care system and whether between-hospital differences explained disparities…. Compared with Whites, Blacks had lower rates of early-stage colon cancer diagnosis; curative surgery for stage I, II, or III rectal cancer; 3-year survival for colon cancer; curative surgery for early-stage lung cancer; 3-dimensional conformal or intensity-modulated radiation; and potent antiemetics for highly emetogenic chemotherapy…. Conclusions: Disparities in VA cancer care were observed for 7 of 20 measures and were primarily attributable to within-hospital differences.”

 

“Retaining Homeless Veterans in Outpatient Care: A Pilot Study of Mobile Phone Text Message Appointment Reminders”
McInnes, D. Keith; et al. American Journal of Public Health, September 2014, Vol. 104, No. S4, S588-S594. doi: 10.2105/AJPH.2014.302061.

Abstract: “We examined the feasibility of using mobile phone text messaging with homeless veterans to increase their engagement in care and reduce appointment no-shows… Results: Participants were satisfied with the text-messaging intervention, had very few technical difficulties, and were interested in continuing. Patient-cancelled visits and no-shows trended downward from 53 to 37 and from 31 to 25, respectively. Participants also experienced a statistically significant reduction in emergency department visits, from 15 to 5 and a borderline significant reduction in hospitalizations, from 3 to 0. Conclusions: Text message reminders are a feasible means of reaching homeless veterans, and users consider it acceptable and useful. Implementation may reduce missed visits and emergency department use, and thus produce substantial cost savings.”

 

Keywords: veterans affairs, health care, PTSD, TBI, cost-effectiveness, military budgets and defense, research roundup

The post The U.S. Veterans Affairs Department and challenges to providing care for service members: Research roundup appeared first on The Journalist's Resource.

The growing problem of Internet “link rot” and best practices for media and online publishers https://journalistsresource.org/media/website-linking-best-practices-media-online-publishers/ Fri, 09 Oct 2015 22:19:44 +0000 http://live-journalists-resource.pantheonsite.io/?p=40729 Ten linking “best practices,” with an emphasis on stability and transparency. The goal is to reduce the chance that links will go bad, minimize the work going forward and maximize the utility for users.


The Internet is an endlessly rich world of sites, pages and posts — until it all ends with a click and a “404 page not found” error message. While the hyperlink was conceived in the 1960s, it came into its own with the HTML protocol in 1991, and there’s no doubt that the first broken link soon followed.

On its surface, the problem is simple: A once-working URL is now a goner. The root cause can be any of a half-dozen things, however, and sometimes more: Content could have been renamed, moved or deleted, or an entire site could have evaporated. Across the Web, the content, design and infrastructure of millions of sites are constantly evolving, and while that’s generally good for users and the Web ecosystem as a whole, it’s bad for existing links.

In its own way, the Web is also a very literal-minded creature, and all it takes is a single-character change in a URL to break a link. For example, many sites have stopped using “www,” and even if their content remains the same, the original links may no longer work. The same can occur with the shift from “http:” to “https:”. The rise of CMS platforms such as WordPress and Drupal has led to the fall of static HTML sites, and with each relaunch, untold thousands of links die.

Even if a core URL remains the same, many sites frequently append login information or search terms to URLs, and those are ephemeral. And as the Web has grown, the problem has been complicated by Google and other search engines that crawl the Web and archive — briefly — URLs and pages. Many work, but their long-term stability is open to question.
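One modest defense, when you control the link you publish, is to strip those ephemeral parameters before the URL goes into a story. Below is a minimal sketch using Python’s standard library; the parameter names it treats as disposable (session IDs, search terms, campaign tags) are examples rather than a definitive list, and should be adjusted to the sites you actually link to.

```python
# A minimal sketch: drop query parameters that tend to be ephemeral before publishing a link.
# The parameter names below are illustrative, not a definitive list.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

EPHEMERAL_PARAMS = {"sessionid", "sid", "q", "search", "utm_source", "utm_medium", "utm_campaign"}

def clean_url(url):
    """Return the URL with likely-ephemeral query parameters removed."""
    parts = urlsplit(url)
    kept = [(key, value) for key, value in parse_qsl(parts.query, keep_blank_values=True)
            if key.lower() not in EPHEMERAL_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), parts.fragment))

print(clean_url("https://example.com/report.pdf?sessionid=abc123&utm_source=newsletter"))
# https://example.com/report.pdf
```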

This phenomenon has its own name, “link rot,” and it’s far more than just an occasional annoyance to individual users.

Nerdy but important context

A 2013 study in BMC Bioinformatics looked at the lifespan of links in the scientific literature — a place where link persistence is crucial to public knowledge. The scholars, Jason Hennessey and Steven Xijin Ge of South Dakota State University, analyzed nearly 15,000 links in abstracts from Thomson Reuters’ Web of Science citation index. They found that the median lifespan of Web pages was 9.3 years, and just 62% were archived. Even the websites of major corporations that should know better — including Adobe, IBM, and Intel — can be littered with broken links.

A 2014 Harvard Law School study looks at the legal implications of Internet link decay, and finds reasons for alarm. The authors, Jonathan Zittrain, Kendra Albert and Lawrence Lessig, determined that approximately 50% of the URLs in U.S. Supreme Court opinions no longer link to the original information. They also found that in a selection of legal journals published between 1999 and 2011, more than 70% of the links no longer functioned as intended. The scholars write:

[As] websites evolve, not all third parties will have a sufficient interest in preserving the links that provide backwards compatibility to those who relied upon those links. The author of the cited source may decide the argument in the source was mistaken and take it down. The website owner may decide to abandon one mode of organizing material for another. Or the organization providing the source material may change its views and “update” the original source to reflect its evolving views. In each case, the citing paper is vulnerable to footnotes that no longer support its claims. This vulnerability threatens the integrity of the resulting scholarship.

To address some of these issues, academic journals are adopting digital object identifiers (DOIs), which provide both persistence and traceability. But as Zittrain, Albert and Lessig point out, many people who produce content for the Web are likely to be “indifferent to the problems of posterity.” The scholars’ solution, supported by a broad coalition of university libraries, is perma.cc — the service takes a snapshot of a URL’s content and returns a permanent link (known as a permalink) that users employ rather than the original link.
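A DOI works because the doi.org resolver redirects the identifier to wherever the publisher currently hosts the article, so the citation survives even when the underlying URL changes. The sketch below shows that lookup; it uses the third-party requests library, and the DOI is simply one cited in an earlier roundup in this document, used here as an example.

```python
# A sketch of following a DOI to the publisher's current location for an article.
# Requires the third-party "requests" package; the DOI is used purely as an example.
import requests

def resolve_doi(doi):
    """Return the URL that doi.org currently redirects this DOI to."""
    resp = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    resp.raise_for_status()
    return resp.url

print(resolve_doi("10.1016/j.jpubeco.2015.02.003"))
```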

Resources exist to preserve a comprehensive history of the Web, including the Internet Archive’s Wayback Machine. This service takes snapshots of entire websites over time, but the pages and data preserved aren’t always consistent and comprehensive, in part because many sites are dynamic — they’re built on the fly, and thus don’t exist in the way that classic HTML pages do — or because they block archiving.
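When a link has already rotted, the Internet Archive offers a public availability endpoint that reports the closest snapshot it holds for a given URL, if any. Below is a hedged sketch, again using the requests library; the example URL is hypothetical, and a missing snapshot simply comes back as None.

```python
# A sketch of asking the Internet Archive whether it holds a snapshot of a URL.
# Requires the third-party "requests" package; the example URL is hypothetical.
import requests

def find_snapshot(url, timestamp=None):
    """Return the closest archived snapshot URL, or None if the archive has nothing."""
    params = {"url": url}
    if timestamp:  # optional YYYYMMDD string to bias the lookup toward a date
        params["timestamp"] = timestamp
    resp = requests.get("https://archive.org/wayback/available", params=params, timeout=10)
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

print(find_snapshot("https://example.com/old-report", "20130101"))
```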

The Hiberlink project, a collaboration between the University of Edinburgh, the Los Alamos National Laboratory and others, is working to measure “reference rot” in online academic articles, and also to what extent Web content has been archived. A related project, Memento, has established a technical standard for accessing online content as it existed in the past.

Linking best practices

As of October 2015, the Journalist’s Resource website had more than 12,000 internal and external links — and we’re a tiny site compared to many. We use a WordPress extension to regularly check our links, and 10 or more can break every week — our own little universe of link rot. Many of these are caused by sites that update their design or infrastructure, PDFs that move, press releases that expire, and so forth. While there’s nothing we can do about many of these changes, by carefully choosing when and how to link, we try to minimize the odds that we’ll be affected. Every media organization should do this — the cost in time and resources is minimal, and the long-term benefits for both organizations and users can be substantial.
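The specific WordPress extension isn’t the point; the underlying check is simple enough to sketch on its own. The snippet below is illustrative rather than what this site actually runs: it requests each outbound URL and flags anything that no longer resolves cleanly. A production checker would also throttle its requests, retry transient failures and keep a history of results.

```python
# A minimal link-rot checker: flags outbound URLs that no longer resolve cleanly.
# Illustrative only; a real checker should throttle, retry and log more carefully.
import requests

def check_links(urls):
    """Yield (url, status) pairs, where status is an HTTP code or an exception name."""
    for url in urls:
        try:
            # Some servers reject HEAD requests, so fall back to a lightweight GET.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=10, stream=True)
            yield url, resp.status_code
        except requests.RequestException as exc:
            yield url, type(exc).__name__

for url, status in check_links(["https://example.com/", "https://example.com/missing-page"]):
    if status != 200:
        print(f"Needs attention: {url} -> {status}")
```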

Below are some suggested linking “best practices,” with an emphasis on stability and transparency rather than search-engine optimization and page ranking. The goal is to reduce the probability that outbound links will go bad, minimize your work going forward and maximize your site’s long-term utility to users. Of course, in many journalistic situations — breaking news, Twitter and live blogs, for example — the calculus necessarily changes. Speed counts, and the resources to which you link may intrinsically be ephemeral.

As time allows, however, keep the basic philosophy in mind. To paraphrase the author Michael Pollan and his famous rules for a good diet, it can be summed up in a simple mantra: “Useful links. Stable sources. Be transparent.”

1. Include only essential links.

  • Every link has the potential to go bad over time, and the more you put in, the higher the chance that one will break. If something is not central to the subject at hand and the information can be easily found with a simple Web search — institutional websites, well-known individuals, and so forth — there’s no point in linking. Doing so only increases your risk.
  • For your users’ sake, don’t link too much. If you have a forest of links in your writing, it can become difficult to know what to click on — what may be behind a link, or why it’s even there. Choose your links carefully and strategically.

2. Ensure that links are clearly visible, yet don’t obscure your text.

  • Single words (“told,” “study,” “reasons”) are too easy to overlook, yet linking entire phrases can be distracting and come off as overly emphatic. Link text of two to five words works well.
  • The link color and style should be distinct from unlinked text, but not overshadow it completely. Keep Web accessibility for all in mind.

3. Choose linking text carefully.

  • The link text should let users know what they’ll find if they click. Options include nouns with some descriptive information (“2014 Yale study”), a person and an active verb (“Micah Sifry wrote”) or an interesting statistic (“97% of social scientists”). This also helps demonstrate accuracy and openness, as Oxford’s Reuters Institute put it in a 2014 report.
  • Avoid structures such as “A new University of Pittsburgh study (link here) reveals the incidence of concussions among younger football players.” The insertion just slows down readers, and at this point in the Internet’s evolution, people know what a hyperlink looks like. That said, if this is your style, be consistent.
  • Avoid stacking links tightly in a sentence — for example, “Three new studies provide a research perspective on concussions in sports.” This can work for insider coverage of issues that have received extensive online attention, when you need to pack in a lot of links, but the chance of reader confusion is significant.
  • To better indicate content, you can use hover text that appears when users mouse over a link. However, you should be thoughtful and consistent about this — go all in, or avoid hover text.
  • A side-benefit of informative link and hover text is that if the URL goes bad later on, you have information that will simplify the search for the lost content — you know what to look for.

4. URL and content stability is essential — except when ephemerality is part of the story.

  • Unless you’re covering breaking news, try to avoid linking to anything that might go away — personal or short-term project websites that may disappear, draft versions of documents or press releases. Fast-moving stories sometimes require linking to content that could be taken down or modified; in those cases, use tools that monitor link validity so you can catch and fix breakage quickly.
  • Link to primary sources whenever possible, unless the secondary source is central to your coverage. For example, if you’re writing about a new U.N. report, link directly to it. However, if you’re dissecting how the report has been misinterpreted, you’ll want to link to both the primary document and what you see as faulty coverage.
  • Because of concerns about Wikipedia’s accuracy, reliability and potential for bias, link to the site only when it’s the subject at hand. If you do choose to link to a page, click on “cite this page” and use the “permanent link” displayed. This will lead to a snapshot of that particular version of the Wikipedia page, unaffected by subsequent edits.
  • When you have a choice of sites to which to link, choose stability. For example, at Journalist’s Resource we tend to favor PubMed, even if its user interface (UI) isn’t the snazziest. Beyond its 24 million citations and counting, it’s part of the National Institutes of Health and is going to be around for a good long time.
  • If you’re linking to scholarly content, beware of drafts on authors’ websites. They may be openly accessible, unlike the versions at many academic journals, but they aren’t the final content, and you owe it to your readers to point them to the real thing.
  • If you’re linking to an academic paper with a DOI number, consider using that (the domain to use is “http://doi.org/”, followed by the DOI number). Persistent URLs (PURLs) also offer greater longevity, but there’s some debate over the wisdom of using them for archival purposes.
  • For major reports that are regularly updated — say, the State Department’s work on human trafficking — link to the report landing page rather than specific documents (more on this below). This way your link will continue to work even as documents and sub-pages change. On the other hand, if you’re referencing a particular statistic or fact, don’t link to a generic page with content that might change. Instead, find a source that is both specific and stable.

5. Whenever possible, link to pages rather than PDFs.

  • Many online resources are present in both Web page and PDF form — for example, the Reuters Institute paper, “Accuracy, Independence and Impartiality: How Legacy Media and Digital Natives Approach Standards in the Digital Age,” has a landing page and is also available in a full-text PDF. Given this choice, go for the landing page. This allows users to quickly assess the content without having to download it, and also offers the option of an executive summary.
  • Landing pages are generally more stable than PDFs. Because the latter are documents, they tend to be renamed or move around on websites. They can also be updated, potentially invalidating the reason for your original link, yet this won’t necessarily be indicated to you or your users.
  • PDF filenames are more likely to contain characters considered “unsafe” in URLs — commas, spaces, accented characters and so on. While these are automatically translated to Web-safe codes (more information below), they can impact link reliability.
  • If a PDF is large, the required download can cause browsers to time out. PDFs also depend on specific software being installed on users’ computers. Yes, most people have Adobe Reader and compatibility is built into many browsers, but you can’t count on that.
  • PDFs can contain copyrighted material, and linking directly to them might raise legal issues (more on this below). They may also be behind paywalls.
  • If you do choose to link directly to a PDF, it can be helpful to signal this to users: “A new Scholars Strategy Network post on the immigration crisis (PDF) sheds light on some persistent myths,” for example. This is a matter of local style, however, and whatever approach you choose, be consistent.

6. Always look for the most compact and direct URL available, and ensure that it’s clean, with no unnecessary information after the core of the URL. (This process is often referred to as “URL normalization” or “URL canonicalization.”)

  • If a URL contains an “.html” or other Web page extension, in most cases anything thereafter can and should be removed — it’s just dead weight and could, down the line, break a link that’s actually good. Verify that the slimmed URL works and if so, use that for your hyperlink.
  • When there’s a “?” character in a URL, check whether it and everything thereafter is mandatory for the link’s functioning. For example, in http://live-journalists-resource.pantheonsite.io/ballot-framing?utm_source=JR-email, everything from “?” on can and should be deleted. Note that what follows the “?” sometimes carries post or category information; that’s fine, and you’ll at least have verified that it’s required rather than, say, useless search terms or tracking codes.
  • When a URL carries multiple parameters (the first after the “?”, the rest joined by “&”), you can often “peel back” the URL, progressively removing unnecessary elements from the end until you get down to the smallest and most stable link possible (see the sketch at the end of this list).
  • Exercise caution with links that have “%” in them — the symbol precedes codes that replace characters considered “unsafe” for URLs. For example, a PDF document named “skating basics.pdf” would appear as “skating%20basics.pdf” because spaces are not valid in URLs. While URLs with such codes may function, they can be unstable in the long run.
  • Watch out for URLs that contain references to resources that may not be universally accessible — Google Drive, for example, or login or session information. These could work perfectly well for you, but could fail for others. If you do see such information encoded in a URL, use Google or another search engine to find a direct path to the desired content.
  • Do some research before linking deeply into websites (this is called deep linking, and is distinct from the identically named but unrelated practice in mobile applications). Long URLs are intrinsically more vulnerable, and you could be inadvertently violating copyright or jumping over paywalls.
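
To make the trimming concrete, here is a rough Python sketch of the “peel back” step described above. It strips a few common tracking parameters (the utm_ prefix matches the example URL earlier in this section; fbclid and gclid are additional illustrative names) and drops any fragment. Whether a particular parameter can be removed safely still has to be verified by hand.

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    # Parameter prefixes treated here as tracking codes; adjust the list for your own links.
    TRACKING_PREFIXES = ("utm_", "fbclid", "gclid")

    def trim_url(url):
        parts = urlsplit(url)
        kept = [(key, value) for key, value in parse_qsl(parts.query)
                if not key.lower().startswith(TRACKING_PREFIXES)]
        # Rebuild the URL without the dropped parameters and without any fragment.
        return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

    print(trim_url("http://live-journalists-resource.pantheonsite.io/ballot-framing?utm_source=JR-email"))
    # Prints: http://live-journalists-resource.pantheonsite.io/ballot-framing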

7. Avoid link-shorteners, with two exceptions.

  • Bitly, TinyURL and other such services are essential for Twitter and other contexts where URL length is tightly constrained. However, for text hyperlinks they should be avoided. While they produce a compact link, it’s no more stable than the underlying URL it contains — garbage in, garbage out, as the coders say. You’re also dependent on a third party’s maintaining your links, and that adds a layer of risk.
  • Perma.cc, as described in the introduction, both produces a permalink and archives the target content for at least two years; vetted organizations such as law journals and courts have the authority to make links truly permanent.
  • WebCite, a project of the University of Toronto and other organizations, provides a similar service to perma.cc, but is open to all.

8. Don’t link in a way that violates copyright or breaks through paywalls. While there are a lot of gray areas, do your absolute best to respect all laws and regulations.

  • For academic papers, link to the abstract page rather than the full-text or PDF version. With paywalled sites, you’re showing users where the content is while respecting copyright.
  • Link to abstracts even with open-access journals: abstract pages load quickly and let users judge whether to go on to the full text. This also protects you down the line if a study that’s initially free and accessible moves behind a paywall.
  • For media sites, respect paywalls, even if you can find the direct link to full content by using a search engine.
  • Exercise caution with links to YouTube and other media-sharing sites. Because videos are uploaded by users who may or may not have copyright, they can be taken down for infringement — don’t assume such links are permanent.
  • Avoid linking to documents on sites such as Academia.edu where the users’ right to upload content isn’t always clear.

9. Verify after publication and check your links at regular intervals.

  • Check all your links after you publish. Some content-management systems can manipulate URLs during the production process, and the end results may not work.
  • If possible, use an application or service that regularly checks the validity of your site’s links; a minimal example follows below. When you do find broken links, fix them promptly. Also be aware that valid links can, in a sense, be “broken” when the content you were originally pointing to changes without notice.
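
Here is a minimal sketch of such a check, using only Python’s standard library: it sends a HEAD request to each outbound URL and flags anything that errors or returns a 4xx/5xx status. A production checker would also need retries, rate limiting and fallbacks for servers that reject HEAD requests, so treat this as a starting point rather than a finished tool.

    import urllib.error
    import urllib.request

    def check_links(urls, timeout=10):
        broken = []
        for url in urls:
            request = urllib.request.Request(url, method="HEAD")
            try:
                urllib.request.urlopen(request, timeout=timeout)
            except urllib.error.HTTPError as error:
                broken.append((url, error.code))         # 4xx / 5xx responses
            except urllib.error.URLError as error:
                broken.append((url, str(error.reason)))  # DNS failures, timeouts, etc.
        return broken

    for url, problem in check_links(["https://journalistsresource.org"]):
        print(url, problem)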

10. As you’re building and maintaining your own blog or website, remember that other sites link to your content, and you want to keep those links alive.

  • Create landing pages for all individual PDF documents, rather than just a page of links to a series of PDFs.
  • If you do post PDFs, ensure that their names contain no unsafe characters — in particular, no commas, no spaces and no periods other than the one before the file extension. The same goes for all your URLs, naturally.
  • General-purpose pages can have generic URLs (http://live-journalists-resource.pantheonsite.io/about, for example), but specific content — articles, blog posts, dated reports and so on — should have distinct, long-lived URLs.
  • When content is superseded, consider keeping the original material with a note at the top pointing users to the new content.
  • If you must change a page’s URL, set up a quick redirect to send users from the old URL to the new one.
  • When a redesign or infrastructure upgrade requires wholesale changes to your URL structure, build in ways that allow inbound links to the old URLs to connect to the right content.
  • Ensure that your server can handle incorrect URLs with upper-case letters — for example, mysite.com/BigBlog should automatically redirect to mysite.com/bigblog; a sketch follows below. (Moz.com has a great post on URL best practices.)
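
How you set this up depends on your server or content-management system; a rewrite rule is usually enough. Purely as an illustration, here is a small Python sketch using the Flask framework (an assumption for the example, not the software behind a WordPress site like ours) that answers mixed-case paths with a permanent redirect to the lowercase equivalent.

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def redirect_uppercase_paths():
        # Send /BigBlog (and any query string) to /bigblog with a permanent 301 redirect.
        path = request.path
        if path != path.lower():
            query = request.query_string.decode("utf-8")
            target = path.lower() + ("?" + query if query else "")
            return redirect(target, code=301)

    @app.route("/bigblog")
    def bigblog():
        # Placeholder page so the redirect target resolves in this example.
        return "Big Blog archive"

    if __name__ == "__main__":
        app.run()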

At this point, you’ve done everything you can. Your outbound links will still have the digital rug pulled out from under them from time to time, but the risks will be minimized, and you’ll have a fighting chance at fixing them if they do break. For practitioners, educators and others interested in learning more about decoding URLs, we’ve put some sample exercises beneath the “Media/Analysis Tips” tab above.

______

Many thanks to David Weinberger and Jonathan Zittrain of Harvard’s Berkman Center for Internet & Society; Micah L. Sifry, the executive editor of the Personal Democracy Forum; Keely Wilczek and Valerie Weis of the Harvard Kennedy School Library; and Evan Horowitz of the Boston Globe for their invaluable suggestions and input for this article.

Keywords: technology, link rot, linkrot, resource rot, hyperlinking, uniform resource identifier (URI), uniform resource locator (URL), uniform resource name (URN), canonical URLs, SEO

The post The growing problem of Internet “link rot” and best practices for media and online publishers appeared first on The Journalist's Resource.

]]>
Long-term perspective on wildfires in the western United States https://journalistsresource.org/environment/long-term-perspective-wildfires-western-usa/ Mon, 14 Sep 2015 20:51:00 +0000 http://live-journalists-resource.pantheonsite.io/?p=17501 2012 study in PNAS on the frequency of wildfires in the American West over the past 3,000 years and the implications for larger wildfires.

The post Long-term perspective on wildfires in the western United States appeared first on The Journalist's Resource.

]]>

As Anglo-American settlers moved across North America, they had a significant impact on the land, clearing trees, expanding agriculture and building towns. As settlement expanded, forest fires — once an integral part of the natural world — were systematically suppressed. This practice had consequences: when fires did break out, they tended to be larger and more severe.

In recent decades, Western wildfires have become larger and more severe, and scientists note that this is often related to accumulated materials — fuel loads — that historically were burned off in smaller blazes. Indeed, the combination of human suppression and changes in climate could make for an increasingly volatile mix in the coming years, and debate continues over best practices for forest management. The cost of fighting fires is high — the federal government spent more than $1.5 billion in 2014 alone, according to the National Interagency Fire Center. The U.S. Forest Service released a report in August 2015 predicting that its costs will rise sharply through 2025.

To better understand long-term variations in wildfires in the American West, a group of researchers looked at 3,000 years of sedimentary charcoal accumulation rates — obtained from the Global Palaeofire Working Group at the University of Bristol, U.K. — and matched them to fire scars and historical accounts of blazes. The results of the 2012 study, “Long-term Perspective on Wildfires in the Western USA,”  were published in the journal Proceedings of the National Academy of Sciences.

The area studied includes the U.S. states of Washington, Oregon, California, Idaho, Nevada, Montana, Wyoming, Colorado, New Mexico and Arizona.

Key findings include:

  • Burning in the American West declined slightly over the past 3,000 years, with low levels between 1400 and 1700 CE and throughout the 20th century. Peaks in burning occurred between 950 and 1250 CE and then again during the 1800s CE.
  • Fire activity was historically highest around 1000, 1400 and 1800 CE. The rise in fires around 1000 CE occurred when temperatures were high and drought conditions were widespread. Another increase occurred around 1400 CE, when drought conditions intensified rapidly.
  • Humans began to have a significant impact on fires in the 1800s. During the expansion of Anglo-American settlement in the 19th century, evidence of burning increased. In the 20th century the trend reversed, due in part to nationwide changes in fire-suppression practices. Since then, “fire activity has strongly diverged from the trend predicted by climate alone and current levels of fire activity are clearly out of equilibrium with contemporary climate conditions.”
  • Burning is currently at its lowest level in the 3,000-year record. The previous minimum, between 1440 and 1700 CE, coincided with a decline in droughts and temperatures, which were at a 1,500-year low. Before the current era, fire activity reached its lowest level around 1500 CE, which corresponded with the collapse of several Native American populations.

The authors of the paper conclude: “The divergence in fire and climate since the mid 1800s CE has created a fire deficit in the West that is jointly attributable to human activities and climate change and unsustainable given the current trajectory of climate change.”

Related research: An August 2015 study that appeared in Ecological Economics looks at the economic impact associated with wildland fires as a result of climate change. A 2015 study published in the International Journal of Wildland Fire explores the likelihood that climate change will increase the potential for very large fires (VLFs) in the U.S.


Tags: California, global warming, disasters

The post Long-term perspective on wildfires in the western United States appeared first on The Journalist's Resource.

]]>