A large study of intensive care patients in California found that public reporting of patient outcomes did not reduce mortality, but did result in reduced admission of the sickest patients to the ICU and increased transfer of critically ill patients to other hospitals.
“Public reporting is designed to reduce mortality by steering patients towards high-quality hospitals and creating incentives for hospitals to adopt quality improvement programs,” says Lora Reineck, MD, a postdoctoral scholar at the University of Pittsburgh. “But the reality does not necessarily meet the expectation.”
In fact, adds Reineck, the study demonstrated that large policy initiatives like California’s can have unintended consequences. Whether those consequences left patients worse or better off is not a question the study was designed to answer.
“It could be that some hospitals didn’t take certain surgical cases, fearing that the patients were at high risk of dying in intensive care after surgery,” Reineck explains. “Or, it could be that the very sickest patients were not admitted to the ICU because they were not going to get better, and instead were transitioned to care that emphasized comfort rather than prolonging life.”
In their retrospective cohort study, presented at the 2014 American Thoracic Society International Conference, Reineck and Pitt colleagues took advantage of a natural experiment created in 2007 when California required every ICU in the state to report severity-adjusted mortality rates. (California has since stopped collecting and reporting this information, and no other state has launched a similar initiative.)
As a control, the Pitt researchers looked at Arizona, Nevada and Texas, which did not have a public reporting requirement. In all four states, the group gathered Medicare fee-for-service data from 2005 to 2009, spanning the two years before California initiated its program and the two years after it launched.
In all, they reviewed the records of 936,063 patients admitted to 646 hospitals. To determine the effect of public reporting, they compared changes over time in California ICUs to control ICUs.
The researchers analyzed admission patterns in each ICU by determining the proportion of patients with three or more comorbidities and the proportion of patients needing mechanical ventilation. They reviewed discharge patterns by examining where ICU patients were sent when they left the hospital. And they assessed mortality rates, including deaths that occurred in the hospital and those that occurred within 30 days of hospital admission.
In their analysis, the team adjusted for patient and hospital characteristics, as well as regional characteristics that could affect patient outcomes. These regional factors included the age and racial distribution of residents and the number of hospitals in the region, a statistic that could affect how often critically ill patients were transferred to another ICU.
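The core of this design is a difference-in-differences comparison: the change over time in California ICUs is measured against the change over the same period in control-state ICUs, so that secular trends affecting all states cancel out. A minimal sketch of that logic, using made-up illustrative proportions (not the study's actual data or its adjusted odds-ratio models):

```python
# Difference-in-differences sketch: treated-group change minus
# control-group change. Numbers below are hypothetical, chosen only
# to illustrate the comparison the Pitt team made.

def diff_in_diff(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Return the change in the treated group net of the control trend."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Hypothetical proportion of ICU admissions with 3+ comorbidities
ca_pre, ca_post = 0.40, 0.38        # California, before/after 2007 reporting
ctrl_pre, ctrl_post = 0.40, 0.40    # AZ/NV/TX controls, same years

effect = diff_in_diff(ca_pre, ca_post, ctrl_pre, ctrl_post)
print(round(effect, 3))  # a negative value means relatively fewer sick
                         # patients admitted in CA after reporting began
```

The actual study estimated these contrasts as odds ratios from regression models adjusted for patient, hospital, and regional characteristics, but the subtraction above captures the identifying comparison.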
In the first year after public reporting began, California ICUs experienced virtually identical changes in in-hospital and 30-day mortality rates as controls (OR 1.00, p=0.97; OR 1.00, p=0.84, respectively).
However, in that first year, both admission and discharge patterns showed statistically significant changes. Patients with three or more comorbidities were less likely to be admitted to California ICUs than control ICUs (OR 0.98, p=0.03). They were also more likely to be transferred to another acute-care facility (OR 1.08, p=0.03).
In the second year of public reporting, these patterns continued. Changes in mortality rates in California were similar to controls (in-hospital mortality OR 0.99, p=0.72; 30-day mortality OR 0.99, p=0.55). Admission of patients with three or more comorbidities was again reduced compared to controls, though the difference was not statistically significant (OR 0.98, p=0.13). And critically ill patients were more likely to be transferred to another hospital than in control states, a pattern more pronounced in year two than in year one (OR 1.43, p<0.001).
The Pitt researchers also looked at the number of patients who were transitioned to skilled nursing homes or long-term care facilities. They hypothesized that hospitals would try to improve their mortality rates by sending patients to these facilities.
“We expected that the proportion of patients discharged to these post-acute care facilities would have increased, but it didn’t,” Reineck says.
Reineck noted that a similar, but much smaller, study conducted in Cleveland, OH, supported their original hypothesis. The study, which analyzed the effect of publicly reporting ICU in-hospital mortality rates during the 1990s, found that discharges to post-acute care facilities increased while in-hospital mortality decreased. (Sirio CA, Shepardson LB, Rotondi AJ, et al. Community-wide assessment of intensive care outcomes using a physiologically based prognostic measure: implications for critical care delivery from Cleveland Health Quality Choice. Chest 1999; 115:793-801.)
Reineck also noted that initial media stories in California erroneously credited public reporting of ICU mortality rates with reducing mortality.
“That turned out not to be true,” she says. “We determined that mortality rates decreased in the control states at the same rate as California, meaning that the quality of care of all ICU patients improved during the study period irrespective of the public reporting initiative in California.”
Reference: Abstract 50098: The Impact Of Publicly Reporting Intensive Care Unit (ICU) In-Hospital Mortality On ICU Case-Mix And Outcomes. Authors: L.A. Reineck, T.Q. Le, C.W. Seymour, A.E. Barnato, D.C. Angus, J.M. Kahn; University of Pittsburgh - Pittsburgh, PA/US
Source: American Thoracic Society (ATS)