Public Reporting of Infections and the CLABSI Mandate

Beginning Jan. 1, 2011, the CDC's National Healthcare Safety Network (NHSN) will be the tool used by facilities electing to participate in the CMS HAI IPPS Hospital Inpatient Quality Reporting Program.

By Kelly M. Pyrek

Beginning Jan. 1, 2011, the CDC's National Healthcare Safety Network (NHSN) will be the tool used by facilities electing to participate in the CMS HAI IPPS Hospital Inpatient Quality Reporting Program. As part of that program, central line-associated bloodstream infection (CLABSI) data from each facility's adult and pediatric intensive care units and neonatal intensive care units will be reported to NHSN and shared with CMS. Each facility's data will be included in CMS' Hospital Compare tool, which publicly reports hospital performance.

By January 2011, as outlined in the FY 2011 final rule, Medicare will require all hospitals participating in the Inpatient Prospective Payment System (IPPS) to:

- Enroll in the NHSN

- Collect and report data on specific CLABSIs

Those that do not meet these requirements by January 2011 and do not submit data to CDC/NHSN will be subject to a 2 percentage point reduction in their Medicare inpatient annual payment update for FY 2013.

According to the CDC, approximately 3,000 hospitals are registered with NHSN, while CMS data show that almost 3,500 hospitals are currently subject to the pay-for-reporting requirements. A recent survey conducted by the Premier healthcare alliance shows that hospitals are making progress in preparing for these reporting requirements. According to this survey of participants in Premier's Aug. 25 Advisor Live teleconference, "Using NHSN for new CMS requirements," nearly 60 percent are already enrolled in the NHSN and 41 percent of respondents are both enrolled in NHSN and currently submitting CLABSI data. When asked what poses the greatest challenge for CLABSI reporting, of the nearly 1,000 respondents:

- 44 percent said experience and proficiency in using NHSN

- 41 percent said enrolling in NHSN by January 2011

- 16 percent said collecting denominator data

The teleconference featured Premier's Danielle Lloyd, senior director for reimbursement policy, and Daniel Pollock, MD, surveillance branch chief in the CDC's Division of Healthcare Quality Promotion, who discussed the use of NHSN to meet the new CMS reporting requirements for CLABSIs. According to Lloyd, "The final Medicare policies incorporate numerous quality measures that hospitals should understand to avoid payment cuts. The ability to focus on evidence-based care, compare against others, work with physicians, make changes in performance and use appropriate coding will help hospitals steer clear of these cuts while advancing patient safety and the quality of care they provide."

Experts say this new partnership with CMS creates greater transparency of healthcare-associated infection (HAI) data and builds stronger accountability within the U.S. healthcare system. It also highlights the importance of strong facility-level support for infection prevention programs and professionals.

"While not included in this IPPS rule, healthcare reform legislation also imposes payment penalties for certain hospital-acquired infections as part of value-based purchasing (VBP), and a separate penalty for hospital-acquired conditions," says Salah S. Qutaishat, PhD, director of surveillance and epidemiology at Premier. "It is imperative that hospitals focus on these infections and get rates as close to zero as possible, as performance data collected through NHSN could be incorporated into the reform policies in the future, and NHSN data will be made public on CMS Hospital Compare Web site."

Surgical site infections reporting begins in 2012 with associated payment impact effective in 2014. Additional measures will likely be introduced in the future.

Quality of Data Sparks Concerns

There is currently some concern in the infection prevention and healthcare epidemiology communities about the quality of the bloodstream infection data being reported. A recent study in JAMA indicates that this quality may be affected by variation in surveillance methods across institutions.

The authors write, "Public reporting of hospital-specific infection rates is widely promoted as a means to improve patient safety. Central line-associated bloodstream infection (BSI) rates are considered a key patient safety measure because such infections are frequent, lead to poor patient outcomes, are costly to the medical system, and are preventable. Publishing infection rates on hospital report cards, which is increasingly required by regulatory agencies, is intended to facilitate inter-hospital comparisons that inform healthcare consumers and provide incentive for hospitals to prevent infections. Inter-hospital comparisons of infection rates, however, are valid only if the methods of surveillance are uniform and reliable across institutions."

Michael Y. Lin, MD, MPH, of Rush University Medical Center in Chicago, and colleagues conducted a study to assess institutional variation in performance of traditional central line-associated BSI surveillance. The study included 20 intensive care units among four medical centers (2004-2007). Unit-specific central line-associated BSI rates were calculated for 12-month periods. Infection preventionists, blinded to study participation, performed routine prospective surveillance using Centers for Disease Control and Prevention (CDC) definitions. A computer algorithm reference standard was applied retrospectively using criteria that adapted the same CDC surveillance definitions.
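
The study's actual computer algorithm is not described in detail here, but the general shape of such a rule-based reference standard can be illustrated. The sketch below, in Python, is a deliberately simplified and hypothetical case-finding rule: it flags a bloodstream infection as central line-associated only when a recognized pathogen grows from blood drawn while a central line has been in place for at least two days and no other documented infection site explains the culture. The field names, organism list and two-day threshold are illustrative assumptions, not the criteria used by Lin and colleagues or the full NHSN definition.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical, simplified record of one positive blood culture.
# Field names and thresholds are illustrative assumptions only.
@dataclass
class BloodCulture:
    culture_date: date
    organism: str
    line_inserted: Optional[date]        # date a central line was placed, if any
    line_removed: Optional[date]         # None if still in place on the culture date
    other_infection_sites: List[str] = field(default_factory=list)  # documented non-line sources

# A few organisms commonly treated as skin contaminants (illustrative, not exhaustive).
COMMON_SKIN_CONTAMINANTS = {"coagulase-negative staphylococci", "corynebacterium", "bacillus"}

def flags_as_clabsi(c: BloodCulture) -> bool:
    """Return True if this culture would be counted as a CLABSI by the simplified rule."""
    # Rule 1: a central line must have been in place for at least 2 days at culture time.
    if c.line_inserted is None:
        return False
    line_days = (c.culture_date - c.line_inserted).days
    still_in = c.line_removed is None or c.line_removed >= c.culture_date
    if line_days < 2 or not still_in:
        return False
    # Rule 2: ignore likely skin contaminants (a single positive culture of these
    # would need corroborating evidence that this sketch does not model).
    if c.organism.lower() in COMMON_SKIN_CONTAMINANTS:
        return False
    # Rule 3: the BSI must not be attributable to another documented infection site.
    if c.other_infection_sites:
        return False
    return True
```

A real surveillance algorithm would need considerably more nuance, such as repeat-culture rules for common skin contaminants and the secondary-BSI attribution questions discussed later in this article, which is where much of the judgment, and therefore much of the potential variation, lies.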

Twenty ICUs in four medical centers contributed 41 twelve-month unit periods, representing 241,518 patient-days (total number of days beds were occupied by patients in the ICUs during the study period) and 165,963 central line-days (total number of days patients had a central line in place in the ICUs during the study period). Across all unit periods, the median infection preventionist-measured central line-associated BSI rate was 3.3 infections per 1,000 central line-days. The median rate determined by the computer algorithm was 9.0 per 1,000 central line-days. When unit periods were analyzed in aggregate across medical centers, overall correlation between computer algorithm and infection preventionist rates was weak. When stratified by medical center, the researchers found that the point estimates of the correlations varied widely. Additional analysis demonstrated significant variation among medical centers in the relationship between computer algorithm and expected infection preventionist rates. "The medical center that had the lowest rate by traditional surveillance (2.4 infections per 1,000 central line-days) had the highest rate by computer algorithm (12.6 infections per 1,000 central line-days)," the authors write.
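
For readers unfamiliar with the metric, the rates quoted above are simple ratios: the number of CLABSIs divided by the number of central line-days, scaled to 1,000 line-days. The short calculation below uses made-up numbers, not data from the study, to show how the same denominator with different case counts produces the kind of 3.3-versus-9.0 gap described above.

```python
def clabsi_rate(infections: int, central_line_days: int) -> float:
    """CLABSI rate expressed per 1,000 central line-days."""
    return infections / central_line_days * 1000

# Illustrative numbers only, not taken from Lin et al.:
# the same unit and the same denominator, with cases counted two different ways.
line_days = 10_000
print(clabsi_rate(33, line_days))  # 3.3 infections per 1,000 central line-days
print(clabsi_rate(90, line_days))  # 9.0 infections per 1,000 central line-days
```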

The authors add, "In this study, we found strong evidence of institutional variation in central line-associated BSI surveillance performance among medical centers. Inconsistent surveillance practice can have a significant effect on the relative ranking of hospitals, which threatens the validity of the metric used by both funding agencies and the public to compare hospitals. As central line-associated BSI rates gain visibility and importance (in the form of public report cards, infection reduction campaigns such as Getting to Zero, and financial incentives for reducing rates by private insurers and the Centers for Medicare & Medicaid Services), we should seek and test surveillance measures that are as reliable and objective as possible."

Russ Olmsted, MPH, CIC, president-elect of the Association for Professionals in Infection Control and Epidemiology (APIC), says, "APIC leadership and its headquarters personnel have been aware of the issue raised by the recent study in JAMA by Lin et al. -- specifically the importance of validation of findings from surveillance of HAIs -- for some time. This issue, and training to address accurate application of CDC's NHSN criteria for HAIs, has been a core mission of APIC. Examples of education and training can be found in APIC's courses for infection preventionists (IPs), including the Education for the Prevention of Infection (EPI) courses, its annual international conference, and Infection Prevention in Ambulatory Care Settings. APIC's 24/7 presence via APIC Anywhere and its regular 'Wednesday Webinars' are additional avenues through which the association is working to support the accuracy and precision of surveillance of HAIs. More needs to be done, but much has already been established."

In a statement, APIC says it shares the concerns expressed in the JAMA study, but emphasizes the need for infection preventionists to ask whether the computerized algorithms used to detect CLABSIs are able to discern whether the infection could be attributed to another source, such as a urinary tract infection. According to APIC, this is a nuance that infection preventionists are trained to detect and may account for some of the variances observed in the study. Another challenge is distinguishing contaminants from true infections, which may also account for some of the variation.

The findings of the recent JAMA study are consistent with a separate study published in APIC's American Journal of Infection Control (AJIC) by Matthew Niedner, MD, who writes, "With increased mandatory public reporting of catheter-associated bloodstream infections and the insurance ramifications of such 'never events,' the inter-institutional variability introduced by surveillance techniques warrants further scrutiny, both to improve public health through accurate measurement and to reduce the possibility of gaming the system or being punitive to centers exercising diligence."

Olmsted adds, "As highlighted in the association's press release on this study in JAMA, we sense the exact rate of CLABSI lies somewhere between the cases identified by IPs and those identified by surveillance technology such as a computerized algorithm. For example, the latter does not have the additional insight that an IP can glean from review of a patient's medical record, which may reveal that the source of a BSI is secondary to another site of infection such as an intra-abdominal abscess or a UTI. In addition, relatively few healthcare facilities have established methods that could more precisely identify the source of a patient's BSI. Specifically, there are methods that clinical microbiology laboratories can deploy, including semi-quantitative culture of central line segments, differential time to positivity for sets of blood cultures, and quantitative blood cultures. However, all of these depend on clear information and aseptic collection of specimens by personnel in patient care units. In the busy clinical setting these key steps are often missed. The laboratory can therefore only work with the information provided by clinicians. So some of the lack of precision identified by this JAMA study relates to these missed opportunities for better diagnostic information." Olmsted continues, "In addition, almost all of the state-based HAI prevention initiatives that were supported by ARRA funding include a validation component. Further, with state-based legislation for public reporting, validation of cases of HAI such as CLABSI is incorporated into these systems. Notable examples of core validation as part of public HAI reporting include programs in the states of New York and South Carolina."

APIC says the inconsistencies identified in these studies reflect the challenges inherent in HAI surveillance. Even when supported by the Centers for Disease Control and Prevention's (CDC's) standard infection criteria and definitions, as available via the National Healthcare Safety Network (NHSN), complex medical cases often require in-depth analysis and case-by-case review to determine whether the infection meets the CDC definition criteria. Inconsistencies in data may reflect differences in the methods used for case finding and HAI identification. Facilities performing more intensive surveillance would likely report a higher HAI rate due to a more precise measurement system.

However, inconsistencies may also indicate incomplete or inaccurate surveillance efforts, or different ways of applying the same surveillance criteria. Therefore, the detection of inconsistencies in HAI data reinforces the increasingly urgent need for data validation. APIC, a champion of public HAI reporting, strongly supports validation of data, which must include both internal and external review of infection data. This will help ensure that infection rates accurately support both comparisons among facilities and informed decision-making by consumers. Fortunately, funding provided by the American Recovery and Reinvestment Act of 2009 is currently supporting data validation studies in several states; some of these include direct engagement of APIC. These projects will help direct future efforts to assure accuracy and comparability in state and national HAI statistics.

APIC does not support sole reliance on other sources of data, such as administrative or claims data, as these are even less precise than surveillance data collected by trained infection preventionists. Under no circumstances, according to APIC, should the recent JAMA study be used to support the use of claims or administrative data over surveillance data. Meanwhile, APIC says hospital administrators can best ensure accurate HAI rates by building a robust infection prevention infrastructure. The optimum infrastructure will likely combine electronic surveillance technology with appropriate staffing of infection preventionists and other personnel well-qualified to manage its use in complex and often challenging clinical situations. Engagement by leadership is essential to the success of all infection prevention programs to ensure the most accurate information for the public, as well as to provide optimal patient care. APIC has developed a Program Evaluation Tool to assist its members and their affiliates with identification of adequate infrastructure to assure a properly resourced infection prevention program. In addition, APIC, through its scientific journal AJIC, publishes case studies to improve the application of NHSN criteria for its members.

APIC says it is imperative that all infection prevention stakeholders work together to determine the true incidence of HAIs in order to reward good performance and allow consumers to make informed choices about their healthcare.

Olmsted acknowledges that this reporting requirement adds to an already-busy IP's workload. "With the concerns and attention placed on HAIs by almost every facet of society -- patients, consumer advocates, providers, payors, accreditation agencies, public health, and regulatory agencies -- the demands on the IP's time have grown exponentially," Olmsted says. "This means time management is a critical skill that IPs have to deploy, as their subject matter expertise can have dramatic impact if the IP can be supporting the care team at the bedside instead of behind their computer workstation. Some strategies to address the innumerable and growing demands on the time of the IP are to refresh their facility-based risk assessment on a periodic basis, and at a minimum annually, and to identify areas of priority so attention and resources can be focused. APIC, with primary support from its members, has developed a tool to assess an IPC program. Vickie Brown is the lead author of this resource and more details are available at: http://www.apic.org/Content/NavigationMenu/Links/Publications/APICNews/Program_Evaluation_Tool-White_Paper.pdf

This tool will identify gaps between available resources and demands such as public reporting that can be shared with organizational leadership. There is ample evidence that, despite the growth in public release of HAI data, there has not been a commensurate increase in staffing effectiveness for the IPC program. This needs to be identified by the organization, along with at least some plan to begin to address this disconnect. Surveillance technology, such as that demonstrated by the study by Lin et al. in JAMA, holds promise to increase the efficiency of surveillance.

APIC has published a position paper on surveillance technology, and an extensive supplement published in AJIC's April 2008 issue contains many examples of the use of IT to assist the surveillance program."

CLABSI Definitions

Another concern being raised is an inadequate definition of CLABSI, according to Daniel J. Sexton, MD, from Duke University Medical Center in Durham, N.C. Sexton, along with researchers Luke F. Chen, MBBS, MPH, CIC, FRACP, and Deverick J. Anderson, MD, MPH, writes in a commentary in Infection Control & Hospital Epidemiology, "It appears that the surveillance definition for CLABSI from the Centers for Disease Control and Prevention (CDC) originally aimed to be highly sensitive to capture all cases of CLABSI. Unfortunately, its specificity was a secondary concern. In the past, when rates of CLABSI were higher in most American hospitals, hospital epidemiologists would review the CLABSI rate with the understanding that there was unavoidable 'data noise' because of the highly sensitive and poorly specific definition. This lack of specificity in the definition of CLABSI is an old problem that has moved to the forefront of our awareness for two important reasons. First, external pressures to improve safety outcomes are a concern for all hospitals. This pressure comes from national directives, mandated public reporting, and 'pay for performance' that is tied to specific process measures. Second, now that the incidence of CLABSI has substantially decreased, hospitals are realizing that 'getting to zero' (i.e., reducing the incidence to zero) is implausible, if not impossible, with the current definition of CLABSI. Additionally, a series of secondary problems and consequences emerge from flaws of the existing definition. These problems and consequences will continue or worsen unless the definition is modified."

Sexton, et al. (2010) point to a recent case at their hospital in which a patient was admitted to an ICU with hypotension and intra-abdominal injuries following a car accident. The patient received a central line and underwent two laparotomy procedures; surgeons found no macroscopic evidence of established infection and did not collect specimens for microbiologic testing. Cultures of blood drawn from peripheral veins one day after surgical repair grew Pantoea agglomerans and Proteus mirabilis. Sexton, et al. report that "infection preventionists applied the NHSN definition for CLABSI and were unable to label this patient's bloodstream infection (BSI) as secondary to another source, as there was no clear evidence of infection and no pus was encountered at any of the patient's surgeries. Furthermore, there were no pathogens cultured from specimens collected from any site to indicate that an infection arose from the gastrointestinal tract or abdomen, even though there were compelling clinical reasons to suggest that this had occurred. Our infection preventionists argued that failure to follow the 'CDC definitions' would make our hospital's data 'dishonest.' Moreover, failure to include this infection as a case of CLABSI would undermine their authority and hurt the morale of our infection control team." Sexton, et al. continue, "We believe that this case and many other cases of gram-negative bacteremia or fungemia in seriously ill patients in ICUs likely occur either because of translocation of microorganisms from edematous, abnormal, or adynamic segments of bowel or because of microscopic or macroscopic defects in the bowel wall. As in the case we describe, absence of proof (of an established infection at a site other than a central vascular catheter as a source of bacteremia or fungemia) cannot be cited as proof of absence. In fact, the origin of some episodes of bacteremia or fungemia is impossible to determine. To assign, by default, all such BSIs to a category of 'central line-associated' simply because a central line has been inserted is not only folly, it is also intellectually and operationally incorrect."

Sexton, et al. assert that, "Hospital epidemiologists and infection preventionists now face an unnecessary dilemma when adjudicating whether a patient with bacteremia has a CLABSI or a secondary BSI. As a result of the issues raised above, we suspect that criteria used to determine whether there is a secondary source of infection vary from hospital to hospital. Indeed, it is likely that some epidemiologists would conclude on the basis of circumstantial evidence that our patient who had Pantoea and Proteus organisms recovered from blood had bacteremia secondary to a peritoneal infection. However, other hospital epidemiologists might strictly apply existing criteria and come to the opposite conclusion. In fact, informal discussions with other hospital epidemiologists have led us to believe that such adjudication based on circumstantial evidence and intuition is commonly but nonuniformly applied at individual hospitals. Such unnecessary subjectivity undermines the utility and reliability of publicly reported data on rates of CLABSI. The current NHSN definition of CLABSI is also deficient because some microorganisms that are likely to be contaminants are considered to be pathogens. Specifically, we believe the inclusion in the definition of laboratory-confirmed BSI of the criterion that a single blood culture positive for enterococci can be considered recovery of a 'recognized pathogen' rather than a 'common skin contaminant' is incorrect. The time for a change is now, because the CLABSI rate has emerged as an important performance measure. This rate is widely used to perform time-trend analysis on performance within individual units and to compare performance between units within a hospital and between hospitals. Erroneous data that overestimate rates of CLABSI can damage morale and lead to false conclusions about quality of care. Erroneously attributing a BSI to a vascular device damages the credibility of our infection prevention team and undermines the credibility of all the data we collect and disseminate. Finally, incorrect data on rates of CLABSI decrease, if not eliminate, the utility of publicly reported data on the incidence of these infections."

References:

Lin MY, et al. Quality of Traditional Surveillance for Public Reporting of Nosocomial Bloodstream Infection Rates. JAMA. 2010;304[18]:2035-2041.

Sexton DJ, Chen LF and Anderson DJ. Commentary: Current Definitions of Central Line-Associated Bloodstream Infection: Is the Emperor Wearing Clothes? Infect Control Hosp Epidemiol 2010;31:1286-1289.

Niedner MF. The harder you look, the more you find: Catheter-associated bloodstream infection surveillance variability. Am J Infect Control 2010;38:585-95.

APIC Position Paper: The Use of Administrative (Coding/Billing) Data for Identification of Healthcare-Associated Infections (HAIs) in US Hospitals. October 12, 2010. Available at: http://www.apic.org/Content/NavigationMenu/GovernmentAdvocacy/PublicPolicyLibrary/ID_of_HAIs_US_Hospitals_1010.pdf

Brown V, et al., APIC IP Program Evaluation Tool. April 2010. Available at: http://www.apic.org/Content/NavigationMenu/Links/Publications/APICNews/IP_Program_Evaluatio.htm

Wright MO, Hebden JN, Allen-Bridson K, et al. Healthcare-associated Infections Studies Project: An American Journal of Infection Control and National Healthcare Safety Network Data Quality Collaboration. Am J Infect Control 2010;38:416-8.
