The Association for Professionals in Infection Control and Epidemiology (APIC) says it shares the concern expressed in the recent Journal of the American Medical Association (JAMA) study, "Quality of Traditional Surveillance for Public Reporting of Nosocomial Bloodstream Infection Rates," by Michael Lin, MD, MPH, and colleagues.
The study, published in the Nov. 10, 2010 issue of JAMA, found significant variation among medical centers in surveillance of bloodstream infections and concludes that such variations "may complicate inter-institutional comparisons of publicly reported central line-associated BSI (bloodstream infection) rates." APIC says that while this was a well-designed investigation, some aspects prompt follow-up questions. For example, is the computerized algorithm used to detect central line-associated bloodstream infections able to discern whether the infection could be attributed to another source, e.g., a urinary tract infection? This is something that infection preventionists are trained to ferret out and may account for some of the variances observed in the study. Another challenge is distinguishing contaminants from true infections, which may also account for some of the variation.
The findings of the recent JAMA study are consistent with a separate study by Matthew Niedner, MD, published in APIC's American Journal of Infection Control (AJIC).(1) The AJIC author writes, "With increased mandatory public reporting of catheter-associated bloodstream infections and the insurance ramifications of such never events, the inter-institutional variability introduced by surveillance techniques warrants further scrutiny both to improve public health through accurate measurement, but also to reduce the possibility of gaming the system or being punitive to centers exercising diligence."
APIC says that the inconsistencies identified in these studies reflect the challenges inherent in healthcare-associated infection (HAI) surveillance. Even when supported by the Centers for Disease Control and Prevention's (CDC's) standard infection criteria and definitions, as available via the National Healthcare Safety Network (NHSN), complex medical cases often require in-depth analysis and case-by-case review to determine whether the infection meets the CDC definition criteria. Inconsistencies in the data may reflect differences in case-finding methods and HAI identification. Facilities performing more thorough surveillance would likely report a higher HAI rate due to a more precise measurement system.
However, inconsistencies may also indicate incomplete or inaccurate surveillance efforts, or different ways of applying the same surveillance criteria. Therefore, the detection of inconsistencies in HAI data reinforces the increasingly urgent need for data validation. APIC, a champion of public HAI reporting, strongly supports validation of data, which must include both internal and external review of infection data. This will help ensure that infection rates accurately support both comparisons among facilities and informed decision-making by consumers. Fortunately, funding provided by the American Recovery and Reinvestment Act of 2009 is currently supporting data validation studies in several states; some of these include direct engagement of APIC. These projects will help direct future efforts to assure accuracy and comparability in state and national HAI statistics.
APIC says it does not support sole reliance on other sources of data, such as administrative or claims data, as these are even less precise than surveillance data collected by trained infection preventionists.(2) APIC adds that under no circumstances should the recent JAMA study be used to support the use of claims or administrative data over surveillance data.
Meanwhile, hospital administrators can best ensure accurate HAI rates by building a robust infection prevention infrastructure. The optimum infrastructure will likely combine electronic surveillance technology with appropriate staffing of infection preventionists and other personnel well qualified to manage its use in complex and often challenging clinical situations. Engagement by leadership is essential to the success of all infection prevention programs, both to ensure the most accurate information for the public and to provide optimal patient care. APIC has developed a Program Evaluation Tool to assist its members and their affiliates in identifying the infrastructure needed for a properly resourced infection prevention program.(3) In addition, APIC, through its scientific journal AJIC, publishes case studies to improve the application of NHSN criteria for its members.(4)
APIC says it is imperative that all infection prevention stakeholders work together to determine the true incidence of HAIs in order to reward good performance and allow consumers to make informed choices about their healthcare.
1. Niedner MF. The harder you look, the more you find: Catheter-associated bloodstream infection surveillance variability. Am J Infect Control 2010;38:585-95.
2. APIC Position Paper: The Use of Administrative (Coding/Billing) Data for Identification of Healthcare-Associated Infections (HAIs) in US Hospitals. October 12, 2010. Available at: http://www.apic.org/Content/NavigationMenu/GovernmentAdvocacy/PublicPolicyLibrary/ID_of_HAIs_US_Hospitals_1010.pdf
3. Brown V, et al. APIC IP Program Evaluation Tool. April 2010. Available at: http://www.apic.org/Content/NavigationMenu/Links/Publications/APICNews/IP_Program_Evaluatio.htm
4. Wright MO, Hebden JN, Allen-Bridson K, et al. Healthcare-associated Infections Studies Project: An American Journal of Infection Control and National Healthcare Safety Network Data Quality Collaboration. Am J Infect Control 2010;38:416-8.