By Kelly M. Pyrek
Editor's note: In the July 2010 issue of ICT, we first explored the growing importance of the implementation science movement. Here, we provide an update on the role that infection preventionists can play in the dissemination and implementation science process.
Knowing that research drives practice, which then impacts patient outcomes, the infection prevention and healthcare epidemiology community is striving to more fully embrace implementation science (defined by Eccles and Mittman as "the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice"). Although federal agencies and professional societies have been churning out guidelines and standards for decades, practitioners have struggled with what should inform daily practice and how evidence should become accepted practice.
In a recent Safe Healthcare blog hosted by the CDC's Division of Healthcare Quality Promotion, Russell N. Olmsted, MPH, CIC, the 2011 president of the Association for Professionals in Infection Control and Epidemiology (APIC), muses, "Infection preventionists (IPs) are subject matter experts on the prevention of healthcare-associated infections (HAIs). IPs track the scientific literature related to HAI prevention, and then watch that evidence as it is distilled into recommendations by CDC's Healthcare Infection Control Practices Advisory Committee. But what is being done to ensure that these best practices are being implemented at the patient bedside?"
Olmsted says that the IP must take on the role of an "effector" in order to apply the recommendations to his/her healthcare organization in collaboration with healthcare workers who engage in direct patient care, adding that, "We are typically the 'linchpins' of applying research that appears in scientific, peer-reviewed journals to policies and practices implemented by our colleagues at the patient's bedside."
It's a tall order for infection preventionists to locate, absorb and synthesize the abundance of information in the medical literature while simultaneously performing their daily tasks. It can be equally challenging to achieve adoption of best practices. As Olmsted notes, "Many of us know that the speed of adopting new findings in the literature to improving the safety of care delivery can be exceedingly slow. For example, a landmark study published in The Lancet in 1991 demonstrated the superior efficacy of 2 percent chlorhexidine for skin preparation prior to insertion of central lines. And yet, 14 years later, only 70 percent of hospitals in a national survey were using this product."
These challenges are part of the impetus for APIC's efforts to help practitioners identify and rank research priorities. Olmsted points to how APIC's Research Task Force recently reviewed the role of the IP in translating scientific evidence to improve patient safety and effectiveness of care, and emphasizes that "the goal of implementation science is not only to raise awareness but to also use strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings."
In January 2010, APIC's board of directors decided to update and clarify the association's approach to research. The efforts of APIC's Research Task Force were chronicled in a paper by Patricia Stone, RN, PhD, FAAN, and colleagues in the American Journal of Infection Control (AJIC), which also reviews the history of APIC's role in research and reports on the recent vision and direction developed by a multidisciplinary task force regarding appropriate research roles and contributions for APIC and its members. Stone, et al. (2010) assert that dissemination and implementation science, a type of research aimed at understanding how to translate research evidence into practice, must increasingly become one of infection prevention's core areas of expertise, and that this is an area in which APIC members can apply their unique skills and competencies to ensure that patients receive the most up-to-date and evidence-based infection prevention practices possible.
APIC's research agenda achieved more clarity in 2000 when the APIC Research Foundation funded and conducted a Delphi process that identified 21 research priorities that could be used as rationale and supporting evidence for the need for research funded by other organizations. As Stone, et al. (2010) explain, "At that time the highest ranked research priorities were related to obtaining evidence on how best to improve compliance with best practices, use antimicrobials appropriately and decrease resistance, measure the financial impact of complications of HAI and value of interventions aimed at preventing HAIs, perform surveillance of infectious and noninfectious complications across the spectrum of care delivery, and prevent complications at specific sites (e.g., ventilator-associated pneumonia)."
Stone, et al. (2010) acknowledge that while APIC is not the sole arbiter of the infection prevention research agenda, the association and its membership "should continue to inform and guide the type of research that is conducted by developing an up-to-date research agenda that is regularly reviewed to ensure its ongoing value and fit with member needs and the external public policy, practice, scientific, and biologic environments." Although the APIC Research Foundation eventually evolved into the Scientific Research Council, APIC continues its research efforts, including overseeing several recent major research studies on MRSA and C. difficile prevalence, as well as other partnerships with faculty from Columbia University and Harvard University in which APIC staff and members had input into developing the research design, recruitment, and/or dissemination of results.
As APIC grows its research agenda, there is a role that infection preventionists can play in the process. While these practitioners will find themselves at very different points along the research spectrum, the important consideration is that they understand the goals and processes involved in implementation science.
Stone, et al. (2010) acknowledge that not all infection preventionists will be able to become researchers: "The primary role of most of APIC's members is in the clinical setting as IPs. It is likely their major contribution to research may be participating in research led by others and implementing research findings as well as identifying gaps in knowledge and setting research priorities."
"People play different roles in research," Stone says. "Not everyone is inclined to be a researcher, even though we do need more researchers. Clinicians might not want to develop their own research proposals, but they can participate in research in very appropriate and meaningful ways. One way is through participating in dissemination and implementation science, which is a different type of research -- it's not as controlled and the goal is to see if the evidence from studies works in everyday practice. Many more different studies are needed to eventually lead practice. There might already be some evidence that it works in a lab or a similarly controlled setting, but then you must determine how you can fit this evidence into everyday practice. That's the current movement toward dissemination and implementation science not only in infection prevention and control but across the healthcare spectrum."
Knowing that levels of engagement with research will differ from individual to individual, Stone encourages infection preventionists to at least become familiar with the basics of reading the medical literature, since delivering the best quality of care usually rests on what the research contains. "We all have a responsibility to understand the research, to understand scientific journal articles and know whether they are applicable or not to our practice," Stone says. "The hope is that infection preventionists and others will participate in research but I don't think every clinician needs to know how to design and conduct a research study. They should understand whether or not the basic design of the study is strong and know whether they want to implement the results or participate in the research, so they must have that basic knowledge. At the very least, I think all infection preventionists must try to implement the very best research into their own practices, and conduct their own analysis of how it's working in their own setting."
Stone says there are resources available from APIC and elsewhere to help infection preventionists understand evidence-based practices and how to access the clinical evidence. "It's advisable to refresh yourself on how to read and understand a study," Stone says. "There is some effort currently to try to eliminate the confusing words in the research vernacular and to make it more accessible to everyone, but it helps to also be part of a larger group -- whether it's through an APIC chapter or through a local organization, club or society -- that can help you understand the literature. I think a strategy such as developing or joining a journal club in your setting is very important. In a journal club, everyone reads the same paper and then they discuss it."
Stone adds that infection preventionists also must be able to differentiate research projects from quality improvement projects. "Quality improvement and research can look similar but there are differences," Stone explains. "Research will be conducted with the idea of developing generalized knowledge for others, whereas if it's a quality improvement initiative within your own facility, you are trying to understand what works in your own setting, but not trying to inform the whole practice."
Regardless of whether they conduct or participate in the research, infection preventionists must understand the basic tenets of dissemination and implementation science. As Stone et al. (2010) explain, "Dissemination is the targeted distribution of information and intervention materials to a specific audience. Implementation implies that the goal of the communication is, however, to do more than increase awareness; it is the use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings. Dissemination and implementation science has been defined as research that creates new knowledge about how best to design, implement, and evaluate quality improvement initiatives." Stone et al. (2010) explain further that the need for dissemination and implementation science "grew out of the reality that, even when new knowledge is discovered and adequate research is available, there are many barriers to translating research into practice. In the absence of effective implementation and evaluation, even the best research findings are only theoretical."
Stone et al. (2010) say that because infection preventionists must set and recommend policies and procedures in relation to prevention and control of infections based on the best evidence available, they must cultivate the ability to evaluate the methodologic rigor and quality of published studies, and add, "Other tangential skills include formulation of key clinical questions, searching the literature and applying findings to improve safety and quality of care. There is evidence that these skills along with certification in infection control and epidemiology correlate with more efficient and effective use of evidence to improve practice and prevent HAIs."
To engage more effectively in dissemination and implementation science, Stone et al. (2010) say, infection preventionists must understand how research is typically translated into practice. There are three translational levels:
- T1: Clinical efficacy research: Studies to translate basic biomedical science discoveries into knowledge needed for clinical efficacy; for example, a comparison of the efficacy of hand hygiene agents in the reduction of bacteria and viruses
- T2: Health services/comparative effectiveness research: Studies to examine how these efficacious interventions actually work in everyday practice with different subgroups of patients/clinicians; for example, a cost-effectiveness analysis of commercially available antimicrobial-coated central venous catheters
- T3: Dissemination and implementation research: Studies that address the how of high-quality healthcare delivery and assess how best to disseminate and implement best evidence into actual practice; for example, a pragmatic cluster randomized active control trial in which settings are randomized to receive various quality improvement interventions
"In the past it has been recognized that we need to move beyond testing for efficacy of interventions (clinical efficacy research) to understanding how these interventions are implemented and how effective they are in actual practice," write Stone, et al. (2010). "This type of translational science has often been called health services and/or comparative effectiveness research (T2) Recognizing the distinction between clinical efficacy and clinical effectiveness is a critical first step to reduce infections and establish evidence-based practice. In addition, however, we must also better understand how to close the gap between research evidence and clinical (and public health) practice. This is the purpose of the relatively new field of research called dissemination and implementation science, which is the evaluation of translation of evidence into practice, sometimes referred to as T3. Studies of this type test the effectiveness of various dissemination and implementation techniques. These studies are multidisciplinary and are often guided by theory and expertise in behavioral change, marketing, and/or organizational management. While it may not be feasible to randomize patients to different settings with different implementation strategies, rigorous but pragmatic cluster-randomized approaches have been described."
Whatever the strategy, infection preventionists wonder at what point the evidence is ready to be implemented into daily practice. This is critical, as an increasing number of experts are pointing to suboptimal infection prevention practices as well as the lack of adherence to current prevention recommendations in healthcare institutions. As Saint, et al. (2010) emphasize, "Consistent implementation of evidence-based practices in everyday clinical situations remains a challenge."
"It will never be the same tipping point for everyone," Stone says. "Sometimes it might be a controlled randomized trial that changes everything; other times, it might take a number of studies to demonstrate viability. Take the issue of nurse staffing ratios and their impact on patient outcomes; regarding pinpointing the number of nurses that makes the difference in the quality of outcomes -- it wasn't just one study but multiple studies that said we're never going to come up with an exact ratio, but we do understand that the level of nursing care matters and makes a difference for patient outcomes. There is enough evidence out there now that we understand that correlation."
The same might not be easily said for the effectiveness of single dissemination or implementation strategies. As Stone, et al. (2010) observe, "More recently a composite of simultaneous implementation of evidence-based interventions, also known as bundles, have resulted in significant reduction in incidence of HAIs. However, much work remains. There is a paucity of well-designed T3 studies specifically designed to inform how best to disseminate and implement HAI prevention practices. These types of studies should test various implementation and dissemination practices to see what is most effective."
Another ongoing challenge for the profession can be the immense variation in practice that persists, another sign that the uptake of research varies from institution to institution. "Part of the great variation in practice across the nation is due to resources available to implement best practices derived from the literature," Stone says. "Some of it is due to conflicting guidance because not all of the guidelines from the various agencies and organizations are consistent. Maybe one way to do things might not work in all settings, but I think we can begin to help organizations understand what's likely to work, given their settings."
To that end, Stone explains that the recent change of administration has brought an influx of research money through the American Recovery and Reinvestment Act and the Patient Protection and Affordable Care Act, which could help provide much-needed funding for facility-based researchers. Additionally, a focus on comparative effectiveness research -- essentially patient-centered outcomes research -- will help boost efforts to engage in research and translate this science into practice. "There is increased focus on understanding how we can have higher-quality, more efficient care and how we provide the right care for patients," Stone says. "There will also be increased scrutiny of how we synthesize and disseminate the evidence in an improved fashion."
Saint, et al. (2010) say that despite recent advances in implementation science, several important challenges remain, including determining how to sustain meaningful change. Another is using social science tools such as human factors engineering to better understand context in a healthcare setting that is unpredictable and nonlinear. Yet another challenge is determining the most appropriate level of use of implementation science to avoid overdiffusion of an innovation. Bearing these challenges in mind, Saint, et al. (2010) make the following recommendations to help infection preventionists prepare for a national research agenda likely to focus on implementation science:
- Continue to define the technical components (what should be the key items in the toolkits or part of the bundle) while focusing also on the adaptive ones (how to adapt or tailor the intervention, given the context).
- Collaborate with organizational behavioralists, other social scientists, and each other to develop approaches to address the dynamic role of context.
- Establish a research network to find out not only what works but also how it works and in what settings.
- Determine how to "institutionalize" change while avoiding the over-adoption of fads.
- Identify funding sources that are willing to make large investments in understanding implementation science using infection prevention as an appropriate clinical model.
As Stone, et al. (2010) emphasize, "Infection preventionists must be active consumers of research. Infection preventionists everywhere should be proficient in the skills needed to critically appraise science, keep up with the rapid generation of new knowledge, and apply wisdom to establish evidence-based practices to improve patient care delivery. With these skills, infection preventionists may act as an accelerant for the application and translation of evidence, reducing the time lag that currently exists between the generation and adoption of knowledge. As advanced clinicians, infection preventionists should be participating in the research process, setting the research agenda for themselves (not having others do this) and using the best level of evidence. We believe that APIC's authority and the authority of the infection preventionist comes through working together to study how best to apply what is known (i.e., dissemination and implementation networks that will rigorously test how best to achieve success). It is through networking that evidence-based solutions can be shared on a broad basis (between settings as well as across continents)."
This year, APIC is emphasizing implementation science through its newly launched APIC Science, Knowledge & Implementation Network (ASK-IN), and has conducted a survey of its membership to determine infection prevention research priorities. The survey findings will be discussed at an advisory meeting taking place during APIC's annual meeting in late June, and the advisory group will review the findings and discuss next steps at that time.
"The field is evolving and that gets back to the heart of dissemination and implementation science efforts," Stone says. "We are all evolving and we need to learn how to do things the best way possible and get them done efficiently. When institutions have success in those efforts, it's important for them to share what are they doing and how we can get everyone to do likewise. We want to get people to do the right thing, always informed by the evidence."
References
Eccles MP and Mittman BS. Welcome to implementation science. Implement Sci. 2006;1:1.
Olmsted RN. Healthcare Implementation Science + the Infection Preventionist = Safe Healthcare. Safe Healthcare blog, hosted by CDC's Division of Healthcare Quality Promotion. Feb. 10, 2011.
Pyrek KM. Healthcare Epidemiology: The Research Agenda for the Next Decade. Infection Control Today. July 2010.
Saint S, Howell JD and Krein SL. Implementation Science: How to Jump-Start Infection Prevention. Infect Control Hosp Epidemiol. 2010;31(S1).
Stone PW, Larson E, Saint S, Wright MO, Slavish S, Murphy C, Granato JE, Pettis AM, Kilpatrick C, Graham D, Warye K and Olmsted RN. Moving evidence from the literature to the bedside: Report from the APIC Research Task Force. Am J Infect Control. 2010;38(10):770-777.
Bolstering Infection Prevention Research Efforts
The need to bolster research efforts has never been greater. A paper in a recent issue of the Archives of Internal Medicine asserts that more than half of the recommendations in current practice guidelines for infectious disease specialists are based on opinions from experts rather than on evidence from clinical trials.
"During the past half century, a deluge of publications addressing nearly every aspect of patient care has both enhanced clinical decision making and encumbered it owing to the tremendous volume of new information," write Dong Heun Lee, MD, and Ole Vielemeyer, MD, of Drexel University College of Medicine in Philadelphia. "Clinical practice guidelines were developed to aid clinicians in improving patient outcomes and streamlining healthcare delivery by analyzing and summarizing data from all relevant publications. Lately, these guidelines have also been used as tools for educational purposes, performance measures and policy making."
Interest has been growing in critically appraising not only individual guidelines but also the entire sets of guidelines for specialists and subspecialists, the authors note. Lee and Vielemeyer (2011) analyzed the strength of recommendations and overall quality of evidence behind 41 guidelines released by the Infectious Diseases Society of America (IDSA) between January 1994 and May 2010.
Recommendations within the guidelines were classified in two ways. The strength of recommendation was classified in levels A through C, with A indicating good evidence to support the recommendation, B indicating moderate evidence and C indicating poor evidence; some guidelines also included levels D and E. The quality of evidence was classified in levels I through III, with level I signifying evidence from at least one randomized controlled trial, level II indicating evidence from at least one well-designed clinical trial that was not randomized, and level III indicating evidence based on opinions of respected authorities drawn from clinical experience, descriptive studies or reports of expert committees.
The 41 analyzed guidelines included 4,218 individual recommendations. Of these, 14 percent were classified as backed by level I evidence, 31 percent as level II and 55 percent as level III. Among class A recommendations, 23 percent were level I and 37 percent were level III. In addition, the researchers selected five recently updated guidelines and compared them to their previous versions. In all but one case, the new versions cited an increased number of articles, and in every case the number of recommendations increased. However, most of these additional recommendations were supported only by level II or III quality of evidence. Only two updated guidelines had a significant increase in the number of level-I recommendations.
There are several possible explanations for these findings, the authors note. In comparison to other specialties, relatively few large multicenter randomized controlled trials have been conducted in the field of infectious diseases. "Many infectious diseases occur infrequently, present in a heterogeneous manner or are difficult to diagnose with certainty," Lee and Vielemeyer (2011) write. "For others, a randomized controlled trial would be impractical or wasteful or might be deemed unethical." In addition, some of the recommendations address questions about diagnosis or prognosis, neither of which could be studied in a randomized controlled trial and thus could never receive the highest quality rating.
"Guidelines can only summarize the best available evidence, which often may be weak," Lee and Vielemeyer (2011) conclude. "Thus, even more than 50 years since the inception of evidence-based medicine, following guidelines cannot always be equated with practicing medicine that is founded on robust data. To improve patient outcomes and minimize harm, future research efforts should focus on areas where only low-level quality of evidence is available. Until more data from such research in the form of well-designed and controlled clinical trials emerge, physicians and policy makers should remain cautious when using current guidelines as the sole source guiding decisions in patient care."
In an accompanying editorial in Archives of Internal Medicine, "Guidelines No Substitute for Critical Thinking," John H. Powers, MD, of Science Applications International Corp., writes, "What are providers to make of recommendations in guidelines if most of these recommendations are based on opinion? First, these data reinforce that absolute certainty in science or medicine is an illusion. Rather, evaluating evidence is about assessing probability." Powers adds, "Perhaps the main point we should take from the studies on quality of evidence is to be wary of falling into the trap of 'cookbook medicine.' Although the evidence and recommendations in guidelines may change across time, providers will always have a need to know how to think about clinical problems, not just what to think. As with individual research studies, providers should critically evaluate guidelines and the evidence on which they are based and how relevant recommendations are locally at their institutions and in their patients. Especially for subspecialists, guidelines may provide a starting point for searching for information, but they are not the finish line. The fact that many recommendations are based on opinion should also serve as a call to future researchers to critically evaluate and study the questions that need better answers."
In a recent commentary appearing in JAMA, two healthcare epidemiologists address the need for a greater degree of implementation science, as the evidence base on which potential infection-prevention strategies must be constructed is severely limited because very few of the necessary clinical trials have been conducted. The generation of this kind of scientific evidence is constrained by the perceived difficulty of completing the necessary studies and limited federal funding available for assessing infection-prevention interventions, say Eli N. Perencevich, MD, MS, of the University of Iowa, Carver College of Medicine and Center for Comprehensive Access & Delivery Research and Evaluation (CADRE), and Ebbing Lautenbach, MD, MPH, MSCE, of the Department of Medicine and Epidemiology and the Center for Clinical Epidemiology and Biostatistics at the University of Pennsylvania School of Medicine.
Perencevich and Lautenbach (2011) explore several methods for comparative effectiveness research in infection prevention, including cluster randomized trials, quasi-experimental studies, and mathematical models. They explain that a "well-designed and adequately powered randomized controlled trial (RCT) provides the most rigorous evidence for or against the efficacy of a given intervention. In healthcare epidemiology, interventions to reduce device-related infections (e.g., antimicrobial-coated central venous catheters) are often amenable to an RCT investigative approach because the intervention and the observed benefit occur at the level of a single patient and the effect of the intervention for one patient is independent of the effect on a different patient." They acknowledge, however, that many interventions are not amenable to an RCT approach: "For example, MRSA screening programs test patients for MRSA carriage and isolate colonized patients to prevent transmission of MRSA. These screening programs indirectly benefit patients who are not isolated. To assess population-level interventions, alternatives to RCTs are needed."
A solid alternative, Perencevich and Lautenbach (2011) say, is the cluster randomized trial, which they say is "well suited to study the comparative effectiveness of population-level interventions. Cluster randomized trials may involve randomization at different levels including the full hospital or individual hospital units. These trials are complicated, costly, and time-consuming but are absolutely vital if population-level interventions are to be adequately evaluated." They explain further that quasi-experimental (QE) studies, a third option, aim to "evaluate interventions but do not use a randomized control group. In the simplest QE design, a population serves as its own control during a baseline period of observation. An intervention is then implemented, and a subsequent period of observation is completed. Changes in the outcome of interest are then compared before and after the time of the intervention."
As Perencevich and Lautenbach (2011) note, "The focused and coordinated use of well-designed quasi-experiments, cluster randomized trials, and mathematical models offer significant potential opportunities for improving the scientific understanding and targeting infection prevention efforts. The scientific community, including investigators, scientific societies, and funding agencies, must be willing to consider these complementary methods and facilitate the creation and support of the collaborative research networks in which to complete the necessary investigations."
References
Lee DH and Vielemeyer O. Arch Intern Med. 2011;171(1):18-22.
Powers JH. Arch Intern Med. 2011;171(1):15-17.
Perencevich EN and Lautenbach E. Infection Prevention and Comparative Effectiveness Research. JAMA. 2011;305(14):1482-1483.