Quality Improvement in Environmental Services: Sustaining Gains Requires Communication, Collaboration

By Kelly M. Pyrek

Quality improvement (QI) in healthcare is a noble initiative but an increasing number of clinicians and administrators are wondering if such efforts are paying off in terms of better outcomes and whether or not they can be sustained long-term. Some studies indicate an ongoing disconnect between QI program goals and actual results. As Schouten, et al. (2008) remark, "The multi-institutional, quality improvement collaborative is widely accepted as a strategy in healthcare. Its widespread acceptance and use are not, however, based on a systematic assessment of effectiveness."

As Brennan, et al. (2012) state, "Underpinned by a philosophy that emphasizes widespread engagement in improving the systems used to deliver care, QI teams use measurement and problem solving to identify sources of variation in care processes and test potential improvements. QI methods have been used… as a tool for implementing specific models of care, and as the model for practice change in QI collaboratives. Despite this widespread emphasis on QI, research is yet to provide clear guidance for policy and practice on how to implement and optimize the methods in healthcare settings. Evidence of important effects and the factors that modify effects in different contexts remains limited." Brennan, et al. (2012) say that the extent to which specific contextual factors influence the use of QI methods and outcomes is not well understood, and add, "Measuring organizational context in QI evaluations is key to understanding the conditions for success, and for identifying factors that could be targeted by QI implementation strategies to enhance uptake and effectiveness."

Øvretveit (2011) explains that "context" can be defined as all factors that are not part of a quality improvement intervention itself, and adds, "More research could indicate which aspects are conditions for improvement, which influence improvement success. However, little is known about which conditions are most important, whether these are different for different quality interventions or whether some become less or more important at different times in carrying out an improvement. Knowing more about these conditions could help speed up and spread improvements and develop the science."

Øvretveit (2011) points to the variability in QI program success, using the Keystone initiative in Michigan by Peter Pronovost and colleagues as an example: "This program implemented a similar set of changes across 108 different ICUs in the state. What caught the headlines was the dramatic overall reductions in infections. But what was noticed by implementers elsewhere was the variation of results between the projects. Intriguingly, one of the studies reported that, 'the intervention was modestly more effective in small hospitals.' Was it due to differences in how thoroughly different ICUs implemented the changes? The study design did not provide for data collection to give details of the variations or allow explanation of the variations. For other QIs there also is evidence of variations in their effectiveness in different settings. These variations may be due to differences in implementation: many improvements are not discrete single 'before/after' changes but 'facilitated evolution.' Differences in implementation may be due to differences in context: implementers often cite differences in resources, in their access to expertise, or other 'context factors' as explanations for fewer results than in other settings."

Øvretveit (2011) makes a compelling case for the need to consider context: "To decide whether to carry out a QI, policy makers or managers and clinicians say they need to know if the change contemplated is effective in improving outcomes. But to make use of this 'efficacy knowledge,' decision makers say they also need to know if it is likely to be effective in their setting, and how to implement it. This knowledge can come from a number of controlled trials of the same intervention in different settings, and this can help decision makers to answer the 'will it work here?' question: it can help discover how 'context-sensitive' or robust an improvement change is. However, using traditional controlled trial efficacy research designs to explore effectiveness in different settings is an expensive and time-consuming strategy. It gives limited help with the 'how do we implement it?' question, which controlled trials are not designed to answer. Neither does it answer the question about why an intervention varies by setting, because many features of the setting-context are 'controlled-out' in order to answer the efficacy question. Implementers need effectiveness 'strength of evidence' assessments about external validity, and need to know how to implement the intervention in their setting. An answer to the 'why does it work or not work?' question would help decision makers to decide better whether and how to try the change in other settings. It would show whether exact replication is critical, and, if not, help them to design a change which follows the principles in a way which is adapted to their setting."

There is still question about whether outcomes can be attributed to quality-related interventions and the extent to which individual intervention components contribute to results. As Brennan, et al. (2012) note, "There is limited consensus in the literature on what defines a CQI intervention and its components, and large variability in the content of CQI interventions across studies. In part, this is attributed to the evolution and local adaptation of QI interventions. However, it also reflects differences in how CQI interventions are conceptualized."

Øvretveit (2011) says that QIs have traditionally been conceptualized as interventions, defined as "discrete changes separated from their surroundings," in order to assess whether they cause other changes in outcome variables such as patient outcomes: "If the change is an improvement, the assumption is they can be repeated elsewhere to cause the same outcome changes. If an intervention is separated conceptually from its surroundings, then one research agenda is to explore how the intervention changes context, as well as vice versa. Another research agenda is not to conceptualize such a sharp separation, and to view improvement less as a specific change but as an interdependent set of actions which result in many types of changes, which in turn may result in better patient outcomes. Including context in understanding implementation and in improvement theory can advance improvement science and practice. It allows exploration of whether and how aligned changes at different levels may result, through complex influences, in better outcomes, and how these can be sustained. It moves from cause–effect understanding to conditional attribution, which allows qualified generalizations by discovering which conditions were necessary for the improvement to be carried out and its effects. This in turn allows decision makers to assess better likely results locally and how to adapt the change."

Glasgow, et al. (2013) say that a key area for future research is to understand the challenges faced as QI teams transition from improving care to sustaining quality and to ascertain what organizational characteristics can best overcome those challenges. Experts wrestle with the problem of why individual reports of successful QI projects as reported in the literature may not translate into widespread improvements in quality. Glasgow (2013) points to two theories that may shed light on this conundrum: "First, human factors theory advocates that when designing a device or a process careful attention must be paid to how innate limitations of human physical and mental capabilities will impact how people interact with the device or process. This concept means that even the greatest of technological solutions can be unsuccessful if people cannot successfully interact with the system. Building from this idea, a potential hypothesis for why there is little overall improvement in quality is that many QI projects propose and implement solutions that may impose too much additional cognitive burden on those tasked with providing high-quality care. In this situation, there may be initial success as excitement and energy related to the project are sufficient to overcome the additional cognitive burden. However, as time passes and the improvements become less of a focus there is a reduction in task specific energy. This eventually leads to a point where the additional cognitive burden becomes too overwhelming and performance begins to decline. This sort of process would suggest a QI project that initially appears successful, but over time cannot sustain performance which would result in a slow decline in quality likely back to baseline performance as providers abandon the new solution for their original process."

A second theory that Glasgow (2013) finds instructive is change management theory: "This theory acknowledges that going through and accepting change is a difficult and emotional process that people often resist. This theory suggests that even if a QI effort is technically correct from a human factors perspective, resistance to change from healthcare providers could still result in an unsuccessful QI project. This sort of change resistance could help identify why a successful QI project at one hospital is not successful when translated to other settings. Without the correct institutional QI culture or change-management process, QI projects will not sustain their improvements and likely will have difficulty achieving even initial improvements."

Quality Improvement in Environmental Services
Mark E. Rupp, MD, of the Department of Internal Medicine at the University of Nebraska Medical Center in Omaha, Neb., knows first-hand the challenges of sustaining performance improvement, something he calls an "often overlooked aspect of quality improvement and implementation science." As Rupp, et al. (2014) note, "The objective of quality improvement (QI) projects is to quickly produce changes that result in positive outcomes. Although many reports of QI projects relate initial success, few detail the sustainability of a program, and sustainability is a neglected aspect of implementation science."

Over the course of a four-year study in an academic medical center, Rupp and his colleagues observed that monthly feedback of performance data in face-to-face meetings with frontline personnel was crucial in maintaining environmental-cleaning effectiveness in adult critical-care units. The researchers studied several programmatic interventions directed toward sustaining improvement through a critical-care environmental cleaning QI project consisting of the introduction of a 43-point room-cleaning checklist, development of a housekeeper educational program, production of a training DVD, use of an objective measurement of cleaning, and feedback to housekeepers conveyed in monthly face-to-face meetings. Performance maintenance was analyzed during five phases in which the frequency and method of data feedback were altered. The thoroughness of cleaning was determined through use of a fluorescent-tagged marking solution on 15 high-touch surfaces per room, including the patient room door handle, thermometer, monitor, tray table, bed rail, bed rail release button, nurse call monitor, sink faucet handle, computer mouse, light switch, hand gel dispenser and cabinet handle. The percentage of cleaned surfaces was analyzed.

The researchers found that the pre-intervention cleaning compliance, based on assessment of 90 intensive care unit (ICU) rooms and 1,117 surface measurements, was 47 percent. After initiation of the QI campaign, compliance improved to approximately 75 percent within three months. During the intervention period, 45 ICU rooms were sampled each month; in all maintenance phases, the cleanliness scores remained significantly better than the baseline scores.
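
The arithmetic behind such compliance figures is simple: each marked high-touch surface is scored as cleaned (fluorescent tag removed) or not, and the percentage of cleaned surfaces is reported overall and by room or unit. The sketch below is a minimal illustration of that calculation in Python; the room identifiers, surfaces and observations are hypothetical examples, not data or code from the Rupp study.

```python
# Minimal sketch (hypothetical data, not the study's code): scoring thoroughness
# of cleaning from fluorescent-marker checks. Each record notes whether the tag
# on a given surface was removed, i.e., whether the surface was cleaned.
from collections import defaultdict

observations = [
    ("ICU-101", "bed rail", True),
    ("ICU-101", "door handle", False),
    ("ICU-102", "tray table", True),
    ("ICU-102", "light switch", True),
]

def compliance_rate(records):
    """Percentage of evaluated surfaces that were cleaned."""
    if not records:
        return 0.0
    cleaned = sum(1 for _, _, removed in records if removed)
    return 100.0 * cleaned / len(records)

def compliance_by_room(records):
    """Per-room compliance, the kind of detail used for unit-level feedback."""
    by_room = defaultdict(list)
    for room, _, removed in records:
        by_room[room].append(removed)
    return {room: 100.0 * sum(flags) / len(flags) for room, flags in by_room.items()}

print(f"Overall compliance: {compliance_rate(observations):.0f}%")  # 75%
print(compliance_by_room(observations))  # {'ICU-101': 50.0, 'ICU-102': 100.0}
```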

In maintenance phase 1 (M1), the number of rooms sampled was scaled down to 10 rooms per month, and reports were filed to Environmental Services (EVS) administration on a quarterly basis. Other intervention measures, such as use of the checklist and training new personnel with the use of the educational DVD, were continued. Rupp, et al. (2014) report that compliance deteriorated, and within nine months compliance fell to near baseline. In maintenance phase 2 (M2), reports were filed monthly with EVS administration, and the number of rooms sampled per month was increased to 15; compliance improved to approximately 70 percent to 75 percent. In maintenance phase 3 (M3), monthly feedback was delivered in face-to-face meetings with EVS housekeepers by infection prevention personnel, and compliance increased to greater than 80 percent. An additional attempt to limit time commitment was made in maintenance phase 4 (M4), in which immediate feedback was given at the time of room testing to the EVS supervisor and monthly summaries were supplied to EVS administration; compliance fell to approximately 70 percent. In maintenance phase 5 (M5), monthly face-to-face presentation of compliance data to housekeepers was restored, and compliance again increased to greater than 80 percent. Monthly meetings were continued with EVS housekeepers to provide regular feedback, and the thoroughness of cleaning was maintained for 14 months of observation in the final maintenance period.

Rupp, et al. (2014) explain that "Consistent implementation of evidence-based infection prevention practices is sporadic, and oftentimes performance improvement gains are not sustained. Practice changes requiring modifications in human behavior seem to be particularly problematic, with hand hygiene being the classic example. Numerous programs document improvement in compliance that unfortunately slips as soon as attention is turned elsewhere. In our environmental-cleaning QI program, we were rewarded with prompt improvement in response to a multimodal intervention that featured education and training, an immediate reminder (checklist), an objective and quantitative measure of performance, and rapid-cycle feedback of data to frontline personnel. However, when we attempted to transition the program into maintenance mode, we encountered unexpected challenges."

"A variety of issues impact and impede sustainability," Rupp says. "First, oftentimes, the maintenance portion of the quality improvement project is not carefully planned or appropriately budgeted. Thus, when it comes time to sustain and maintain the gain, no one has assigned responsibility and resources are lacking. Second, although the maintenance phase is critically important, for most persons, it is not as interesting and therefore it is more difficult to get healthcare personnel to commit their time, attention, and energy to the task."

There are several key reasons for sporadic implementation of evidence-based infection prevention practices, according to Rupp: "Sometimes the evidence base is not as iron-clad as guideline writers represent. Infection prevention departments and other areas charged with implementing infection prevention and control practices are oftentimes under-resourced and trying to implement and maintain practices is difficult. Additionally, changing human behavior and implementing improvements is hard work!"

Rupp, et al. (2014) say that a combination of immediate feedback of individual room-cleaning data to EVS personnel and monthly reporting of composite information in face-to-face meetings between the infection preventionist and unit housekeepers was the most effective program, adding, "We speculate that the most important aspect of the maintenance program was the face-to-face interaction between the infection preventionist and the EVS personnel. We believe that our willingness to expend time resources reinforced the importance of the program and furthered collaborative team building."

Rupp says that communication (and the resulting collaboration) is essential to the sustainability of QI programs. "Achieving and maintaining improved environmental cleaning /disinfection required teamwork between infection prevention, environmental services and intensive care unit staff. We required support from EVS administration and the leadership of the hospital and critical care units. The involvement of the IP was absolutely crucial in the success of our project. What we found in our institution is that the direct face-to-face interaction between the IP and the front line EVS workers was 'the secret sauce.' Each time we tried to streamline the process and limit the time commitment of the IP, we saw performance deteriorate. The IP was crucial in serving as the conduit for surveillance information and being the cheerleader in keeping EVS workers focused on the project and in reinforcing how important EVS is in patient safety." 

Rupp emphasizes the importance of education and adds that the information shared with environmental services personnel must be actionable. "The data need to be focused on a small group of persons (in our example, the EVS workers in a specific unit) so they can take ownership and action," he says. "In general, I think whenever possible, the message should be positive and reinforce the goal. However, on occasion, when you note persistent poor performance and the worker or the group has the resources to accomplish the job that has been assigned, then the data and feedback can be used in a manner needed to make personnel changes or penalize negative outliers."

Not all outliers are negative; in fact, another study by Rupp and colleagues identified a subgroup of hospital housekeepers among a larger group of personnel who were significantly more effective and efficient in their terminal cleaning tasks than their coworkers. Rupp, et al. (2014) say these optimum outliers may be used in performance improvement to optimize environmental cleaning.

The effectiveness and efficiency of 17 housekeepers in terminal cleaning 292 hospital rooms were evaluated through adenosine triphosphate (ATP) detection. The study was conducted in a seven-bed burn unit, a 32-bed telemetry unit, and a 40-bed medical-surgical unit from April 2011 to August 2011 at a 689-bed academic medical center. Following routine terminal cleaning by EVS personnel, a convenience sample of rooms was assessed during regular work hours by measuring ATP levels on 18 designated surfaces; a composite cleanliness score was calculated on the basis of the percentage of surfaces that were below a cutoff point of 250 relative light units. The amount of housekeeper time spent cleaning a room was documented through use of an automated system that required personnel to document by telephone when they arrived at the room and when room cleaning was complete.
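
To illustrate how a composite cleanliness score of this kind can be computed, the sketch below applies the 250-relative-light-unit cutoff described above to a set of invented ATP readings; it is an assumed example for clarity, not the instrument's or the study's actual software.

```python
# Minimal sketch (hypothetical readings): composite cleanliness score based on
# the share of surfaces whose ATP reading falls below a 250-RLU cutoff.
RLU_CUTOFF = 250  # relative light units; readings below this count as "clean"

def composite_score(rlu_readings):
    """Percentage of measured surfaces in a room that read below the cutoff."""
    if not rlu_readings:
        return 0.0
    clean = sum(1 for rlu in rlu_readings if rlu < RLU_CUTOFF)
    return 100.0 * clean / len(rlu_readings)

# Example: 18 designated surfaces measured after terminal cleaning of one room.
room_readings = [120, 310, 95, 200, 450, 80, 60, 240, 275,
                 150, 90, 500, 130, 110, 220, 70, 260, 180]
print(f"Composite cleanliness score: {composite_score(room_readings):.0f}%")  # 72%
```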

The researchers calculated the cleaning effectiveness rate as measured by ATP detection for each housekeeper. Pairwise comparisons were performed using odds ratios to compare the rate of effectiveness of each housekeeper to all other housekeepers. Analysis of variance was used to compare the efficiency of cleaning (the average time to clean a room) between housekeepers, and pairwise comparisons were performed. The association between effectiveness and efficiency was analyzed by plotting the median time to clean hospital rooms versus the median percentage of surfaces graded as clean per housekeeper.
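
In outline, that last step reduces to computing a median effectiveness (percentage of surfaces graded clean) and a median time-to-clean for each housekeeper, then looking for an association between the two. A small sketch of that summary, using invented data and only the Python standard library, might look like the following; the study's odds-ratio and analysis-of-variance comparisons are not reproduced here.

```python
# Minimal sketch (hypothetical data): per-housekeeper effectiveness vs. efficiency.
import statistics

# housekeeper -> (per-room cleanliness scores in %, per-room cleaning times in minutes)
records = {
    "HK-01": ([75, 80, 70, 85], [30, 28, 35, 31]),
    "HK-02": ([50, 55, 48, 60], [45, 40, 47, 42]),
    "HK-03": ([78, 72, 81, 69], [25, 27, 26, 29]),
}

medians = {
    hk: (statistics.median(scores), statistics.median(minutes))
    for hk, (scores, minutes) in records.items()
}

for hk, (effectiveness, minutes) in medians.items():
    print(f"{hk}: median effectiveness {effectiveness:.1f}%, median time {minutes:.1f} min")

# Pearson correlation between median effectiveness and median time-to-clean
# (statistics.correlation requires Python 3.10+); the value only summarizes
# the invented data above.
xs = [eff for eff, _ in medians.values()]
ys = [mins for _, mins in medians.values()]
print(f"Correlation: {statistics.correlation(xs, ys):.2f}")
```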

Rupp, et al. (2014) found that housekeeper cleaning effectiveness ranged from 46 percent to 79 percent and the average time to clean a room for the 17 housekeepers ranged from 24 minutes to 47 minutes. For each housekeeper, the median effectiveness of cleaning versus the median efficiency of cleaning was plotted; there was no correlation between the median effectiveness and median efficiency. As the researchers note, "We also documented that an optimum outlier model of EVS performance improvement is a viable option. Optimum outlier models have been used previously in infection prevention to improve compliance with hand hygiene and prevent methicillin-resistant Staphylococcus aureus transmission. Boyce, et al. noted that, despite attempts to standardize training and cleaning techniques, there is great variation between housekeepers with regard to cleaning practices. In our study, we demonstrate that there is a subset of housekeepers who regularly clean hospital rooms more effectively and more efficiently than their coworkers. The next step is to learn from these optimum outliers and translate the knowledge into improved practice for all housekeepers."

Advice for Sustaining Quality Improvement Initiatives
Kliger, et al. (2014) acknowledge the difficulty in sustaining quality improvement initiatives: "There is no 'magic bullet' to ensure that the benefits achieved by quality improvement initiatives endure. Quality improvement work can be a lot like dieting, in that lasting success requires behavior modification. As with weight loss, achieving and maintaining improvements can seem overwhelming at the start, as implementation often proves difficult and the initiative frequently may seem to be on the brink of failure. And, like weight loss, sustaining quality improvement requires adherence to disciplined routines, ongoing measurement, and constant vigilance, even after the goal has been achieved. Senior leaders often do not appreciate the need for a constant effort. Data must continually be reviewed and processes regularly reevaluated to identify what does and does not work. This type of constant vigilance can be very difficult to operationalize. These leaders typically do not allocate staff time for quality improvement efforts; they want improvement to occur, but do not want to release people from their regular duties to do the work required to generate such improvements. In addition, the fundamental systems and infrastructure that underlie improvement work are not in place in many organizations, including training in process improvement methodologies and the capacity to run PDSA cycles or to track and analyze data on an ongoing basis."

The Health Resources and Services Administration (HRSA) says that variation in how work is performed is a frequently encountered contributor to low performance in QI programs. It is advisable, the HRSA says, to focus on assuring that improvements "stick" and to evaluate opportunities to spread improvement to other parts of the healthcare organization. As the HRSA notes, "It is important that improvement leaders take the time to reflect and plan the next steps forward; the most critical question must be: did we accomplish our aim? If not, it is prudent to reconsider how to move forward."

According to the HRSA, three scenarios are commonly seen in the field:
1. Improvement efforts were unsuccessful due to lack of resources or focus. This is commonly the situation when organizational improvement projects have been derailed by competing priorities or significant changes in leadership. If the situations that impacted your progress have resolved, it is reasonable to re-calibrate and start the QIP again.
2. Unanticipated challenges arose during your work that will prevent you from achieving your aim. Resource cuts, changing organizational priorities or unanticipated changes in demand for services are examples of situations that might not be readily remediated. In these circumstances, it is often best to regroup and focus once again on what is most important.
3. The aim was not achieved within the timeframe, but progress is being made. In these situations, it is often best to recalibrate the timeframe and continue the work.

To sustain the gains, a QI program must stabilize the systems that result in excellent performance so that they are resistant to derailment. In order to create a sustainability plan, the HRSA says it is useful to be very clear about the processes and systems you want to sustain. Once the QI team is clear about what needs to be sustained, the work can begin. Typically the work will involve three categories: Leadership & Finance; Operations (policies and procedures, information technology); and Staff. Let's look at these categories more closely.

Leadership & Finance
According to the HRSA, "Leadership must understand what has changed and the benefits of those changes. Most often, the recognition will come at the end of the QI program when progress is measured against the aim. But at times, leaders may view quality improvement as just a project not an ongoing concern. So it is important to have the conversation and assure that leaders are committed to sustaining (perhaps also spreading) these improvements. In particular, any costs incurred due to the changes must be covered. Examples of changes often linked to improvements with financial impact include staffing changes, hours of operation, facility utilization and supplies. Many organizations have found that successful quality improvement opens other doors for revenue – the so-called business case for quality improvement. Once leaders are convinced that the improvements make sense financially and will further the organizational mission, it is important that they communicate that the improved systems are here to stay. One popular way leaders have communicated this to staff is by explaining that this is the new way of doing business. Accompanied by messages about how the improvements will benefit patients, staff and the organization as a whole, this commitment to the improvements from leadership is an essential step in sustaining the gains."

Operations
As the HRSA notes, "Assuring that changes are completely incorporated into how the daily work is accomplished is critical to sustaining change. Even though the changes were tested during the redesign phase, it is important that managers observe the impact of these changes on a daily basis. It is impossible to think about every possibility that might come up, so being vigilant is helpful to handle unexpected challenges early on. Operations work will typically look at the impact of the changes on staff work flow, gaps in training created by the change, the need to update operating procedures as well as the impact to staff morale and the effect the changes have on the patient experience."

Staff Workflows
According to the HRSA, "The redesigned system should function smoothly and not be a strain for staff to accomplish. Systems with redundancies or inefficiencies will not survive as staff will naturally develop 'work-arounds.' Teams that have used TPS/Lean thinking in their redesign are less likely to have challenges. It is better to tweak systems early if problems do arise. In particular, managers should look at the hand-offs – a word used to describe a transition from one person or role to another. Information and responsibility must flow smoothly for optimal patient benefit." Also key to QI program sustainability is training and education. As the HRSA notes, "Many great redesigns have been sabotaged by lack of attention to training. No one enjoys being put in a position where they are asked to do something they are not comfortable doing. Adequate training and or mentoring should be provided to staff who may have adopted new roles or new tasks. Staff that feel competent and comfortable with their role within the care team will contribute to the success of the redesigned systems."

Spreading improvement is the next step. In quality improvement jargon, "spread" means taking improvements and applying them to different care systems. As the HRSA explains, "Spread can involve sharing improvements with other teams within the same site, with teams in other sites and can also involve translation of the process of the approach to improvement to other topics or conditions."

The HRSA adds, "Teams that have successfully redesigned a system that has resulted in improvements are typically excited and ready to share. If change were easy, the improvement team could just share what was different, spread teams would adopt it immediately and all would be well. But we know that this strategy is flawed although many of us have experienced it. The rationale for a different approach comes from an understanding of human nature and the response to change. The process of spread is really social. One group of individuals is trying to convince another to change the way they do things. One way to think about this process is that teams who will accept change must be ready, willing and able to change. Readiness to accept a change is necessary. Spread teams must agree on some level that the redesigned system will be an improvement. Remember that they are typically not involved day-to-day in the improvement process and may be wary of any changes, fearing the worst. It is helpful for members of the improvement team to explain the rationale for the changes, the process of testing and the success that has been gained. Describing a few failures is helpful to humanize the process and is reassuring to those who are trying to assess the process that led to the redesigned systems. Spread teams will be more ready to change if they see a pathway that is reasonable. Data that was used to monitor the journey as well as PDSA logs are helpful adjuncts to the discussion.

Once a spread team hears the rationale for the change, they need to assimilate that information within their own context. It cannot be assumed that changes that work for one team will work for another without modification. Spread teams often ask if this is really starting over with an improvement process and it is not. But teams who will be changing need to embrace the change. Sometimes that means some tweaks to make it fit the particular circumstances. Sometimes it means that the spread team needs to fingerprint those changes as a way of making them their own. Once a spread team embraces the change as valuable and worthwhile, they need the time, authority and resources to make the change. Leadership is instrumental in ensuring that the spread team has what it needs to spread redesigned systems."

The HRSA recommends that a plan be developed to help spread system changes. Components of this plan include:
- List changes to be spread
- Evaluate the new system against the current system; process mapping is helpful for identifying key differences. These will be the areas that the spread team will focus on to make the changes.
- Discuss potential impacts or challenges that adopting the redesigned system will have and develop a plan to manage those impacts
- Create an action plan to make the changes, including how you will capture information for monitoring. Identify those areas you can just change and those areas where it might be more prudent to test changes using PDSA methodology. Consider using PDSAs when there are associated impacts due to differences between the improvement team site and the spread team site.
- Once the spread team is satisfied that the redesign has been adopted with or without modifications, follow the sustain steps. This will assure that the redesigned system will "stick" at the spread site.

How quickly spread can happen depends a lot on the change, anticipated impacts of the change on the spread team and the culture of the healthcare organization.

Kliger, et al. (2014) address the factors that can contribute to the sustainability of healthcare innovations: "Clearly, ongoing processes to monitor performance must be developed and implemented. In addition, organizations must give frontline personnel dedicated time to create, monitor, and improve care processes. These individuals know what needs to be changed, as they work on the front lines every day and understand where the fault lines are. Consequently, they-not senior leaders-are best positioned to identify solutions. Senior leaders, however, must provide clear, direct communication and support to those on the front lines. They cannot simply pass down a quality improvement directive (e.g., 'do this') or list quality improvement goals in a memo. Too often, there is a disconnect between what senior leaders want and how they convey that vision to frontline staff. To overcome this problem, organizations need vertical quality improvement teams that include both unit-based workers and senior leaders. In addition, communication about quality improvement efforts should occur regularly, even if for only a few minutes at a monthly meeting. These meetings provide an opportunity for senior leaders to hear about progress and for frontline workers to discuss their needs and expectations related to senior-level support."

In a survey of QI program coordinators across 20 healthcare organizations regarding key sustainability strategies, Parand, et al. (2012) identified three overarching factors: using program improvement methodology and measuring its outcomes; organizational strategies to ensure sustainability; and alignment of goals with external requirements. Within these were eight themes identified by the coordinators as helping to sustain the efforts of the program and its successes. According to the program coordinators to whom the researchers spoke, certain program features were designed to engender sustainability.

As Parand, et al. (2012) explain, "The coordinators in our study particularly equated small steps of change methodology or PDSAs with sustainable change. They believed that embedding and sustaining improvement in one area before moving on helped overall sustainability of improvement on Safer Patients Initiative (SPI) measures. Often when describing small steps of change, interviewees mentioned the benefits of measurement of these changes. Measurement was mostly used as an indicator of sustainability in a given area and the demonstration of maintained measured outcomes, such as sustained compliance with SPI prescribed processes, was highlighted as something that helped to achieve and maintain standards through increasing staff interest. In addition to using the improvement measures, the coordinators reported strategies that enabled them to embed and continue with program features and achievements. They reported that incorporating improvement aims into organizational strategies for safety and quality helped sustain the program's success. Integration of aims and elements into the governance structure and strategies and making the program aims part of performance targets along with accountability and reporting structures were perceived to aid continuation of program target achievements."

A primary strategy for integrating improvement techniques into systems was their inclusion within formalized training sessions and induction training, say Parand, et al. (2012), who add, "This was to ensure that both existing staff and newcomers were provided with the relevant improvement tools. Alongside re-working program elements to fit within current processes, changing program-associated terminology appeared to be a strategy that was used in order to combine initiatives, help staff understand it better and focus on patient safety rather than a project."

Building the capacity and capability of staff to continue improvements was agreed by many of the program coordinators to be a powerful tool for sustaining existing program-related gains and making new gains, according to Parand, et al. (2012). They note, "This comprised ensuring and transferring improvement knowhow and engaging staff across the hospital. In addition to formalized training systems, the interviewees spoke of the benefits of further spreading knowledge on relevant aspects and methods of the program as well as creating internal expertise and forming expert groups specifically for this purpose. Allowing staff participation in patient safety and clinical governance committee meetings was also suggested to be helpful. A highly recommended strategy to continue hospital-wide improvement was to achieve multi-disciplinary participation across the board by including a variety of staff from different disciplines and across organizational levels. In particular, doctor and management involvement was suggested to be a facilitating strategy."

Maintaining a high profile for the program's ideology through continued target setting, campaigning and public relations was reported to be necessary to sustain progress of the SPI efforts, Parand, et al. (2012) report. This included inspiring staff, reiterating the purpose of the implementation strategies, and ensuring the program's place in targets and on agendas. Most notably, the coordinators emphasized strategies to keep SPI aims high on managerial agendas, and feedback was described as especially important when disseminating information back to the top to enthuse senior management and to improve understanding of the program within the hospital. As the researchers note, "Resources, both people's time and funding, were mentioned as valuable, if not essential for the continuation of SPI. Strategies included securing such resources or integrating improvement activities into the present work and job descriptions of staff. Interviewees highlighted the importance of continued support to collect, collate, analyze and interpret data, especially in the form of posts filled by people whose entire role is around data collection and processing. Strategies included utilizing existing teams for extra resources, including the use of audit teams and informatics posts."

References:

Brennan SE, Bosch M, Buchan H and Green SE. Measuring organizational and individual factors thought to influence the success of quality improvement in primary care: a systematic review of instruments. Implementation Science 2012, 7:121.

Glasgow JM. A critical evaluation of healthcare quality improvement and how organizational context drives performance. PhD thesis, University of Iowa, 2013. http://ir.uiowa.edu/etd/2503.

Glasgow JM, Yano EM, Kaboli PJ. Impacts of organizational context on quality improvement. Am J Med Qual 2013;28(3):196-205.

Health Resources and Services Administration (HRSA). Redesigning a System of Care to Promote QI. Accessible at: http://www.hrsa.gov/quality/toolbox/methodology/redesigningasystemofcare/

Kliger J and the AHRQ Innovations Exchange Team. Sustaining and Spreading Quality Improvement. 2014. Accessible at: http://www.innovations.ahrq.gov/content.aspx?id=3433 

Øvretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20:i18-i23.

Parand A, Benn J, Burnett S, Pinto A and Vincent C. Strategies for sustaining a quality improvement collaborative and its patient safety gains. Int J Qual Health Care. 2012 Aug;24(4):380-90.

Rupp ME, Fitzgerald T, Sholtz L, Lyden E and Carling P. Maintain the Gain: Program to Sustain Performance Improvement in Environmental Cleaning. Infect Control Hosp Epidemiol. 2014;35(7).

Rupp ME, Huerta T, Cavalieri RJ, Lyden E, Van Schooneveld T, Carling P and Smith PW. Optimum Outlier Model for Potential Improvement of Environmental Cleaning and Disinfection. Infect Control Hosp Epidemiol. 2014;35(6):721-3.

Schouten L, Hulscher M and Grol R. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008;336(7659):1491-1494.