The Institute for Healthcare Improvement’s (www.ihi.org) Model for Improvement provides the basis for the commonly used quality improvement techniques of clinical audit and PDSA (plan, do, study, act) cycles.
Clinical audit is a systematic process involving the following stages:
Identify the problem or issue. The choice of audit topic should answer the question, ‘What needs to be improved and why?’ This is likely to reflect national or local standards and guidelines where there is definitive evidence about effective clinical practice. The topic should focus on areas where problems have been encountered in practice.
Define criteria and standards. Audit criteria are explicit statements that define what elements of care are being measured (e.g. ‘patients with asthma should have a care plan.’). The standard defines the level of care to be achieved for each criterion (e.g. ‘care plans have been agreed for over 80% of patients with asthma’). Standards are usually agreed by consensus but may also be based on published evidence (e.g. childhood vaccination rates that confer population herd immunity) or on the results of a previous (local, national or published) audit.
Monitor performance. To ensure that only essential information is collected, details of what is to be measured must be established from the outset. Sample sizes for data collection are often a compromise between the statistical validity of the results and the resources available for data collection and analysis.
Compare performance with criteria and standards. This stage identifies divergences between the actual results and the standards set. Were the standards met and, if not, why not?
Implement change. Once the results of the audit have been discussed, an agreement must be reached about recommendations for change. Using an action plan to record these recommendations is good practice. This should include who has agreed to do what and by when. Each point needs to be well defined, with an individual named as responsible for it, and an agreed timescale for its completion.
Complete the cycle to sustain improvements. After an agreed period, the audit should be repeated. The same strategies for identifying the sample, methods and data analysis should be used to ensure comparability with the original audit. The re-audit should demonstrate that any changes have been implemented and improvements have been made. Further changes may then be required, leading to additional re-audits.
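Under stated assumptions, the measure-and-compare stages of the audit cycle above can be sketched in a few lines of Python. The record structure, predicate and numbers below are purely illustrative (loosely echoing the asthma care-plan example), not part of any real audit.

```python
# Illustrative sketch of auditing performance against an agreed standard.
# The record fields and figures are hypothetical.

def compliance_rate(records, criterion):
    """Proportion of records meeting a criterion (a boolean predicate)."""
    met = sum(1 for r in records if criterion(r))
    return met / len(records)

def audit(records, criterion, standard):
    """Compare measured performance with the agreed standard."""
    rate = compliance_rate(records, criterion)
    return {"rate": rate, "standard": standard, "met": rate >= standard}

# Hypothetical data: patients with asthma, flagged if a care plan is recorded.
patients = [{"care_plan": True}] * 60 + [{"care_plan": False}] * 15

result = audit(patients, lambda p: p["care_plan"], standard=0.80)
print(f"{result['rate']:.0%} vs standard {result['standard']:.0%} "
      f"-> {'met' if result['met'] else 'not met'}")  # 80% vs standard 80% -> met
```

Defining the criterion as an explicit predicate mirrors the audit requirement that what is measured be established from the outset.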
Title of the audit: Audit of management of obese patients
Reason for the choice of topic: All team members have noted the increasing prevalence of overweight and obesity across the practice population.
Dates of the first data collection and the re-audit: 1 March 2013 and 1 September 2013
Criteria to be audited and the standards set:
Criterion – the health records of adults with a BMI >30 should contain a multi-component weight management plan.
Standard – 100%. According to National Institute for Health and Care Excellence guidelines, adult patients with a BMI >30 should have a documented multi-component weight management plan setting out strategies for addressing changes in diet and activity levels, developed with the relevant healthcare professional. The plan should be explicit about the targets for each of the components for the individual patient and the specific strategies for that patient. A copy of the plan should be retained in the health record and monitored by the relevant healthcare professional.
Results of the first data collection: Of 72 patients with documented BMI >30, only 8 (11%) had copies of weight management plans in their records.
Summary of the discussion and changes agreed: The results were reviewed at the next clinical governance meeting, where it was felt that hard copies for the paper record were less important than documentation of the process in the electronic record.
Results of the second data collection: Of 48 patients with BMI >30, 16 (33%) had documented weight management plans in their electronic record.
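The two data collections quoted above can be checked arithmetically; this tiny sketch simply re-derives the percentages reported in the example.

```python
# Arithmetic check of the audit figures quoted above.
first = 8 / 72    # plans documented at the first data collection
second = 16 / 48  # plans documented at the re-audit
print(f"First: {first:.0%}, re-audit: {second:.0%}")  # First: 11%, re-audit: 33%
```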
The PDSA cycle involves repeated, rapid, small-scale tests of change, carried out in sequence (i.e. changes tested one after another) or in parallel (different people or groups testing different changes), to see whether and to what extent the changes work, before implementing one or more of these changes on a larger scale.
The following stages are involved:
Plan. Develop a plan for the change(s) to be tested or implemented. Make predictions about what will happen and why. Develop a plan to test the change. (Who? What? When? Where? What data need to be collected?)
Do. Carry out the test by implementing the change.
Study. Look at data before and after the change. Usually this involves using run or control charts together with qualitative feedback. Compare the data to your predictions. Reflect on what was learned and summarise this.
Act. Plan the next test, determining what modifications should be made. Prepare a plan for the next test. Decide to fully implement one or more successful changes.
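The four stages above can be sketched as a loop. The change ideas and effect sizes below are invented purely to show the plan-predict-measure-act flow; a real project would replace the stand-in "measured" value with data studied on run or control charts.

```python
# Minimal sketch of sequential PDSA tests of change (all values hypothetical).
changes = ["prescription reminder", "recall letter"]  # tested in sequence
rate = 0.70  # invented baseline compliance

for change in changes:
    predicted = min(rate + 0.10, 1.0)  # Plan: predict what will happen and why
    measured = min(rate + 0.15, 1.0)   # Do: carry out the test (stand-in for real data)
    surprise = measured - predicted    # Study: compare the data to the prediction
    rate = measured                    # Act: adopt the change, plan the next test
    print(f"{change}: predicted {predicted:.0%}, measured {measured:.0%}")
print(f"Final compliance: {rate:.0%}")
```

Testing changes in parallel would simply run independent loops over different change lists for different groups.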
Title: Improving monitoring of azathioprine
Date completed: 1 June 2012
Description: This was a quality improvement project focusing on improving monitoring of commonly used disease-modifying antirheumatic (immunosuppressant) drugs (DMARDs, i.e. methotrexate, azathioprine) in the practice.
Reason for the choice of topic and statement of the problem: DMARDs are commonly prescribed under shared care arrangements with specialists. The general practitioner has a responsibility for ensuring that the drugs are appropriately monitored for evidence of bone marrow suppression and liver dysfunction.
Priorities for improvement and the measurements adopted: The aim of this quality improvement project was to improve monitoring of the two most commonly used DMARDs in the practice, methotrexate and azathioprine. The criteria agreed for monitoring were: methotrexate – full blood count and liver function tests performed within the previous 3 months; azathioprine – full blood count performed within the previous 3 months; renal function within the past 6 months.
Baseline data collection and analysis: The first data collection, presented in run and control charts covering weeks 1 to 6, showed inadequate blood monitoring of these drugs: rates of complete blood monitoring for the 10 patients involved (four prescribed methotrexate and six prescribed azathioprine) were around 70% (see Figures 5.3 and 5.4).
Quality improvement: The team met to plan how to measure monitoring and how to improve this. Clinical and administrative staff discussed the topic. During the baseline measurements for 6 weeks, improvements were planned. The first improvement introduced was a protocol for a search and prescription reminder for patients on these drugs. All patients receiving DMARDs were put on a 3-month prescription recall and an automatic prescription reminder to attend for blood monitoring at every 3-month recall was set up. Following an initial improvement to 80% compliance with monitoring it was decided to send a written recall letter for blood tests and a follow-up appointment with the doctor.
The results of the second data collection: The subsequent data collection showed monitoring rates consistently at 100%.
Intervention and the maintenance of successful changes: We provided a system for more consistent monitoring of DMARDs.
Quality improvement achieved and reflections on the process: This project enabled members of the practice to work together to establish a more reliable system for DMARD monitoring.
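The run charts mentioned in the baseline data collection rest on simple rules, such as looking for a ‘shift’: six or more consecutive points on one side of the baseline median, which suggests non-random change rather than week-to-week noise. The weekly rates below are invented, loosely echoing the 70%-to-100% trajectory described above.

```python
# Hypothetical weekly monitoring rates (values invented for illustration).
weekly_rates = [0.70, 0.68, 0.72, 0.70, 0.69, 0.71,   # baseline, weeks 1-6
                0.80, 0.85, 0.90, 1.00, 1.00, 1.00]   # after the changes

baseline = sorted(weekly_rates[:6])
median = (baseline[2] + baseline[3]) / 2   # median of the six baseline weeks

run, shift = 0, False
for rate in weekly_rates:
    if rate > median:
        run += 1
    elif rate < median:
        run = 0
    # points exactly on the median neither extend nor break a run
    shift = shift or run >= 6
print(f"Baseline median: {median:.0%}, shift detected: {shift}")
```

In practice the chart itself, together with qualitative feedback, is reviewed at the Study stage rather than reduced to a single rule.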
Significant event analysis (SEA) is the process by which individual cases, in which there has been a significant occurrence (not necessarily involving an undesirable outcome for the patient), are analysed in a systematic and detailed way to ascertain what can be learned about the overall quality of care and to indicate changes that might lead to future improvements (National Patient Safety Agency 2013).
SEA improves the quality and safety of patient care by encouraging reflective learning about individual episodes identified as ‘significant’ by one or more members of the healthcare team and, where necessary, by implementing change to minimise the recurrence of adverse events or to increase the likelihood of the positive events in question.
The aim of SEA is to learn from individual events and, where appropriate, to change behaviours, practices or systems to improve the quality and safety of care.
The following six steps describe the process of significant event analysis:
Identify and record significant events for analysis and highlight these at a suitable meeting. Enable staff to routinely record significant events using a logbook or pro forma.
Collect factual information including written and electronic records, the thoughts and opinions of those involved in the event. This may include patients or relatives or healthcare professionals based outside the practice.
Meet to discuss and analyse the event(s) with all relevant members of the team. The meeting should be conducted in an open, fair, honest and non-threatening atmosphere. Notes of the meeting should be taken and circulated. Meetings should be held routinely, perhaps as part of monthly team meetings, when all events of interest can be discussed and analysed, allowing all relevant staff to offer their thoughts and suggestions. The person chosen to facilitate a significant event meeting, or to take responsibility for an event analysis, will again depend on team dynamics and staff confidence.
Undertake a structured analysis of the event. The focus should be on establishing exactly what happened and why. The main emphasis is on learning from the event and changing behaviours, practices or systems, where appropriate. The purpose of the analysis is to minimise the chances of an event recurring. (On rare occasions it may not be possible to implement change – for example, the likelihood of the event happening again may be very small, or change may be out of your control. If so, clearly document why you have not taken action.)
Monitor the progress of actions that are agreed and implemented by the team. For example, if the head receptionist agrees to design and introduce a new protocol for taking telephone messages, then progress on this new development should be reported back at a future meeting.
Write up the SEA once changes have been agreed. This provides documentary evidence that the event has been dealt with. It is good practice to attach any additional evidence (e.g. a copy of a letter or an amended protocol) to the report.
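The logbook or pro forma suggested in step 1, and the write-up in step 6, can be supported by a structured record. The field names below are illustrative assumptions only, populated here with details taken from the example SEA that follows.

```python
# A sketch of a structured significant-event record; field names are
# illustrative assumptions, not a standard pro forma.
from dataclasses import dataclass, field

@dataclass
class SignificantEvent:
    date: str
    patient_identifier: str
    summary: str
    analysis: str = ""                                  # what happened and why
    action_points: list = field(default_factory=list)   # (action, responsible person)
    written_up: bool = False

event = SignificantEvent(
    date="2014-02-15",
    patient_identifier="1234",
    summary="Fasting glucose initially misread as normal on the template",
)
event.action_points.append(
    ("Adjust template to distinguish fasting and random glucose", "AB"))
print(len(event.action_points))  # 1
```

Naming a responsible person against each action point mirrors the monitoring step: progress can then be reported back at a future meeting.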
Example significant event analysis
Date of report: 12 March 2014
Patient identifier: 1234
Date of event: 15 February 2014
Summary of event:
While entering data on her template, Nurse X noticed a previous glucose of 7.7 mmol/L recorded on 3 February 2014. She initially assumed that this was normal because the template did not distinguish between fasting and random glucose tests. She checked the result and found that it was in fact a fasting glucose, which may have indicated diabetes. Nurse X explained the problem to the patient, apologised and checked whether she had any symptoms or complications. The patient was adhering to her diet and was asymptomatic. Nurse X arranged a repeat fasting glucose together with cholesterol, thyroid function, electrolytes and HbA1c, and booked a review of the patient with the results of these investigations. The fasting glucose came back as 8.8 mmol/L (normal less than 6.0 mmol/L), confirming diabetes.
The template was unclear and made this error more likely.
Agreed action points:
Adjust template to distinguish between fasting and random glucose.
Responsible person: AB
Clinical audit, PDSA and SEA all involve gaining a deeper understanding and reflecting on what we are trying to achieve and what changes can be made to improve.
Clinical audit and PDSA use a measurement process before and after implementing one or more changes to assess whether improvement has actually occurred: in clinical audit this is usually a single measure before and after the change, whereas PDSA involves continuous repeated measurement using statistical process control with run or control charts. The main difference between clinical audit and PDSA is that audit involves implementation of change after the first measurement followed by a further measurement, whereas PDSA involves continuous measurement during implementation of multiple changes conducted in sequence (i.e. one after the other) or in parallel (i.e. different individuals or groups implementing different changes at the same time).
SEA should ideally lead to changes in policy or practice but does not involve measuring the effects of these changes.