Measurement

All work is a process. Measurement is itself a process, one that not only helps us to assess other processes but can also be used to drive improvement.

The structure (e.g. the equipment and materials used or the work environment), processes (e.g. how people work and their work methods) and outcomes of care are all important for evaluating quality.

Different types of improvement project use different types of measurement, but not all are equally useful or informative. Measurements are generally of three types:

  • before-and-after studies,
  • continuous assessments and
  • comparative assessments.

Clinical audits characteristically employ before-and-after measurements, in which performance against standards is compared before and after an intervention. The advantages of this approach are that it gives the analyst a target to aim for and that the data are usually simple to analyse and present. A disadvantage is that the standard may be arbitrary, which can lead to gaming or unintended consequences. Furthermore, and most importantly, a change between a single measurement taken before an intervention and a single measurement taken after it may be an artefact of measurement rather than a real improvement.

Quality improvement projects tend to use data that measure processes repeatedly over time. Rather than just two measurements (i.e. before and after), multiple measurements are taken before, during and sometimes after the intervention. Relatively simple statistical methods are then used to assess whether the process is showing only natural (random) variation over time, to show the extent of that variation and to detect whether real improvement (over and above natural variation) is occurring.

The advantage of this approach to measurement is that it helps us understand whether real improvement has taken place and it can demonstrate the extent of this improvement. It avoids interpreting natural variation as real change, and it enables us to see the effects of multiple interventions over time. The disadvantage of this method is that measurements need to be taken repeatedly during the process of change and some basic analytical concepts and techniques need to be learned.

Measurement comparing clinical audit and PDSA

Understanding variation

Every measure of a process, a combination of processes or an outcome will show variation over time. Variation is therefore part of any process: it is inevitable and ubiquitous, but it is amenable to measurement and control. If we want to demonstrate improvement, it is essential to select the key variables (outputs or outcomes) whose measurement will signify improvement in quality.

The natural variation of a stable process, in the absence of external factors or deliberate attempts to improve it, is called ‘common cause variation’. We see common cause variation in, for example, repeated measures of blood pressure, where it may arise from changes in the physiological state of the individual, subtle differences in measurement technique or the response of the measuring instrument.

Variation that falls outside this ‘common cause variation’ is termed ‘special cause variation’; it is caused by an ‘external’ factor, whether planned or unplanned, intended or unintended.

Responding to variation

Any improvement in a healthcare process requires a change to that process which reduces the effect of ‘common cause variation’ and triggers a ‘special cause variation’ representing a significant improvement.

Responding to common cause variation as though it were special cause variation has the opposite effect to that intended: it may actually increase the variation in the system. This is called ‘tampering’. An example of tampering is an organisation responding to a single reduction (or increase) in a measure before checking whether the change is anything more than common cause variation.

A strategy for correctly responding to special causes calls for investigation and explanation, which will sometimes lead to specific changes depending on the special cause identified.

Common cause variation requires a different approach. A common cause strategy first requires us to explore the variation more closely, using stratification to reveal any special causes. Next, one should seek to understand the variation through the processes and systems that give rise to it. Finally, we should redesign processes to reduce inappropriate and unintended variation in an agreed measure in a way that is responsive to patients’ needs.

Run charts

Run charts are the simplest way of plotting data in chronological order.

Data for a particular indicator are plotted as dots (data points) on a simple graph, with time plotted on the x-axis and the value of the indicator plotted on the y-axis.

The time intervals should be ordered and sequential but not necessarily equal; they are often regularly spaced but need not be. At least 16 dots are usually required to see whether a process is stable. The dots are connected by lines, and a centre line, the median, is drawn.
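To make the construction concrete, the minimal sketch below (in Python, with invented monthly figures) plots such a chart with the median as the centre line; the data and labels are purely illustrative.

```python
# A minimal run chart: hypothetical values plotted in time order, joined by
# lines, with the median drawn as the centre line.
import statistics
import matplotlib.pyplot as plt

months = list(range(1, 25))                          # 24 sequential time points
values = [12, 14, 11, 13, 15, 12, 14, 13, 12, 15, 14, 13,
          11, 10, 9, 10, 9, 8, 9, 10, 9, 8, 9, 8]    # hypothetical indicator

median = statistics.median(values)

plt.plot(months, values, marker="o", linestyle="-", label="Indicator")
plt.axhline(median, linestyle="--", label=f"Median = {median}")
plt.xlabel("Month (sequential)")
plt.ylabel("Indicator value")
plt.title("Run chart (hypothetical data)")
plt.legend()
plt.show()
```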

A run chart showing hypnotic prescribing data for a single general practice is shown.

Rules for run charts

A ‘run’ is one or more consecutive data points on the same side of the median. Common cause variation appears in a run chart as runs distributed randomly about the median.

Three simple statistical rules have been developed to show whether there has been a significant change in a measure over time (i.e. a special cause variation).

These rules are helpful because they prevent individuals or groups just ‘eyeballing’ a chart of measurements over time and misinterpreting them. Following certain rules leads to consistent interpretation of what constitutes a significant change over time. This also prevents an inappropriate response to common cause variation as if it were a special cause.

The three rules that identify the most important types of special cause variation are shifts, trends and runs:

A shift is a sequence, or ‘run’, of seven or more dots all above or all below the median.

A trend is a sequence of seven or more dots all going upwards or all going downwards (dots on the same level are excluded from the count).

A run is a sequence of consecutive dots on the same side of the median. When only common cause variation is present, runs should be randomly distributed about the median. We can therefore check whether the number of runs falls within expected limits, which depend on the total number of dots in the chart, using a probability table for runs.
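As an illustration only, the sketch below codes the shift and trend rules with the thresholds given above, and estimates the expected range for the number of runs using the Wald-Wolfowitz runs test as a stand-in for the probability table; the function names and the 95% limits are assumptions made for the example.

```python
# Illustrative checks for the three run chart rules (shift, trend, runs).
import math

def sides_of_median(values, median):
    """Code each point as +1 (above) or -1 (below) the median; points that
    fall exactly on the median are ignored."""
    return [1 if v > median else -1 for v in values if v != median]

def count_runs(sides):
    """A run is a maximal group of consecutive points on the same side."""
    if not sides:
        return 0
    return 1 + sum(1 for prev, cur in zip(sides, sides[1:]) if cur != prev)

def has_shift(sides, length=7):
    """Shift: `length` or more consecutive points on one side of the median."""
    streak = 1
    for prev, cur in zip(sides, sides[1:]):
        streak = streak + 1 if cur == prev else 1
        if streak >= length:
            return True
    return False

def has_trend(values, length=7):
    """Trend: `length` or more dots all rising or all falling; consecutive
    repeated values are dropped before counting."""
    if not values:
        return False
    deduped = [values[0]] + [v for prev, v in zip(values, values[1:]) if v != prev]
    streak, direction = 1, 0
    for prev, cur in zip(deduped, deduped[1:]):
        step = 1 if cur > prev else -1
        streak = streak + 1 if step == direction else 2
        direction = step
        if streak >= length:
            return True
    return False

def run_count_limits(n_above, n_below, z=1.96):
    """Approximate 95% limits for the number of runs expected under common
    cause variation (Wald-Wolfowitz runs test)."""
    n = n_above + n_below
    mean = 1 + 2 * n_above * n_below / n
    var = 2 * n_above * n_below * (2 * n_above * n_below - n) / (n ** 2 * (n - 1))
    sd = math.sqrt(var)
    return math.ceil(mean - z * sd), math.floor(mean + z * sd)
```

Under this check, fewer runs than the lower limit suggests clustering about a new level (as in a shift), while more runs than the upper limit suggests oscillation; either pattern would be treated as special cause variation.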

Probability table for runs

Run chart showing special causes

Two special causes are shown in the run chart, which represents the rate of hypnotic drug prescribing in a general practice.

In the sequence of 25 dots there is a run of 11 dots below the median from January 2008, indicating a shift.

In addition, there are only four runs, when we would expect between 10 and 16.

Control charts

A control chart is a chronologically ordered sequence of at least 15–20 data points, with the data points connected by lines, a centre (mean) line, and upper and lower control limits.

A control chart is a more sophisticated form of run chart. Control charts are more sensitive at detecting special causes but are also a little more complex and require greater resources. The principles of their construction and interpretation are very similar to those of run charts.

Data for a particular indicator are plotted as dots on a simple graph, with time plotted in chronological sequence on the x-axis and the value of the measurement or indicator of interest plotted on the y-axis. The time intervals are sequential and often regularly spaced but need not be. The dots are connected by lines but this time the centre line is the mean line.

The control chart has two additional lines: the upper and lower control (or sigma) limits, which define the variability in the data. They are different from confidence intervals or standard deviations and should not be confused with these.

Control limits are lines representing estimates of the dispersion or the boundaries in the data; standard deviations are used in specific charts of normally distributed physiological variables such as glucose or cholesterol. The mean and control limits should be calculated according to the population, sample size and type of data (i.e. the normal distribution for biological variables such as blood pressure, the Poisson distribution for count data, and the binomial distribution for yes/no or percentage performance data). This is usually done using computer software.
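As a hedged sketch of what such software computes, the example below applies the standard three-sigma formulas for a c-chart (Poisson count data) and a p-chart (binomial yes/no or percentage data); the function names and figures are invented for illustration.

```python
# Three-sigma control limits for two common chart types.
import math

def c_chart_limits(counts):
    """c-chart (Poisson count data): the centre line is the mean count and
    the limits are mean +/- 3 * sqrt(mean)."""
    c_bar = sum(counts) / len(counts)
    sigma = math.sqrt(c_bar)
    return c_bar, max(0.0, c_bar - 3 * sigma), c_bar + 3 * sigma

def p_chart_limits(successes, denominators):
    """p-chart (binomial yes/no or percentage data): the centre line is the
    overall proportion; the limits vary with each subgroup's denominator."""
    p_bar = sum(successes) / sum(denominators)
    limits = []
    for n in denominators:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Hypothetical example: monthly counts of hypnotic prescriptions.
monthly_counts = [32, 28, 35, 30, 27, 33, 29, 31, 26, 30, 28, 34]
centre, lcl, ucl = c_chart_limits(monthly_counts)
print(f"c-chart: centre {centre:.1f}, limits ({lcl:.1f}, {ucl:.1f})")
```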

Rules for control charts

The following represent special causes (a brief code sketch of these checks follows the list):

  • Single point outside the control limit
  • Shift: eight or more consecutive data points all above or all below the mean
  • Trend: six or more data points going successively up or successively down, including those on the mean but not counting repeated values
  • Abnormal variability: two out of three successive values more than two sigma limits from the mean
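The sketch below illustrates the two checks not already covered by the run chart rules; the shift and trend checks follow the same pattern as the run chart sketch, with thresholds of eight and six and the mean as the centre line. The function names are assumptions made for the example.

```python
# Illustrative checks for the first and fourth control chart rules.
def any_point_outside_limits(values, lcl, ucl):
    """Rule 1: a single point beyond the upper or lower control limit."""
    return any(v < lcl or v > ucl for v in values)

def two_of_three_beyond_two_sigma(values, mean, sigma):
    """Rule 4: two out of three successive values more than two sigma
    from the mean."""
    flags = [abs(v - mean) > 2 * sigma for v in values]
    return any(sum(flags[i:i + 3]) >= 2 for i in range(len(flags) - 2))
```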

Control chart showing special causes

The control chart shows hypnotic prescribing data for the single general practice corresponding to the previous run chart.

The second dot in the sequence falls above the upper control limit.

A shift, a sequence of eight dots below the mean (from January 2008), is also present.

This practice has reduced its hypnotic drug prescribing significantly.

Funnel plots

As well as looking at an indicator across a period of time, control charts can also be used to compare organisational units at a single point or during a fixed period of time. In this type of control chart, organisational units are arranged on the x-axis with their performance as a count, rate or proportion (or percentage) on the y-axis. The mean is represented and control limits are calculated for each organisational unit based on all the data provided.

The control limits are denoted by the outer curved lines which, when the data are arranged according to the size of the sample denominator provided by each organisational unit, produce a funnel plot.

The funnel plot shows care bundle performance for acute myocardial infarction (AMI or heart attack) in 12 larger regional ambulance services in England. Each service is represented by a dot numbered from 1 to 12. The sample denominator varies from a few cases (service 12) to over 200 cases (service 7) in the month that performance is measured. The mean performance across all the services is around 45%.

The control limits are wider for services with small samples of AMI and become narrower as the sample size increases. This produces the characteristic funnel shape of the control limits. Because this looks like the bell of a trombone, funnel plots are sometimes referred to as ‘trombonograms’.
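As an illustrative sketch only (the figures below are invented and do not reproduce the ambulance service data), binomial funnel limits can be computed from the overall proportion and each unit's denominator:

```python
# Funnel plot limits for percentage (binomial) performance data: the overall
# proportion is the centre line, and the limits narrow as each unit's
# denominator grows, giving the funnel shape.
import math

def funnel_limits(overall_p, denominators, z=3.0):
    """Approximate limits at +/- z standard errors for each denominator."""
    limits = []
    for n in denominators:
        se = math.sqrt(overall_p * (1 - overall_p) / n)
        limits.append((max(0.0, overall_p - z * se), min(1.0, overall_p + z * se)))
    return limits

# Hypothetical units: cases receiving the full care bundle / total cases.
bundle_delivered = [10, 45, 80, 30, 55, 12]
total_cases = [20, 110, 210, 60, 120, 25]

overall = sum(bundle_delivered) / sum(total_cases)
for n, (lo, hi) in zip(total_cases, funnel_limits(overall, total_cases)):
    print(f"n={n:>3}: limits {lo:.2f} to {hi:.2f} around overall {overall:.2f}")
```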

Systems

  • The healthcare system describes a complex interacting network of healthcare staff, organisations and technologies.
  • Positive change in health systems depends on these interactions.
  • A focus on systems can help to design strategies for better and safer healthcare that are less reliant on individual actions and effort.
  • Successful improvement and its spread require an understanding of the system, including the context for improvement, the networks and interactions involved, and how to deal with complexity.

Healthcare including primary care has become more complex because of shifts from secondary to primary care, greater availability of drugs and other health technologies, and higher levels of multimorbidity. Many different players and processes now contribute to care of chronic diseases, encouraging the need to ‘integrate’ different elements of the healthcare system to form a coherent whole.

A system is a set of interdependent and interacting elements or actors, together with the context in which they operate, that seeks to achieve a common aim.

Healthcare systems may be considered in terms of size and complexity as:

  • macro-level (healthcare organisations interacting at a geographical, regional or national level)
  • meso-level (healthcare organisations themselves) or
  • micro-level (groups of clinical and/or non-clinical staff working together within an organisation or a healthcare setting) – clinical microsystems.

Clinical microsystems comprise groups of clinicians interacting to provide specific types of care for patients. The clinical microsystem is made up of the actors in the system, how they relate to their patients and one another, and the context in which they do this. Contextual factors include regulation, payments and resources, leadership, culture, training, capability and aims or targets.

The design of the healthcare system can profoundly affect quality of care, more so than the individuals or elements of care from which the system is formed.

This leads to the oft-repeated adage of health systems experts, that ‘every system is perfectly designed to get the results it achieves’.

Much more than individual workers in the system, it is the design of the health system that is critical to its success or failure – and for improvement to occur attention needs to be paid to the system and its (re)design.

Networks

Natural networks can be conceptualised as a web, with individuals or organisations as vertices (the points where lines meet) and social interactions as edges (the lines connecting them).

Natural networks often behave in unpredictable ways: the boundaries of these networks are vague or fuzzy because members of a network often interact or are associated with other groups or organisations; the individuals and groups adapt and co-evolve with others in response to a variety of stimuli; their actions are often based on tacit internalised rules as well as explicit ones.

Complex interactions within and between natural networks can also lead to novel behaviours in response to external forces. This is because responses to various stimuli are often non-linear and unpredictable rather than simple, linear cause-and-effect reactions. Different types of intervention (tools, communication, behaviours and so forth), rather than simple levers, need to be employed to influence networks: these so-called ‘attractor patterns’ involve subtler efforts at attracting rather than directing behaviour change.

The characteristics of networks within complex systems include self-organisation, weak interactions and informal communications. The central nodes, or hubs, in natural networks are opinion leaders; these key individuals are often but not always in leadership positions, but they are always better connected, have greater influence on others and are therefore important as change agents.

Communication in natural networks is predominantly informal rather than formal; messages that are heard and conveyed by recipients are those that have natural appeal and are termed ‘sticky’. For more complex information to be accessible and sticky, it needs to be organised and simplified into natural categories or maps. Such information eventually becomes part of the collective knowledge reaching a natural ‘tipping point’, where it is so well diffused that it becomes acted upon.

There are various barriers and facilitators to communication between networks and their members, such as professional identity, organisational culture, homophily (attraction to those who are similar to us in various attributes) and communication style.

An example of a healthcare network is a clinical community focused on quality improvement, such as the quality improvement collaborative.