B-6 Evaluation competency
“Has my intervention achieved the goal we set? Are there side effects, and are they worse than the benefit? What should we do now?” Answering these questions is the last step in the rehabilitation cycle; the evaluation determines the next plan. The decision will likely be one of continuing the same path, trying something else, or stopping attempts to resolve this problem. The judgement between these choices requires information on the changes in factors associated with the situation. The rehabilitation expert must know how to make simple, relevant measurements of change, usually ones that the patient and family can undertake. They need the skills to simplify data collection and engage the patient and family. Equally important is a shift in attitude from the taught features of data collection, such as using standardised, reliable measures, to using ecologically valid, feasible, patient-centred measures focused on the crucial component. This page discusses some aspects of this competency. As always, repeated practice will significantly improve a trainee’s expertise.
The competency is that the rehabilitation professional is “Able to measure and judge the benefits and harms associated with treatments using simple, patient-centred measures.” This is set in the context of clinical interventions with individual patients being treated in various settings.
Measure, assessment, data collection tool.
Below, I have reproduced the similarities and differences between measurement and assessment from the page on competency in assessment (B-3).
A measure quantifies a phenomenon, sometimes compared with a standard extent of the phenomenon (e.g. a gram against the standard kilogram). Most rehabilitation measures have no universal metric; they can be compared between people or populations, but they are relative, not absolute. The outcome of measurement has no intrinsic meaning; it is an observation that requires separate interpretation.
An assessment is, strictly speaking, the process of collecting data for some purpose, such as determining the amount of tax you owe or, in medieval times, the number of soldiers you supplied to the King. In rehabilitation, the purpose of assessment is to enable a formulation of the situation so that (a) one can identify the main causative, prognostic, and treatment-determining factors and (b) one can make a rehabilitation plan with appropriate patient-centred goals. It is also often used to refer to a structured presentation of data in a form, which is an intermediate step. Thirdly, the word assessment is often used interchangeably with measure; this is incorrect.
Therefore, it is better to refer to a data collection tool, which is precisely that – a tool, usually a form, used to collect data. Sometimes the data can be quantified, in which case the data set is also a measure. Often, the data are collected as part of the assessment process, in which case the data set could be referred to as an assessment data set. However, the data collected are not themselves an assessment.
On this page:
- An assessment refers to the process of collecting data to help make a better formulation of a patient’s situation and rehabilitation plans.
- A measure relates to a specific set of data which can be quantified.
- A structured assessment involves collecting and organising data into a fixed framework, often in a form.
- A data-collection tool collects data items as part of assessment, measurement, or both.
What is evaluation?
We frequently say that we will evaluate something, such as a course of therapy given to a patient. The word means “the making of a judgement about the amount, number, or value of something; assessment” [Oxford English Dictionary].
The vital point to note is that it involves a judgement and, although this is not stated, the judgement appertains to deciding what actions will follow. For example, one may judge the intervention as ineffective and so stop it. You will note that the OED gives assessment as a synonym, illustrating that assessment and evaluation share a common focus on the use of data.
Thus, in rehabilitation, we will take evaluation as “deciding on further rehabilitation based on how much an intervention has had its desired effect and whether the balance between benefit and harm is positive”.
I must emphasise that this evaluation cannot prove that the change is due to the intervention. The evaluation considered here concerns the change in the patient associated with an intervention. Direct clinical observations cannot determine any causal link between the intervention and the change. Evidence concerning causation comes from research designs that allow one to establish that the intervention caused the change. Although one can conclude causality from studies on individual patients, this depends upon using a suitably designed single-case analysis, which takes time and effort and is rarely feasible in daily clinical practice.
You will already evaluate treatments globally. You probably always ask your patient whether they have been helped by your intervention, whether it is a course of treatment or contacting someone to do something. You are evaluating your action in the simplest way possible: just asking. This approach can be formalised using Global Assessment of Change scales, usually a 100 mm line or a numerical rating scale with 0 equating to no change. They ask, “compared with how you were before, are you now … much better, the same, or much worse?” The scale data correlate with other observed changes, for example, in people with neck pain.
You often need to establish whether the target impairment or activity has changed; for example, did the baclofen prescribed reduce spasticity, improve walking speed, or reduce pain? Again, simple measures such as visual analogue and numerical rating scales allow targeted measurement of most phenomena. There are many studies on rating scales for pain; they can also be used to measure most other phenomena, such as fatigue after stroke.
Goal attainment scaling is often proposed as a universal measure of change. It is appropriate for measuring the change in an individual patient, helping decide what to do. However, it is often misused, for example, to determine whether rehabilitation should be continued or funding should be stopped. I have written about the significant risk of misuse, and anyone considering goal attainment scaling should be aware of its limitations and risks.
Sometimes you may be interested in a broad range of activities, in which case multi-item measures such as the Barthel Activities of Daily Living (ADL) index, Rivermead Mobility Index, or Frenchay Activities Index may be helpful. These are less useful when evaluating rehabilitation for individual patients but are often used to study groups of patients, for example, when evaluating a service or undertaking research.
In summary, this competency mainly concerns evaluating the effects of interventions on individual patients, with independent evidence being used to deduce causality. In that context, simple measures focused on the item of interest are needed rather than broader multi-item scales.
Measuring effects in a patient.
There are innumerable books, articles, and reviews covering measurement and measures. They cover issues such as validity, reliability, sensitivity, etc. When working with a patient, the pre-eminent feature of a measure is its feasibility – can it and will it be used? To be feasible within routine clinical work, the measure must be short, simple, and self-evidently relevant to the patient (ecological validity).
There are various other considerations which I mention below.
Benefit and harm
While the main concern is with benefit, one must acknowledge that many interventions may carry risks of harm. Some risks, such as reduced cognitive skill secondary to anti-spasticity drugs, are dose-dependent, while others are stochastic: events, such as a fall, that may occur by chance.
When evaluating an intervention, one should choose one measure of benefit. If there are known or predictable harms, one should measure one or more to enable the person to decide whether the benefit is worth the associated harm.
Focus on the item(s) of interest.
The patient and the clinician should already have identified what they expect an intervention to affect, and the data collected should reflect the area of interest directly or as closely as possible. For example, if trying to improve mobility so that a person can walk to and from the shops, do not use the Rivermead Mobility Index “because it is a well-known, valid, short, simple, standardised measure.” You should measure walking to and from the shops.
Validity is not an intrinsic feature of a measure; validity is concerned with how closely the data collected match the construct you are interested in. If the patient is interested in walking from house number 31, Sycamore Avenue, to the Cooperative shop at 12, High Street, then measuring the distance walked on the route, the time it takes, or the amount of shopping carried are all valid measures.
Quantify the performance.
Most activities of interest can be quantified simply using one or more methods:
- Timing. You can take a specified task or action and time how long the person takes to complete it. For example, how long does someone take to:
  - move from lying in bed when awake to starting to eat breakfast?
  - walk from the kitchen stove to the front door?
  - complete the quick crossword in a specified newspaper?
- Counting. You can take some repeated or identifiable action and count how often it occurs over a specified time. For example:
  - How many pieces of cutlery can the person move from the kitchen table into the cutlery drawer in 5 minutes?
  - How many falls does someone have in a week?
  - How far can someone count before running out of breath?
- Rating performance. The patient and/or a carer or relative can rate the performance of an activity using, for ease, a numerical rating scale with defined steps. For example, on a scale of 0 (complete failure) to 10 (entirely as expected), how well did the person
  - speak when asking a question?
  - feed themselves at a table?
  - do the washing up?
Most people vary from day to day or hour to hour. Consequently, if only a few measurements are made, chance alone may explain differences, and both patient and clinician may draw invalid conclusions. Daily (or more frequent) measurements will show the patient that variability is expected.
Daily measures are usually too onerous; for most interventions, three or four measurements each week is satisfactory. Over time, the trend can be seen; if it is positive, measuring gives additional feedback and may motivate the patient to practice more.
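The point about variability can be illustrated with a short calculation. The sketch below, in Python, takes a series of walking times recorded three times a week over four weeks and compares the overall trend with the day-to-day spread; the function name and all the numbers are invented for illustration, not drawn from real patient data.

```python
from statistics import mean, stdev

# Hypothetical example: time (seconds) to walk from the kitchen stove
# to the front door, measured three times a week for four weeks.
times = [38, 41, 36, 37, 39, 34, 35, 33, 36, 31, 32, 30]

def trend_per_measurement(values):
    """Least-squares slope: the average change per measurement occasion."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

slope = trend_per_measurement(times)   # negative = getting faster
spread = stdev(times)                  # day-to-day variability

print(f"Trend: {slope:.2f} s per occasion; spread: {spread:.1f} s")
```

A slope that is small relative to the spread suggests chance variation rather than genuine change; with only two or three measurements, no such comparison is possible at all.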
Measurement by the patient (or carer).
Asking the patient or a family member to make the measurement has many advantages. It provides many more measurements because it can be done daily. It increases engagement because the clinician and patient must discuss and agree on an activity to measure, making it likely that the patient will measure and undertake the activity. It also ensures that the measurement is made in the relevant, everyday context.
The last matter to discuss is recording the measures. At a minimum, the data need to be written down with the date and, if relevant, time of day. Additional information can be helpful, and the person should be encouraged to write down any comments or observations they consider of interest. The comments may increase your understanding of the patient’s situation and lead to a better intervention.
Many patients will have access to computers, tablets, or smartphones and may prefer to record data and comments in a spreadsheet or word processor. Some may plot data on a graph.
As the clinician receiving the data, you should plot them using a spreadsheet: this may give you a better overview, and demonstrating your interest will further motivate the patient.
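A patient's diary of dated ratings and comments can also be summarised very simply. The sketch below, in Python, groups 0–10 ratings by week and averages them so that a trend is easier to see than in a raw list; the dates, values, and comments are invented for illustration.

```python
from datetime import date

# Hypothetical diary entries: (date, 0-10 rating, optional comment).
diary = [
    (date(2024, 5, 6), 4, "tired after poor sleep"),
    (date(2024, 5, 8), 6, ""),
    (date(2024, 5, 10), 5, "carried one shopping bag"),
    (date(2024, 5, 13), 7, ""),
    (date(2024, 5, 15), 7, "best day so far"),
]

def weekly_means(entries):
    """Group ratings by ISO week number and average each week's values."""
    weeks = {}
    for day, value, _comment in entries:
        weeks.setdefault(day.isocalendar().week, []).append(value)
    return {week: sum(vals) / len(vals) for week, vals in weeks.items()}

for week, avg in weekly_means(diary).items():
    print(f"Week {week}: mean rating {avg:.1f}")
```

Keeping the free-text comments alongside the numbers matters: a low rating explained by "tired after poor sleep" is interpreted quite differently from an unexplained one.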
The vital feature of evaluation is making a judgement about the measured outcome; measurement, which we have now discussed, is a part of the evaluation process, but it is not the goal. The goal is to decide on the next steps.
Learning the competency.
The behaviours, knowledge, and skills associated with this competency and a list of relevant references can be downloaded.
The knowledge needed to evaluate is primarily related to knowing the benefits and risks associated with rehabilitation interventions.
The primary skills needed concern the process of setting goals and being inventive in generating activities to measure that can be timed, counted, or rated.
For many people, significant adaptations in attitude are also needed. One must understand and be unconcerned about the inability to be definitive about cause and effect; it is easiest to say that the change was associated with the intervention. One must also divorce the scientific need for control data and unbiased data collection from the clinical need to measure changes associated with interventions.
In other words, the patient’s performance and reported experience are the central concerns. Far from being subjective, unreliable, and invalid, measures tailored to the patient, including quantification and qualitative reports, are the only essential and valid measures.
As usual, the primary way to become competent is to practice, seek feedback from patients and colleagues, reflect on your experience, and adapt your practice in light of these processes.
Although the process described is relatively straightforward, feasible within everyday work, and helpful, some cautions must be highlighted. The evaluation process I have described is the ideal. This does not mean that every intervention should be evaluated in this way.
When learning professional expertise, one is taught how an activity should be undertaken in full whenever the action is required. To become confident and capable, one must repeatedly practice the activity in its ideal form until it is ingrained. However, it is well established that experienced experts only use the full version of any procedure when necessary and usually carry out a much-reduced version.
Consequently, someone in training should undertake the process fully until they can do it without difficulty. After that, they should learn when it is unnecessary to evaluate fully in this way and not feel guilty or that their performance is poor.
In addition, rehabilitation is a team activity, and several people may undertake interventions simultaneously. It would be a significant stress on the patient to be involved in collecting data on several different outcomes. Thus, any detailed evaluation contemplated should be discussed with the team. Further, the team should consider whether a team-level evaluation might be more appropriate, possibly using a multi-item scale.
Specific uses of evaluation
On the other hand, there are some circumstances when formal evaluation procedures may help achieve some other goal.
Suppose a patient continues to ask for treatment despite being informed that you do not think it will help. In that case, you could reasonably agree to a trial of treatment, negotiating a defined time, data set, and criteria showing unexpected benefit (or expected harm). This respects the person’s wishes without agreeing that they are correct. However, you should not use this technique if you consider there is a significant risk of harm.
Alternatively, suppose a patient will not accept your recommended treatment which you consider likely to help. In that case, you might persuade them to participate in a trial of therapy with agreed criteria on what would justify continuation.
In both these situations, it is best to plan, before starting, for the consequences of the likely outcomes by agreeing that if the result is ‘A’, then we will do ‘X’, and so on. This is a helpful approach when starting or continuing life-sustaining treatment in someone with a poor prognosis, where there is a possibility of stopping treatment.
The key to the evaluation of rehabilitation is to be patient-centred when collecting data. Your scientific knowledge about the intervention will guide you in choosing the particular intervention and discussing with the patient what is likely to change. Your expert skills will help you devise feasible, simple measures, which may include goal attainment scaling provided the cautions I have emphasised are considered. Your wisdom will help you judge what to do next, the outcome of the evaluation.