Assessing service expertise
Dr Shipman, a UK general practitioner (family doctor), murdered at least 215 patients. Yet, in the early stages of the investigation, many of his patients protested, praising the quality of his care. How did they judge that quality? More importantly and more generally, how can we assess the clinical standards of a healthcare service such as rehabilitation?
Skills such as good communication clearly, and rightly, influence how professional practice is judged. However, deep knowledge and broader skills are also crucial. The UK General Medical Council has introduced a system of medical appraisal to monitor and improve doctors’ professional performance; many people question the effectiveness of this system, which may cost £100M yearly. This investment in a single profession should be contrasted with the near-total lack of evaluation of the clinical knowledge, skills, and performance of specialist services.
Service structure and process are monitored externally, but we pay little attention to monitoring the clinical competence of teams and services. For example, how can you evaluate the clinical competence of an intensive care unit or obstetric care service? This post discusses this challenge for rehabilitation services. I became interested in this topic after working with Dr John Burn and many others on the British Society of Physical and Rehabilitation Medicine’s document, “Rehabilitation and Complex Disability Management in Specialist Nursing Homes and Other Residential Units: Guidance to Best Practice.”
Introduction
Many people need rehabilitation services. Disability, not disease, is now the leading challenge for health and social care services. However, only a minority of people experience multi-professional rehabilitation services. Most are seen by single services, such as community physiotherapy, outpatient speech and language therapy, or an occupational therapist who arranges hospital discharge.
Organisations outside the NHS have developed many small rehabilitation services in the UK. These services are usually residential and centred on a particular patient group, such as people with brain injury and challenging behaviour. Patients may be referred and paid for by various agencies, such as the NHS, social services, insurance companies, and private case managers.
These commissioning organisations rarely understand rehabilitation and cannot judge the clinical quality of the service: Does it deliver safe and effective rehabilitation? This is unsurprising. They also have no means of judging the clinical competence of any other services they pay for, such as complex cardiac surgery or psychiatric intensive care.
What service assessments exist?
Skills for Care, an organisation that evaluates social care, lists the various methods used to assess social care services. These include many methods drawn from healthcare, such as NICE quality standards.
The general approach is usually based on Donabedian’s structures, processes, and outcomes framework. This framework concerns a service’s physical and organisational structures, bureaucratic and clinical processes, and patient outcomes.
The National Institute for Health and Care Excellence (NICE) also bases almost all its guidelines, indicators, and quality standards on disease, often a specific disease such as chronic obstructive pulmonary disease. Although the quality standards may mention rehabilitation, they do not clarify what is meant or how its quality should be assessed.
The Care Quality Commission monitors all health and social care organisations in the UK. They aim to “make sure health and social care services provide people with safe, effective, compassionate, high-quality care and we encourage care services to improve.”
Their fundamental standards cover:
- Person-centred care, meeting needs and preferences
- Visiting and accompanying (by relatives and friends)
- Dignity and Respect (for the patient)
- Consent (to be treated)
- Safety: “Providers must assess the risks to your health and safety during any care or treatment and make sure their staff have the qualifications, competence, skills and experience to keep you safe.”
- Safeguarding from abuse
- Food and drink
- Premises and equipment
- Complaints
- Good governance
- Staffing: “The provider of your care must have enough suitably qualified, competent and experienced staff.”
- Fit and proper staff: “The provider of your care must only employ people who can provide care and treatment appropriate to their role. They must have strong recruitment procedures in place and carry out relevant checks such as on applicants’ criminal records and work history.”
- Duty of Candour
- Display of ratings
There is a brief reference to the professionals having appropriate expertise, but the CQC does not assess whether the staff have it. More importantly, they do not evaluate the quality of the service team.
Founded in 1966 in the United States and now working internationally, the Commission on Accreditation of Rehabilitation Facilities (now known simply as CARF) monitors the quality of rehabilitation services. “Through accreditation, CARF assists service providers in demonstrating value by the quality of their services and meeting internationally-recognized organizational and program standards.”
CARF’s standards are summarised in the acronym of its ASPIRE to Excellence® Quality Framework:
- Assess the environment
- Set strategy
- Persons served and other stakeholders: obtain input
- Implement the plan
- Review results
- Effect change
They say, “This quality framework focuses on integrating all organizational functions while effectively engaging input from all stakeholders, including persons served. It provides a logical, action-oriented approach to ensure that organizational purpose, planning, and activity result in the desired outcomes.”
CARF does not assess service competence; it assesses structures and processes.
Peers assessing service expertise
Review by peers is the only approach I know of for evaluating the clinical performance of professionals and services. It was used in England when trauma services were introduced in about 2012: people with clinical expertise would spend a day interviewing and exploring a local service and then give feedback on the improvements needed.
A similar approach was used for stroke services in about 1990, and, for many years, the Intercollegiate Stroke Working Party undertook one-off reviews when requested by a hospital.
In 2012, Professor Barry McCormick suggested peer review in a paper, Pathway Peer Review to Improve Quality. He particularly noted that “Self-regulation in the NHS does not currently take into account multi-specialty teams or the increasingly important patient pathways between both primary and secondary care and clinical specialties within hospitals. The use of pathway peer review could be highly beneficial when developing and assessing standards, protocols and guidelines for patients with complex care needs or long-term, chronic conditions.”
His paper discusses many aspects of peer review, including its emphasis on self-regulation, making experts in a service responsible for ensuring its clinical standard. Competing interests arise.
Husam Bader and colleagues provide a less enthusiastic perspective based on a peer review process mandated in the United States: a retrospective review of case notes. Being retrospective is an obvious disadvantage. Furthermore, they describe evident biases and competing interests associated with this approach.
In their literature review of peer review models in healthcare, Axel Kaehne and colleagues considered what peer review is, how it is undertaken, with what goals, by whom, and whether it is effective. They conclude “that the efficacy of peer review processes remain poorly evidenced, mainly due to their complex nature and a lack of clearly articulated logics of intervention.”
Peer review can potentially evaluate a service’s expertise and competence. However, it requires considerable resources, especially if quality is to be checked regularly. There is also a risk of bias and competing interests, especially in a small field such as rehabilitation.
Outcome assessment of service expertise.
Rehabilitation aims to facilitate a person’s adaptation to their disease and disability. One might argue that determining patient outcomes is the best way to assess a service’s effectiveness. While the principle is sound, it is impractical for several reasons.
The true outcome of a service is the difference between the patient’s outcome after receiving the service and their outcome had they not received it. Unfortunately, we can never estimate the likely outcome without the service. Although group studies demonstrate the effectiveness of rehabilitation, they cannot identify the effect on an individual.
First, as I have shown in a post on rehabilitation potential, we cannot select the patients who will benefit. Second, we cannot accurately predict a person’s outcome. Third, different people will experience benefits in different ways, and no single measure will capture the multiple benefits achieved. Fourth, many benefits are hard to measure and arise over several years. Last, research suggests that measuring outcomes is an inefficient way to evaluate service quality.
Assessing expertise in trainees.
So far, we have learned that the clinical quality of services is not routinely monitored, and peer review is too costly and subject to bias to be a way to evaluate many services routinely. Measuring patient outcomes is not a feasible option.
Professional training also has problems evaluating its outcomes. Examinations test knowledge and a few skills but do not assess crucial professional behaviours such as communication and managing uncertainty. The growing number of trainees and the increasing complexity of clinical activities have made measuring competence in those activities ever more challenging. Relying on unevaluated experience has failed under pressure.
The difficulties arose from the narrow focus of the assessment, which confirmed only that the trainee had practical competence in many small, defined tasks. A professional might need to master hundreds, probably thousands, of such tasks, but once in a job, they will use only a small proportion of them. More importantly, training did not evaluate vital skills, such as knowing when not to undertake an action or how to balance conflicting imperatives.
The solution is to judge a professional’s ability to perform complex, high-level activities, such as conducting outpatient clinics, planning a patient rehabilitation programme, or running a ward round. These tasks could not be completed successfully unless someone had the necessary competencies, so the test indirectly assesses competence while also assessing the person’s ability to use their knowledge and skills effectively.
These high-level training outcomes are typically referred to as Entrustable Professional Activities, but other terms are used, such as Capabilities in Practice in the medical specialities in the UK.
Assessing trainee expertise
The system has three crucial features and makes one vital assumption, which I will first illustrate for professional practice and then show, in an adapted version, for service performance.
A limited number of high-level outcomes.
One central goal is to control the assessment load and ensure it is focused on essential features. For example, the UK General Medical Council reviewed complaints against doctors and found that nine themes encompassed most professional failures. In response, they introduced six generic capabilities that, together, represent good professional practice. These capabilities can be modified slightly to apply to all healthcare professions.
At the same time, they asked every registered medical speciality to identify a limited number of high-level entrustable professional activities that would ensure a doctor’s safety and effectiveness as an independent practitioner in that speciality. Medical specialities, including Rehabilitation Medicine, were asked to specify six to eight Capabilities in Practice; these, too, can be modified slightly to apply to all rehabilitation professionals.
Assessments are formative and summative.
Formative assessments lead to advice and actions to improve a trainee’s knowledge, skills, and performance. Typically, they consider what went well, what could have been done better, what the trainee learned, and what further actions they will undertake. They are based on professional activities required by an independent practitioner, such as teaching and teamwork.
Summative assessment measures performance against an expected standard; formal examinations are an extreme example, where the outcome is binary. Summative assessments are intrinsically more stressful for both parties and inevitably subject to bias, such as the assessor wanting to please the trainee or to see them move on. Examinations are reasonably applicable to knowledge and possible for practical skills, but they are challenging for complex, high-level professional activities.
Therefore, the assessment process emphasises training and improving performance rather than grading the trainee’s performance against a standard.
The output is future-facing.
Competency records document that, at some specified time, the trainee performed an activity effectively and safely. Inevitably, in a four-year programme, many activities are learned but never used again during the programme. The person will retain much of the expertise but is unlikely to remain as competent in a challenging, complex activity; they will need a refresher.
In contrast, the new system records that the trainee can be trusted to undertake the higher-level activities required in future. This prediction is not indefinite.
Professional responsibility.
Measuring high-level outcomes does not encompass one further essential matter that helps ensure safe, effective clinical practice: a professional must always practise within their competence. Therefore, they must stay up to date in their areas of practice and train or retrain in any new areas of practice they enter.
This vital aspect of professional practice must be recognised because it requires the professional to ensure competence in the activities required in their job. The professional must consider their competence and act to maintain, improve, or gain competence as needed. Attention to this professional duty is built into an annual appraisal for doctors and many other professions.
Application to assessing service expertise
This approach to evaluating individual professionals can be translated into assessing services.
A limited number of high-level outcomes.
There are two aspects, just as there are for professionals. The first covers generic service characteristics. One might imagine the Care Quality Commission and CARF would cover this, but their focus is on management, not clinical matters. Thus, I have developed some generic service features shown in the figure below.
These attributes are derived from the adapted clinical capabilities required of professionals. They are especially relevant for rehabilitation services, though they should apply to all services.
The second set of attributes concerns the specialist rehabilitation expertise needed. They are also derived from professional capabilities and are shown below.
Assessments are formative and summative.
This principle is crucial. The UK Care Quality Commission (CQC) process is primarily inquisitorial and ends in a single published grading that most external observers focus on. The CQC does intend to improve standards and makes recommendations. Nevertheless, the service organisation is being assessed by an outside body; they are not two organisations collaborating to improve services.
No service, organisation, or system of examination is perfect. The CQC inquiry depends on interpretation and judgement, which are subject to uncertainty and inevitable bias. A recent independent report found significant weaknesses across many areas of work. A more collaborative approach might be better, especially as both organisations are within the Department of Health and Social Care.
The output is future-facing.
The third principle is that the process is intended to improve services. A commissioner may conclude that improvement is needed before agreeing to send patients. However, this should be a mutually agreed-upon decision, with a plan outlining the enhancement required to deliver a safe, effective service. The output should generally revolve around trusting the service provider to deliver.
Provider responsibility.
A fourth principle, implicit in professional practice, must be made explicit: the service must first describe its areas of clinical practice using an appropriate balance of patient and clinical service characteristics. For example, a service may primarily focus on an intervention, such as providing assistive technology, or a patient group, such as challenging behaviour.
Second, the service should identify four to six specific competencies needed to deliver a safe and effective service to its target population and provide evidence that it possesses them. These competencies are indicative, representing a small proportion of all those necessary, but they should cover the range of the service’s work.
Applying principles to assessing service expertise.
In training, the educational supervisor and trainee have contrasting perspectives and roles, but both have the goal of the trainee being a new independent professional. The service commissioner and service provider also have contrasting perspectives and roles; nevertheless, they should aim to achieve a trustworthy, safe, and effective service provider.
In the training context, the educational supervisor understands the product better than the trainee. In a commissioning context, each party has some power, but its source differs. The commissioner ultimately pays the provider, which gives them considerable influence. Sometimes, this power is limited by a lack of alternatives: if there are no other suitable providers, the commissioner can exert only limited influence. The provider will have a much greater understanding of the clinical aspects, allowing them to dictate what clinical care must be delivered.
The asymmetry of these two sources of power makes them hard to compare. It may lead to distrust and confrontation, or it may lead to greater understanding on both sides. The situation resembles many regulatory processes. Collaboration, with a shared vision and, as Onora O’Neill argued, trust, is likely to be more effective.
This approach is illustrated in the Nursing Home Guidelines, a published article, this site, and the MindMap figure below.
Conclusion
This page has developed the ideas we used in the guideline, suggesting a solution to an unresolved, and possibly unacknowledged, difficulty: ensuring that specialist healthcare services, paid for by people with much less specialist knowledge, provide clinically safe and effective care. It requires all parties to move from a model based on purchasers encouraging competition between externally regulated providers to one built around a shared goal of good healthcare services. The market model of healthcare is not working, and rehabilitation is one of many services that fare poorly in a market-based system. The proposed system depends on specialist services being open and honest about clinical matters so that commissioners trust them; conversely, commissioners must commit to supporting providers who spend time and other resources improving their service.