Practice-based evidence can complement evidence-based practice very well
Last updated on 23rd April 2015
Yesterday I wrote a blog post "Routine Outcome Monitoring can really help therapists clarify where they need to try harder". Today's post extends this extremely important point. About twenty years ago Howard and colleagues (Howard, Moras, Brill, Martinovich, & Lutz, 1996) introduced a crucial new approach for improving our outcomes. They wrote "Treatment-focused research is concerned with the establishment of the comparative efficacy and effectiveness of clinical interventions, aggregated over groups of patients. The authors introduce and illustrate a new paradigm – patient-focused research – that is concerned with the monitoring of an individual's progress over the course of treatment and the feedback of this information to the practitioner, supervisor, or case manager." This practice-based evidence complements the effectiveness of evidence-based practice. As Boswell and colleagues, in their fine recent paper "Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions" (Boswell, Kraus, Miller, & Lambert, 2015), wrote about Howard et al.'s ground-breaking ideas: "Their approach differed from traditional efficacy and effectiveness research, which focuses on the average response of participants in either experimental or naturalistic settings. As a complement to traditional nomothetic approaches, these researchers proposed directing attention to a more idiographic approach, asking 'Is this treatment, however constructed, delivered by this particular provider, helpful to this client at this point in time?'"
(Note the ideas in this blog are explored in more detail in the chapter "Client feedback: an essential input to therapist reflection" in the forthcoming Haarhoff, B. and Thwaites, R. (2016) "Reflection in CBT: Increasing your effectiveness as a therapist, supervisor and trainer." London: SAGE Publications Ltd.)
A recent paper commissioned by the US Department of Health and Human Services – "Strategies for measuring the quality of psychotherapy: A white paper to inform measure development and implementation" (Brown, Hudson Scholle, & Azur, 2014) – makes it clear that routine outcome monitoring is at the heart of an evidence-based strategy for improving delivery of effective psychotherapy. They conclude "In this paper, we describe how structure, process, and outcome measures could be used to monitor and improve the delivery of psychotherapy ... We review the strengths and limitations of each type of measure and the data sources that could be used to support them. We focus on measures assessing the effectiveness and outcomes of care rather than other domains … ". The authors go on to underline that routine outcome monitoring " ... can serve at least two purposes: (1) to help track consumer progress and identify individuals who fail to respond to treatment; and (2) to encourage consumer engagement in treatment."
Boswell, Kraus, Miller and Lambert in their recent paper on implementing routine outcome monitoring (Boswell et al., 2015) describe the improved outcomes achievable with this approach: "The meta-analysis (Shimokawa, Lambert, & Smart, 2010) involved both intent-to-treat (ITT) and efficacy analyses on the effects of various feedback interventions in relation to TAU (treatment without feedback) on clients who were predicted to have a negative outcome. When the not-on-track feedback group was compared to the not-on-track TAU group, the effect size for post-treatment OQ score difference averaged a g=.53. These results suggest that the average at risk client whose therapist received feedback was better off than approximately 70% of at risk clients in the no feedback condition. In terms of the clinical significance at termination, 9% of those receiving feedback deteriorated while 38% achieved clinically significant improvement. In contrast, among at risk clients whose therapists did not receive feedback, 20% deteriorated while 22% clinically significantly improved. When the odds of deterioration and clinically significant improvement were compared, results indicated those in the feedback group had less than half the odds of experiencing deterioration while having approximately 2.6-times higher odds of experiencing reliable improvement.
"The OQ feedback system went beyond progress feedback by asking clients who were predicted to deteriorate to complete a 40-item measure of the therapeutic alliance, motivation, social supports, and recent life events. Therapists were provided with feedback on these domains, a problem-solving decision tree, and intervention suggestions to assist them in resolving issues that may be causing clients to have a negative treatment response. Together this intervention was referred to as a Clinical Support Tool. When the outcome of clients whose therapist received the Clinical Support Tool feedback were compared to the treatment-as-usual clients, the effect size for the difference in mean post-treatment OQ scores was g=0.70. These results indicate that the average clients in the Clinical Support Tool feedback group, who stay in treatment to experience the benefit of this intervention, are better off than 76% of clients in treatment-as-usual. The rates of deterioration and clinically significant improvement among those receiving Clinical Support Tools were 6% and 53%, respectively. The results suggest that clients whose therapists used Clinical Support Tools with off-track cases have less than a fourth the odds of deterioration, while having approximately 3.9-times higher odds of achieving clinically significant improvement."
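The "better off than approximately 70%" and "76%" figures quoted above are the standard common-language reading of a standardized mean difference: assuming normally distributed outcomes, the normal CDF of g gives the proportion of the comparison group that the average feedback client outperforms. A minimal sketch of that arithmetic (illustrative only, not the meta-analysis authors' code):

```python
from statistics import NormalDist

def percent_better_off(g: float) -> float:
    """Convert a standardized mean difference (Hedges' g) into the
    proportion of comparison-group clients the average treated client
    outperforms, assuming normally distributed outcome scores."""
    return NormalDist().cdf(g)

# Effect sizes reported in the Shimokawa, Lambert, & Smart (2010) meta-analysis:
print(round(percent_better_off(0.53) * 100))  # progress feedback vs TAU
print(round(percent_better_off(0.70) * 100))  # Clinical Support Tools vs TAU
```

Running this reproduces the quoted figures: g=0.53 corresponds to roughly 70%, and g=0.70 to roughly 76%.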
Much more research needs to be done – not least in assessing how well these very encouraging results hold up across more diverse populations (Davidson, Perry, & Bell, 2014). However, there are clearly exciting improvements achievable using these methods. There is a whole series of studies (Gilboa-Schechtman & Shahar, 2006; Gunlicks-Stoessel & Mufson, 2011; Lewis, Simons, & Kim, 2012; Van et al., 2008) looking at treatment response trajectories (particularly with depression treatments) and highlighting that, on average, encouraging outcomes in early psychotherapy sessions suggest that the client will do well over the full course of treatment. As Oscar Wilde observed, however, 'The truth is rarely pure and never simple'. There are question marks over how important fast (log-linear) initial client progress actually is for producing good post-treatment outcomes. Recent careful work with a fairly large sample (N=362) of depressed clients treated with CBT (Vittengl, Clark, Thase, & Jarrett, 2013) showed a variety of response trajectories (not just log-linear, but also linear and one-step) that produced similarly successful results. Other studies with depression (Percevic, Lambert, & Kordy, 2006) and anxiety (Chu, Skriner, & Zandberg, 2013) suggest that it is somewhat naïve to assume one requires rapid log-linear improvement to have any hope of satisfactory overall response. And this seems even more probable when assessing more prolonged treatments – for example, 50-session, 2-year treatment of personality disorders where the first 6 sessions may primarily involve assessment (Bamelis, Evers, Spinhoven, & Arntz, 2014). As Lambert has pointed out, 'Aspects of patient functioning show differential response to treatment, with more characterological (e.g. perfectionism) and interpersonal aspects of functioning responding more slowly than psychological symptoms' (Lambert, 2013).
He highlights that ‘Research suggests that a sizable portion of patients reliably improve after 7 sessions and that 75% of patients will meet more rigorous criteria for success after about 50 sessions of treatment. Limiting treatment sessions to less than 20 will mean that about 50% of patients will not achieve a substantial benefit from therapy (as measured by standard self-report scales).’ And again ‘Given the variability in rates of change, it appears that time limits for treatment uniform to all patients would not adequately serve patients’ needs. Standard, fixed, low doses of treatment are not justified for the majority of individuals who enter treatment and are akin to establishing a set minimal time to keep a broken leg in a cast, rather than removing the cast when sufficient healing has taken place. A major unanswered question is how long to continue treatment that the patient has not yet responded to.’
Despite these caveats, the current overall research picture suggests that there is real value in careful routine outcome monitoring, especially for therapists who are committed to this process (de Jong, van Sluis, Nugter, Heiser, & Spinhoven, 2012) and for clients who are not responding well. Remember that dropout rates are high – for example, a meta-analysis of 34 effectiveness studies of CBT for depression found a mean dropout rate of 25% (Hans & Hiller, 2013).
The young but developing literature on the characteristics of more effective therapists is throwing up some intriguing suggestions. One is that the difference between excellent therapists and their colleagues is particularly evident 'for more severe patients' (Saxon & Barkham, 2012); another is that 'a portion of the variance in outcome between therapists is due to their ability to handle interpersonally challenging encounters with clients' (Anderson et al., 2009). These are clients at risk of deteriorating or dropping out – better monitoring and management of these challenging cases is something virtually all therapists could benefit from. This is a strong argument for careful routine session-to-session outcome and alliance assessment, which seems especially beneficial for initially less effective therapists (Anker, Duncan, & Sparks, 2009).
A recent helpful Canadian publication, freely downloadable from the internet, lists and describes 10 routine outcome monitoring systems (Drapeau et al., 2012), and others continue to be added. The major UK Improving Access to Psychological Therapies (IAPT) initiative potentially provides a particularly useful new example for UK therapists (Brown et al., 2014), although I think this is currently more relevant for seeing how one's overall success rates compare with those of other therapists and other IAPT centers. I am so far unaware of widely available predicted session-to-session trajectories of improvement using the IAPT data. Hopefully these will emerge.
So to reiterate, we have a situation in psychotherapy where:
1. The field as a whole has been painfully slow at producing better client outcomes.
2. A lack of clear correlation between qualifications or experience and improved therapist effectiveness strongly suggests that our training methods need to be improved.
3. There are, however, major differences between the effectiveness of different therapists.
4. Identifying and studying more effective therapists looks like a useful research strategy that should be explored more fully.
5. In parallel with this we can try harder to identify when we ourselves, as therapists, are being more or less effective – and then self-correct when necessary.
6. Therapists seem to be bad at identifying when their clients are doing poorly – using routine outcome monitoring strategies to alert us to lack of client progress is very likely to outperform our surprisingly poor clinical judgement.
So where from here? It looks very likely that reflection on regular sessional outcome and alliance feedback has the potential to make significant and much-needed improvements to our helpfulness as therapists. The CORE, OQ and PCOMS systems are all examples of approaches that can be used right away (Drapeau et al., 2012) and, in the UK, we have the rapidly growing IAPT data to compare to and learn from. One might think that a relatively simple addition to our current psychotherapy practice promising so many potential gains would be an easy innovation to sell. This is probably not true (Boswell et al., 2015). There are many personal and organizational factors that resist this kind of change, but, as has been said, 'They laughed at Beethoven!' Tracking progress with patients also has the potential to engage them more actively with the therapy.
The greatest gains, though, seem likely to accrue from identifying those who are not progressing at expected rates after roughly the first 3 to 6 sessions of therapy. As I have discussed, slower-than-predicted improvement rates are not the death knell of a particular client's ability to benefit to a worthwhile extent from therapy. A focus on slow improvers, however, has been shown repeatedly to be an excellent use of reflection time. It is worth remembering the general finding that 'Research suggests that a sizable portion of patients reliably improve after 7 sessions ... Limiting treatment sessions to less than 20 will mean that about 50% of patients will not achieve a substantial benefit from therapy (as measured by standard self-report scales)' (Lambert, 2013). So we're not expecting a high percentage of our patients to make sizable improvements after 3 to 6 sessions, but it's likely to be very worthwhile to reflect on any who do not show appreciable change – ideally highlighted by some pre-agreed criterion, such as falling below the 25th percentile of the predicted trajectory of improvement on a well-validated assessment measure. A good initial step here is to discuss the issue with the patient (preferably recording the discussion for potential later review and possible use in supervision). The outcome scores have not been progressing well – what has happened to the therapeutic alliance measures? Is there agreement on goals and therapeutic methods, and does the client feel adequately understood and cared for? Have there been external life events that have interfered with progress? What about the client's expectancies and motivation? Are there other ways of augmenting the therapy – for example, by bringing in a partner, friend or other family member, by introducing a group intervention or training, by upping the frequency of sessions, or by adding or altering medication or other biological interventions?
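A pre-agreed "below the 25th percentile of the predicted trajectory" criterion of the kind mentioned above can be made mechanical. The sketch below is purely illustrative: the predicted score, spread, and session values are hypothetical, it assumes a symptom measure where lower scores mean improvement (as with the OQ-45), and real systems such as OQ, CORE and PCOMS use their own empirically derived algorithms rather than this simple normal-distribution rule.

```python
from statistics import NormalDist

def is_off_track(observed: float, predicted: float, sd: float,
                 percentile: float = 0.25) -> bool:
    """Flag a client whose session score is worse than the chosen
    percentile of the predicted improvement trajectory, assuming
    scores spread normally around the predicted value and that
    LOWER scores mean improvement (OQ-45-style)."""
    # Because higher scores are worse here, the 25th percentile of
    # progress sits at the (1 - percentile) point of the score
    # distribution (z is about 0.674 for the 25th percentile).
    z = NormalDist().inv_cdf(1 - percentile)
    cutoff = predicted + z * sd
    return observed > cutoff

# Hypothetical session-4 check: trajectory predicts a score of 55,
# the client scores 65, and the between-client SD is 12.
print(is_off_track(65, 55, 12))  # cutoff is about 63.1, so: True
```

The point of such a rule is only to trigger the reflective conversation described above at an agreed threshold, rather than leaving "not doing well" to unaided clinical judgement.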
It makes sense to use or develop a checklist to work through with clients who are not on track. Remember the OQ feedback system achieved better outcomes through supplementing progress feedback by asking clients who were predicted to deteriorate to complete a 40-item measure of the therapeutic alliance, motivation, social supports, and recent life events. Therapists were provided with feedback on these domains, a problem-solving decision tree, and intervention suggestions to assist them in resolving issues that may be causing clients to have a negative treatment response. Together this intervention was referred to as a Clinical Support Tool.