In October 2013,
Bill Miller (the primary developer of motivational interviewing [MI]) addressed
a group of MI trainers in Krakow, Poland. He noted that despite its nearly mythical status, randomized clinical trials of MI have shown treatment effects only 58% of the time; the remaining 42% of studies have found little or no effect. Make no
mistake about it: MI has produced significant effects across diverse areas of
psychotherapy, including within prison-based
treatment settings. Just the same, the wisdom and courage of Miller's statement stand in contrast to the understated tone in which he delivered it. As our field
patiently awaits the results of gold-standard studies proving that what we do
works, some researchers, like Bill Miller, have gone beyond the
has-it-been-effective-in-a-randomized-clinical-trial question and are taking
note of an emerging but often unrecognized trend: treatments competently
implemented in many areas are not necessarily effective in all of them.
A few years ago, this was the case with an implementation of multi-systemic therapy (MST) in Ontario. More recently, another
examination of MST in Canada appears to have produced beneficial preliminary
effects, but it is not without acknowledged methodological problems, such as a small sample size and process issues (e.g., only 65% of participants who provided scores on the Therapist Adherence Measure-Revised rated their therapists as sufficiently consistent with MST principles, below the recommended target of 80%). Taken together, the experiences of multi-systemic therapy and motivational interviewing remind professionals to keep the bigger picture of their efforts in view: in program implementation (as in life), we don't always get what we want.
In 2012, a review of studies examining a parenting-skills program appeared but did not receive the attention it deserved. Philip
Wilson and his colleagues conducted a systematic review and meta-analysis
of 33 studies of the Triple P parenting program. Although this may seem
unrelated to the treatment of people who have sexually abused, their findings
are valuable to all policymakers. At first glance, the Triple P parenting program boasts numerous successful randomized clinical trials and meta-analyses, and many jurisdictions have promulgated and paid for its implementation. While these accomplishments are praiseworthy, Wilson and his colleagues found numerous problems with the underlying research and questioned the wisdom of basing public policy on flawed evidence. Among the authors' conclusions:
In volunteer populations over the short term, mothers generally
report that Triple P group interventions are better than no intervention, but
there is concern about these results given the high risk of bias, poor
reporting and potential conflicts of interest. We found no convincing evidence
that Triple P interventions work across the whole population or that any
benefits are long-term. Given the substantial cost implications, commissioners
should apply to parenting programs the standards used in assessing
pharmaceutical interventions (p. 1).
The authors examined bias across studies as well as within individual studies, blinding of assessors, the percentage of clients who dropped out, and other methodological features. In one instance, they noted that:
Although it claimed to have achieved a reduction in the incidence of
episodes of child maltreatment [5], it actually demonstrated an unexplained
rise in reports in control areas rather than a drop in Triple P intervention
sites. The description of the random allocation was poor, and the analysis was
simplistic, being a two-sample t-test of county-wide measures. In particular,
although some form of stratification or matching was used (it was not clear
exactly how this had been done), there was no evidence that this had been
accounted for in the analysis. For example, if counties were randomized within
pairs, then the within-pair differences in the changes from baseline would have
been of interest, but these were not reported. Therefore, although there are
positive conclusions from this study, some doubt remains as to their validity
(p. 8).
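For the methodologically inclined, the point about paired randomization can be made concrete with a brief simulation. The Python sketch below is a hypothetical illustration only (it uses made-up numbers, not the Triple P data): when counties are randomized within matched pairs, analyzing within-pair differences removes the shared county-to-county variation that a naive two-sample t-test must treat as noise.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs = 18                               # hypothetical number of matched county pairs
pair_level = rng.normal(0, 5.0, n_pairs)   # large shared county-to-county variation
true_effect = 1.0                          # modest assumed program effect

treated = pair_level + true_effect + rng.normal(0, 1.0, n_pairs)
control = pair_level + rng.normal(0, 1.0, n_pairs)

# Naive analysis: two-sample t-test that ignores the pairing.
t_unpaired, p_unpaired = stats.ttest_ind(treated, control)

# Paired analysis: t-test on the within-pair differences.
t_paired, p_paired = stats.ttest_rel(treated, control)

print(f"unpaired: t = {t_unpaired:.2f}, p = {p_unpaired:.3f}")
print(f"paired:   t = {t_paired:.2f}, p = {p_paired:.3f}")

Run with parameters like these, the paired test will typically yield a far smaller p-value than the unpaired one. Reporting only the simplistic comparison, as Wilson et al. describe, can leave readers unable to judge what the data actually show.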
In this author’s
estimation, Triple P appears to have produced very good results and has
doubtless improved many lives. Just the same, Wilson et al.’s points are well
taken: where large-scale public policy is concerned, we should be careful about placing stock in single studies or even groups of studies, and should ask more questions than simply “does it work?” Likewise, there is a body of research
finding that bona fide treatments often produce equivalent results (Wampold,
2001), returning us to the question “what works with what client under what
circumstances.” Ultimately, professionals and policymakers should be data-driven.
Wampold, B. E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Lawrence Erlbaum Associates.