By David S. Prescott, LICSW, & Kieran McCartan, Ph.D.
At first, it all seems so easy. The large institution or agency decides it is going to get serious about professional development and quality improvement. “I’ve done some research on evidence-based practices and have concluded that we need to implement the Forensic Version of the XYZ-PSB model. It has all the qualities we’re looking for, combining elements of all the popular models available, and it even has some mindfulness. The fact that there are deep breathing exercises at the start of some sessions qualifies it as a biopsychosocial approach.”
We’re kidding, of course. It often seems to us that the latest/greatest models make the largest promises right up until the implementation effort begins. The history of psychotherapy is certainly replete with examples of fad treatments, each one appearing to be bigger, better, faster, or just plain more. Many a well-intentioned agency and director (including the first author, David) has sought training in a particular method because it had worked in some other setting or been proven in a study or two, only to find that the old adage is true: all too often, what is new in a treatment approach is not what makes it effective, and what makes it effective is not new.
The above example of the fictitious XYZ-PSB: FV is ironic because there is a chance it will work if implemented with diligence, confidence, and a shared belief between therapists and clients that it will work (Wampold & Imel, 2015). In other words, the belief that something will work very often contributes to its success. This is one reason we have science: to understand not only what works, but how and why it works.
The rest of the picture may not be so pleasant, however. The unfortunate reality at the front lines, often unreported in research, is that there are any number of ways that good treatment can go bad under the wrong conditions. Let’s take the above director’s plan for implementing XYZ-PSB: FV. Even before implementation, what kind of exploration of the agency’s needs and staff attitudes takes place? Are the staff excited about the opportunity, or beleaguered at having to learn yet another approach at high risk of passing into history like the others?
Other questions follow. Will the director participate in the training? The absence of key decision-makers from the process itself can have a significant effect on staff, even though it is not mentioned in any manual. Likewise, does the agency or institution bring in an outside trainer who trains, perhaps does a few consultation calls, and then leaves without a succession plan, without some way to keep the spirit and practice of the treatment alive? And during the initial phases of implementation, what other barriers arise, such as the director taking a new job or another influential figure going out on medical leave?
Of course, the picture can become even more pernicious. Are there other challenges competing with the meaningful implementation of a high-quality approach? For example, many agencies experience severe pressure to ensure complete adherence to complicated licensing or accreditation requirements. At what point is the search for excellence, that burning desire to become more effective, compromised by the need to ensure timely documentation? Does adherence to regulations end up compromising adherence to a new model? Do we expend so much effort pursuing fidelity to the model that we forget to maintain fidelity to the actual client and his or her individual characteristics?
These are questions too often omitted from any manual or introductory training, but they threaten treatment integrity nonetheless. This is why collaboration among researchers, trainers, and professionals is so important in the creation of evidence-based practice that is fit for purpose in the real world (see another blog by Kieran on the importance of co-creation). One of the sadder outcomes of implementation efforts, in our view, is when professionals work treatment jargon into case notes as a signal to auditors and licensors that they were using a model when in fact they weren’t.
We (David and Kieran, along with our collaborator Danielle Harris) have argued in our training, and in a recent paper, that we can learn a lot about improving services by listening to the voices of service users. Yet most treatment providers work in environments where those same service users have little or no voice in their own treatment planning.
Our hope is that by raising these questions we may inspire dialogue among professionals, researchers, and trainers as to how we might better anchor our practice in the evidence. All too often, the enemy of successful implementation is ourselves.