By David S. Prescott, LICSW
In past decades, programs and practitioners treating individuals who had abused (whether as adults or juveniles) often designed their work around the available research as best they could apply it with their specific clientele. In brief, programs were innovative, often because they had to be. This was especially true of inpatient programs. Even now, very little knowledge is readily available to those working with clients who have special needs. Virtually all programs operated within a context in which a governmental office authorized the program to practice; typically, these authorities kept their fingers on the pulse of these programs, intervening only when necessary.
It’s difficult to pinpoint how and when it happened, but programs in the US have increasingly opted to gain additional forms of certification through organizations such as JCAHO, CARF, and COA. And why not? Additional accreditation can only be good, right? Likewise, programs began to bring in empirically supported treatments (ESTs), a welcome addition in virtually every respect. In some regions, systems of treatment have involved a case-management agency making referrals to treatment programs (which it often comes to think of not as specialized entities with expertise, but as “vendors” for hire). While this structure is not inherently problematic, past disasters have led some case-management agencies to become authoritarian, leaving programs with few treatment decisions they can make independently. While teamwork and collaboration are always welcome, the committees that review these decisions and proposals may meet only occasionally.
As is so often the case, great ideas become problematic in stages that are only barely perceptible. The onboarding of accreditation and the wholesale adoption of ESTs have, in some cases, run into problems in their actual implementation. Often, implementation efforts are stymied by the structures in which they occur and the processes surrounding them.
As one case in point, the author has watched as clinical directors who once charted the therapeutic course of their programs had to spend increasing amounts of time ensuring quality in areas such as documentation for outside review, very often at the expense of time spent ensuring that the treatment actually delivered is of the highest quality. While there is no question about the importance of areas such as documentation, broader questions often go unconsidered, such as, “What is the best use of my time as a director to ensure the best outcomes?” For some, the race to make an agency look good has taken higher priority than making a program actually do good. Too often lost along the way is the therapeutic alliance, which itself has an overwhelming evidence base.
In some cases, however, much time has been lost in the process of appeasing outside entities. “David,” said the Clinical Director of one program, “We had this great evidence-based program for trauma-informed care. It took us years to get it this good. But the CEO wanted (XYZ) accreditation, and (XYZ) only recognizes EMDR. The problem is that most of our clients aren’t actually good candidates for EMDR. We stopped doing what was really working and started getting trained in EMDR. It’s great stuff, but it’s not really working in this environment. So now we’ve been spending all our time doing that. I tried talking with the CEO but he wouldn’t listen.” In the end, those who once steered the ship and produced innovations have increasingly ended up managing processes imposed from outside the program, whether they help the clients or not.
Along similar lines, many programs have understandably raced to use ESTs even when they are not the best match for clients. Consider, for example, the adolescent who, after his mother’s death and his father’s return to prison, burned down an outbuilding on his grandmother’s property. A psychological evaluation recommended that he receive grief counseling with family involvement. However, the referring agency (which had not done an assessment) said that because burning a building is an antisocial act, it would only fund Multi-Systemic Therapy (MST). On the surface, this could have worked. However, it didn’t, and as implemented it only made matters worse. The case-management agency then demanded that the young man go through MST a second time because, “It has the strongest evidence.” Again, the author has seen treatments such as EMDR and MST produce results akin to miracles. The issue isn’t the treatment; it’s the misapplication.
Where does all of this leave us? Since 2005, the American Psychological Association has defined evidence-based practice as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences.” Perhaps it’s time to revisit the “clinical expertise” part of this equation. Lost in much of the discussion about the importance of ESTs is the extensive body of research pointing to the importance of the practitioner delivering the treatment. It has recently seemed that treatment programs and their practitioners are viewed by outsiders as entities to be directed and managed, often by people who lack the relevant credentials and bear none of the liability. As the saying goes, the question to ask is not, “Does nothing work?” but instead, “Is nothing implemented?”
To that end, my vote is that we never forget the expertise that individual programs and professionals bring to our work.