Friday, January 19, 2024

Extending a Previous Blog Post: Ethical Considerations on the Costs of Resources

By David S. Prescott, LICSW, ATSA-F

In our December 13, 2023 blog post, Dr. Sophie King-Hill asks:

“In many harmful sexual behaviour (HSB) services for children and young people (CYP) how resources are funded, developed, and delivered is coming under increasing scrutiny as frontline and third sector organisations are having budgets cut and services reduced. Given this context, is it ever ethical to charge for these resources? . . . At face value the ethical principles of HSB work may appear clear-cut (i.e., work in a trauma informed way, do no harm, protect the patient/service user) . . . However, after scrutiny, the lines seem blurred. . . Whilst a multi-agency approach is clearly needed for HSB, a by-product of this way of working is that no steadfast and explicit ethical principles exist due to the range of specialisms involved. This lack of a sense of measure, accountability and consistent public pledge has perhaps created an environment where profitable endeavours have gained traction and power without the rigour of adequate ethical questioning.”

This last sentence regarding “profitable endeavors” is particularly intriguing and leads to questions about how we prioritize and think about resource allocation. In her discussion, she also notes how staff are trained and how services are delivered. She raises the age-old question of how best to combine implementation and training efforts in situations where staff turnover is a reality (this blog post from 2015 explores this question further).

Here in the US, I’ve long wondered about how we prioritize not just our resources, but the way we think about them. I’ve never forgotten an experience many years ago in which I was on a grant to implement an empirically supported trauma treatment package. The content of this treatment was clinically sound and under most circumstances easy to implement. However, it had been developed for use with adult women, while our agencies were tasked with implementing it with adolescent males and females. The positive findings in studies had occurred in outpatient treatment settings. We were tasked with implementing it in home-based services. In some cases, the clients were very clearly not ready to advance at the pace of the curriculum, while for others the curriculum itself was getting in the way of more substantive conversations that the clients were desperate to have. The curriculum had not been written specifically with the caregivers of these young clients in mind.

The clinicians in this project found themselves in a dilemma: meeting each client’s needs to ensure treatment engagement meant slight changes in adherence to the manual. On the other hand, even the slightest changes were considered a problem for treatment fidelity and needed to be approved by the outside consultant. Further, every session was video recorded for quality assurance purposes, making clinicians more likely to make momentary clinical decisions that prioritized the video review over the needs of the client. All of this took place in a context where those licensed professionals charged with administration of the curriculum had to take their orders from an unlicensed and sometimes irritable consultant.

There were many ways that these dilemmas could have been resolved, and doubtless many who are reading this post could have helped move the process forward. Unfortunately, the constellation of players was, as a group, ill-suited to get this implementation right. It can serve as a lesson for all of us. These were high-stakes circumstances: some of the clients felt retraumatized after participating in this treatment. The problem was not the content, which was indeed evidence-based, but the implementation processes, which were not.

Virtually everyone wants to engage in evidence-based practice (EBP). Yet so many of us remain unaware that there is more to EBP than the research studies telling us that a treatment method has been found to be effective. For example, the above efforts would have benefited from a solid foundation in implementation science, which examines the application of research. Dean Fixsen and his colleagues, for instance, outlined numerous conditions under which implementations of EBPs will be more or less effective. As encouraging as some studies can be, others have found that it can take a considerable amount of time to demonstrate significant improvements in wellbeing at the individual-client level.

Likewise, there is very little accumulated knowledge on adapting EBPs to meet local conditions. In the example above, applying a treatment developed in one context to another created problems and arguably caused harm to some clients. On one hand, there are the understandable concerns that changes to an empirically supported protocol reduce fidelity to the model, which in turn can potentially reduce its effectiveness. On the other hand, the APA definition of EBP emphasizes a tripartite model: the integration of the best available research with clinical expertise, in the context of client characteristics, culture, and preferences. This discrepancy leads to questions about how those with genuine clinical expertise can effectively use protocols that may not be the best fit for clients.

Dr. King-Hill’s original questions lead to others. We might well ask about the ethics not only of training costs and access to treatments, but of emphasizing implementation of a particular treatment approach without considering the evidence regarding successful implementation, or local conditions involving clinical expertise or client characteristics and culture.
