
Why Outcomes Failed & Does Feedback Informed Treatment Work?


WHY OUTCOMES FAILED

Despite the expenditure of tens of millions of dollars on scores of outcomes initiatives over the last 20 years, there are few viable outcomes programs in operation.

Most have failed. Some examples from the private insurance market:

–Aetna/HAI's ambitious nationwide outcomes initiative, implemented in the 1990s
–VRI's multistate initiative, also implemented in the 1990s
–Pacificare Behavioral Health's nationwide program, which was terminated around 2004
–Humana's internet-based behavioral health outcomes and disease management project, which ended after a year in 2001
–Massachusetts BCBS's statewide outcomes program, terminated around 2007

A number of states, such as Washington and Oregon, also initiated ambitious outcomes projects, only to jettison them after a few years of operation.

JCAHO made outcomes mandatory nationwide in 2000 with its ORYX project. It was ended barely two years after it began.

Companies set up to develop and implement outcomes have also collapsed. Compass, Inc. had millions in investment funding in the mid-1990s and developed an outcomes system based on the well-regarded work of Ken Howard. The company failed after a few years of operation. A number of other outcomes companies collapsed as well, and the few that remain have been drastically downsized. The business model of these companies was akin to that of medical laboratories: to sell a "lab test" of mental health functioning to health plans that would enable the plans to determine the necessity and effectiveness of treatment. They reckoned that health plans would buy these services rather than build them themselves.

The outcomes projects in operation now are implemented by true believers and are mainly financed by providers. For instance, Miller and Duncan developed a feedback informed treatment system based on the Outcome Rating Scale (ORS); it is software-based and used by clinicians mainly in the US and Northern Europe (see http://www.centerforclinicalexcellence.com/ICCE). Jeb Brown, an early champion of outcomes who spearheaded the Aetna/HAI and Pacificare projects (which were insourced), has a site, http://www.clinical-informatics.com, on which he provides outcomes tools for clinicians; he has a number of pilot projects underway. But outside of the true believers, outcomes have not gained traction in the mental health community. This is despite the fact that 1) outcomes measurement is regarded by the American Psychological Association as an evidence-based approach, and 2) controlled and naturalistic studies show conclusively that outcomes with feedback to clinicians improve the effectiveness of treatment.

So the question is: why have outcomes failed?

I think there are a number of reasons:

1) Most outcomes systems were developed by psychologists who eschew the medical model. Their instruments (the OQ-45, the ORS) measure general distress. The dominant health care culture is simply not interested in general distress; it is interested in diseases such as major depression, bipolar disorder, schizophrenia, and the like. Measures that are disease-specific are of interest to the medical community, and that interest turns into funding. For example, CMS will now pay PCPs to administer disease-specific outcomes measures such as the PHQ-9 for depression (see the scoring sketch after this list). Behavior follows funding. When mental health clinicians do the extra work involved in collecting outcomes data, they, unlike PCPs, receive no payment for that extra work. This makes sustaining outcomes difficult; only the true believers stay with it.

2) Clinician resistance has been a big factor in torpedoing outcomes initiatives, for several reasons. First, clinicians resent the paternalism of managed care companies that have the arrogance to attempt to micromanage their clinical practices; no physician would stand for it. (Incidentally, all or almost all of the outcomes projects that have been implemented excluded psychiatrists from participating. Why? They would not comply.) Second, clinicians are rightly suspicious of managed care companies. As one senior executive of a managed care company that touts its commitment to outcomes put it, "we really don't care about outcomes." Third, many outcomes instruments contain questions that constitute a HIPAA violation. Take this question from perhaps the most widely used outcomes instrument in the world, the OQ-45: "I have an unfulfilling sex life." Aetna, Pacificare, Value Options, and other companies routinely collected this information for years as part of their outcomes initiatives. Finally, outcomes could be used by a health care company to impair the clinician's ability to make a living: poor outcomes could lead to loss of referrals, and outcomes decision support data could result in treatment being curtailed, further eroding clinicians' income. Willed ignorance about outcomes on the part of clinicians is then fully justified. As Upton Sinclair remarked: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

3) Feasibility. Until very recently, outcomes projects were expensive and cumbersome to implement: clinicians had to make an extra effort to ensure the client filled out the instrument, which then had to be faxed to the managed care organization, and so on. The lack of any tangible benefit, such as real-time feedback, contributed to clinician demoralization; "empty compliance" has often been the norm for clinicians involved in outcomes initiatives. Another factor is that clinicians involved in these projects deal with many payers. A procedure that one payer requires for, say, only a handful of the patients a clinician sees in a week is rightly viewed as an unfair imposition.
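To make the point about disease-specific instruments concrete, here is a minimal scoring sketch for a PHQ-9-style measure: nine items, each scored 0-3, with the total mapped onto the standard depression severity bands. The function name and the demo scores are illustrative only; this is not any vendor's implementation.

```python
# Sketch: scoring a disease-specific measure (PHQ-9 style).
# Nine items scored 0-3; the total maps to standard severity bands.

def score_phq9(item_scores: list[int]) -> tuple[int, str]:
    """Sum the nine 0-3 item scores and return (total, severity band)."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine items, each scored 0-3")
    total = sum(item_scores)
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity

print(score_phq9([1, 2, 1, 2, 1, 1, 2, 1, 0]))  # -> (11, 'moderate')
```

A single number with a named severity band is exactly the kind of actionable, disease-specific output the dominant health care culture is willing to pay for.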

Outcomes–Quo Vadis?

1. Outcomes with feedback to clinicians improve behavioral health outcomes, but they are unlikely to be adopted if the measure is one of general distress. Psychologists who develop these measures need to get out of their silo and develop measures that the health care community is interested in. That means disease-specific instruments.

2. Outcomes need to be a standard of care. PCPs do a number of routine procedures, e.g. blood pressure monitoring. They don’t do blood pressure monitoring for one health care company but not another. It is unfair to ask behavioral clinicians to use different procedures for different companies.

3. The technology still has a way to go. Outcomes collection ought not to burden the clinician and should be fully automated: alerts to the patient for follow-up assessments should be delivered by email or text message, clients should be able to complete outcomes measures on the Internet or a smartphone, and reports should include decision support and be provided to the clinician instantly (a sketch of such an automated workflow follows this list). Outcomes must also be integrated into electronic health records; separate outcomes systems that provide, say, monthly reports are expensive and inefficient. Naturalistic research via Practice Research Networks would be dramatically enhanced if outcomes data and a robust set of clinical data resided in the same database.

4. While outcomes data should be available to health care companies to ensure that care is medically necessary, that data should not violate HIPAA, nor should it be used punitively against the clinician.

5. Clinicians need to be rewarded for providing outcomes informed care; reimbursement rates need to go up to defray the cost of implementing these systems.

6. Effective therapists should be rewarded by higher rates of reimbursement. (h/t Ed Wise, Ph.D.)

7. Below-average therapists should be offered state-of-the-art, evidence-based treatment workshops. (h/t Ed Wise, Ph.D.)
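As a rough illustration of point 3 above, here is a minimal sketch of a fully automated follow-up workflow: a schedule record per patient, a daily job that finds assessments that have come due, and a reminder sent with a link to complete the measure online. All names here (OutcomeSchedule, send_reminder, the portal URL) are hypothetical stand-ins, not features of any existing product.

```python
# Sketch of an automated outcomes follow-up workflow (hypothetical names).
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class OutcomeSchedule:
    patient_id: str
    measure: str         # e.g. "PHQ-9"
    interval_days: int   # how often to re-administer the measure
    last_completed: date

    def next_due(self) -> date:
        return self.last_completed + timedelta(days=self.interval_days)

    def is_due(self, today: date) -> bool:
        return today >= self.next_due()


def send_reminder(patient_id: str, measure: str, link: str) -> None:
    # Stand-in for an email/SMS gateway call.
    print(f"Reminder to {patient_id}: please complete the {measure} at {link}")


def run_daily_job(schedules: list[OutcomeSchedule], today: date) -> None:
    """Send a reminder for every assessment that has come due."""
    for s in schedules:
        if s.is_due(today):
            send_reminder(s.patient_id, s.measure,
                          f"https://portal.example.com/assess/{s.patient_id}")


if __name__ == "__main__":
    demo = [OutcomeSchedule("pt-001", "PHQ-9", interval_days=14,
                            last_completed=date(2014, 1, 1))]
    run_daily_job(demo, today=date(2014, 1, 20))
```

The point is that none of this requires clinician effort: once the schedule exists in the electronic record, the reminders, data capture, and reporting can run on their own.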

PS: New developments question whether feedback to clinicians improves outcomes

"Feedback informed treatment," "outcomes informed care," and "client directed outcomes informed care" all refer to the practice of providing psychotherapy that is informed by real-time, patient-reported treatment outcomes. The method uses algorithms derived from actuarial data to compare actual treatment response with expected treatment response and provide feedback (a signal) to the clinician about the adequacy of the response to treatment. Lambert and others (including the writer) developed a system that provides the following alerts to inform clinicians about treatment progress: recovered ("white"), on track ("green"), no change ("yellow"), and inadequate ("red"). Lambert based his system on a reliable and valid instrument, the OQ-45. Another system, PCOMS (also known as the ORS/SRS), functions in the same way, except that it uses a visual analogue scale instead.

The underlying theory of these systems is that decision support, in the form of a feedback signal about response to treatment, will enhance clinical effectiveness by improving the clinician's treatment decision-making. For instance, a red alert tells the clinician that the client is doing poorly and is at risk of dropping out of treatment, and recommends a change of course; a green alert indicates that progress is adequate and no change of course is needed; and so on.
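To make the signal mechanism concrete, here is a toy sketch (not Lambert's actual actuarial algorithm, and not PCOMS): it compares an observed score against a crude expected-improvement trajectory on a distress scale where lower is better, and returns the white/green/yellow/red alert described above. The linear trajectory, cutoff, and tolerance values are illustrative placeholders only.

```python
# Toy illustration of a feedback signal: compare observed vs. expected
# response and classify it. Thresholds and the linear "expected trajectory"
# are placeholders, not the published OQ-45 or ORS algorithms.

def expected_score(intake_score: float, session: int, slope: float = 2.0) -> float:
    """Illustrative expected trajectory: steady improvement each session
    (on a distress scale where lower scores mean less distress)."""
    return max(intake_score - slope * session, 0.0)


def feedback_signal(observed: float, intake_score: float, session: int,
                    recovery_cutoff: float = 63.0, tolerance: float = 10.0) -> str:
    """Return a color-coded alert comparing observed vs. expected response."""
    expected = expected_score(intake_score, session)
    if observed < recovery_cutoff and observed <= expected:
        return "white"   # recovered
    if observed <= expected + tolerance / 2:
        return "green"   # on track
    if observed <= expected + tolerance:
        return "yellow"  # progress stalling / no change
    return "red"         # inadequate response; consider a change of course


# Example: a client who started at 85 and still scores 84 at session 6.
print(feedback_signal(observed=84, intake_score=85, session=6))  # -> "red"
```

In a production system the expected trajectory would come from actuarial norms for the instrument and the client's intake severity, but the decision-support idea is the same: flag the cases that are off track while there is still time to change course.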

What is perhaps most important about the outcomes informed care approach is that it is not tied to a particular theoretical model. Most systems of therapy that have sought to distinguish themselves as superior to others are based on a specific therapeutic model, for example, cognitive behavioral therapy. Feedback informed treatment eschews theoretical allegiances and uses actual response to treatment (outcomes) as its method.

Over the last decade, a considerable body of research has seemed to show that outcomes informed care actually does lead to greater treatment effectiveness. As early as 2003, Lambert wrote that "integrating client-based assessment into everyday practice has doubled the effectiveness of counselors in some settings." The developers of PCOMS, Miller and Duncan, have made the case for outcomes informed care most persuasively. Here is Scott Miller's summary of the findings:

Currently, 13 RCTs involving 12,374 clinically, culturally, and economically diverse consumers:
•Routine outcome monitoring and feedback as much as doubles the "effect size" (reliable and clinically significant change);
•Decreases drop-out rates by as much as half;
•Decreases deterioration by 33%;
•Reduces hospitalizations and shortened length of stay by 66%;
•Significantly reduced cost of care (non-feedback groups increased …

Miller recently gave a seminar entitled “How to Improve Your Practice by 65% Without Trying.” He describes the seminar this way: “Discover how to increase your clinical power and dramatically improve treatment outcomes by practicing simple techniques for gathering and using ongoing client feedback.”

Barry Duncan is equally enthusiastic, writing that "When you consider that outcome informed practice improves outcomes more than anything in our field since its inception (sounds like hyperbole but it isn't), it is really a wonder that everyone isn't doing it." And, "I think it is only a matter of time until it is considered standard practice."

Well, that was then and this is now. A recent randomized controlled study by Murphy et al. concluded that "Contrary to previous studies, the feedback on the client's progression provided to the therapist had only a small effect on improving therapy outcome." Last week, on his blog, Scott Miller wrote a recantation of sorts:
"In fact, the latest feedback research using the ORS and SRS found small, largely insignificant effects! … Such findings can be disturbing to those who have heard others claim that 'feedback is the most effective method ever invented in the history of the field!'" And, "Consider, for example, the following findings: (1) therapists do not learn from the feedback provided by measures of the alliance and outcome; (2) therapists do not become more effective over time as a result of being exposed to feedback. Such research indicates that focus on the measures and outcome may be misguided–or at least a 'dead end.' Better research designs and control for allegiance effects (which Luborsky estimates as being responsible for 69% of the variance in outcomes) will likely confirm these findings."

What can we conclude from this latest bubble of therapeutic enthusiasm? First, that the dodo bird verdict is alive and well and has been confirmed once again. Second, that searching for a silver bullet therapy is probably a dead end. And third, that T.S. Eliot had it right when he wrote, "humility is endless."


