Tuesday, December 23, 2008

Practice-Based Evidence?

In today's New York Times, Benedict Carey looks at whether evidence-based practices improve patients' success rates in treatment. Programs like BCI are increasingly held accountable for demonstrating effectiveness, yet few have the data to do so, and there's no universal standard for success. Delaware is one of the states taking part in the Advancing Recovery project, in which we implement -- and track the results of -- techniques that science says are effective.

In 2001 the Delaware Division of Substance Abuse and Mental Health began giving treatment programs incentives, or bonuses, if they met certain benchmarks. The clinics could earn a bonus of up to 5 percent, for instance, if they kept a high percentage of addicts coming in at least weekly and ensured that those clients met their own goals, as measured both by clean urine tests and how well they functioned in everyday life, in school, at work, at home.

By 2006, the state’s rehabilitation programs were operating at 95 percent capacity, up from 50 percent in 2001; and 70 percent of patients were attending regular treatment sessions, up from 53 percent, according to an analysis of the policy published last summer in the journal Health Policy.

Carey suggests these Performance Based Contracts are an example of “‘Practice-Based Evidence,’ the results that programs and counselors themselves can document, based on their own work.” Why has this worked for Delaware? We focus on getting people in the door and keeping them here, because length of time in treatment is associated with successful outcomes. We’re rewarded financially when we do a good job at this, and penalized when we don’t.

But we also use many of the Evidence-Based Practices mentioned in the article, like motivational interviewing and cognitive behavioral therapy. Sometimes our results are great, and sometimes they’re not. You can read more about our work here.

This topic generates lots and lots of questions within the addictions field and the recovering community. Here are just a few:
  • What should be the definition of success in treatment?
  • How do we provide individualized treatment within a treatment curriculum?
  • What kind of evidence are we most interested in – evidence that comes from science, or from practice?
  • And how do we collect data to measure success in treatment without increasing costs?
