item 38637727

qumpis | 2 years ago

So, according to the paper, therapy is not effective. This seems like an astounding conclusion. Has anyone read the paper in detail, and does anyone have a deeper opinion?

derbOac | 2 years ago

I'm pretty familiar with that literature for professional reasons and the linked paper provides a pretty fair assessment, or at least it's consistent with my impressions overall.

One thing worth noting that's been skimmed over, maybe because of the target article, is that second-order meta-analyses and reviews conclude that the efficacy of both psychotherapy and pharmacotherapy has been overestimated or overstated, and that the combination of the two is more effective on average than either alone. This is also consistent with my impressions.

There's a lot that could be said about it.

First, as in a lot of academic research, there's a lot of inflated hype and publication bias. This applies to behavioral and mental health intervention research as much as anything else, so you end up with overstated intervention effects.

Second, both psychiatry and clinical psychology suffer from a certain amount of insecurity about being seen as "real sciences." As a result, I think (this is just my personal opinion) there's a tendency to apply poorly-fitting models from other fields to treatment research, somewhat blindly, which results in poor investigation of underlying mechanisms. It might not be obvious to someone outside the field what I mean by this, but one way of explaining it is that intervention research in psychopathology (pharmacological or psychotherapeutic) has historically been distracted by concerns about emulating "real scientific disciplines." As a result, a certain amount of self-criticism that might have led to faster improvements (by shedding actual dead ends) was not pursued, and research into methods and approaches more uniquely suited to behavioral and psychological phenomena was kind of neglected. I think this trend continues.

Basically, people at a lot of very high-profile institutions are afraid to say their grade A "empirically supported" interventions aren't so great, or aren't actually better than other, supposedly "grade B" interventions, because they're afraid it will be jumped on as a sign of weakness of the discipline rather than as a sign of rigorous attempts at improvement. To be fair to these people, a lot of the time that is what happens to these fields: instead of saying "well, that was a good idea that didn't work out, and good people in the field are looking at this closely," some critics will target the entire field as incompetent. This is counterproductive, really, and it sometimes looks like a lose-lose position for those in the field.

Third, there is fairly strong evidence that different treatments work well for some people but not others, and that we don't really have a good way to predict what will work for whom. So, for example, the efficacy of drug A might be small to modest overall, but high for one subset of people and low for another; the efficacy of psychotherapy B follows similar patterns. This is one reason why combination treatments work: you're applying a shotgun approach with the idea that something will stick. People have tried to predict what works for whom, but it hasn't worked out well in a replicable sense. As someone else noted, a more accurate way of putting it is that there's little evidence for the general differential efficacy of one intervention over another in most cases: most things tend to have about the same probability of "working" on their own, and that probability is lower than you might think from the way they're sometimes discussed in the literature.
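To make the "shotgun" logic concrete, here's a back-of-the-envelope sketch. All the numbers (the 50% latent-subgroup split, the 60% response rate among responders, and the independence assumption) are invented purely for illustration; they are not from the paper:

```python
# Toy numbers, purely illustrative: suppose half of all patients are latent
# "responders" to drug A, and responders improve 60% of the time; psychotherapy
# B works the same way for an independent latent subgroup.
p_single = 0.5 * 0.6                  # overall response rate of either treatment alone
p_combo = 1 - (1 - p_single) ** 2     # "shotgun" combination: respond to A or B

print(f"single treatment: {p_single:.2f}")   # 0.30
print(f"combination:      {p_combo:.2f}")    # 0.51
```

Under these made-up assumptions, each treatment looks "small to modest" on average (0.30), yet the combination substantially beats either alone, without anyone being able to say in advance which patient needed which treatment.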

ajb | 2 years ago

This.

Another thing that is missing is for the field to adopt decision theory properly. At the moment it's usually left to the patient to decide that a treatment is not working for them and pick a different therapist. If we can't predict which treatment will work, we should at least have some research into how long to try each one, in what order, and any indicators that might allow us to switch early from one that is not working. At the moment I think a huge amount of suffering is endured because therapists don't have good incentives to give up when their treatment isn't working. (At least in my country, in the private sector. The public sector has different issues - they are incentivised to declare victory early - and I don't know how it works in the US.)
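The "how long to try each one, and in what order" question can be framed as a sequential decision problem. A toy simulation of one candidate policy family ("switch after k sessions with no response") is sketched below; the per-session response rates, the three-treatment setup, and the switching rule are all invented for illustration:

```python
import random

random.seed(1)

# Invented per-session probabilities that a given treatment produces a
# response for this patient. The patient doesn't know which is which.
TREATMENT_SUCCESS = [0.05, 0.30, 0.10]

def sessions_until_response(switch_after: int, max_sessions: int = 60) -> int:
    """Try each treatment for `switch_after` sessions; if no response, switch
    to the next treatment. Returns total sessions until first response (or cap)."""
    sessions = 0
    t = 0
    while sessions < max_sessions:
        for _ in range(switch_after):
            sessions += 1
            if random.random() < TREATMENT_SUCCESS[t]:
                return sessions
        t = (t + 1) % len(TREATMENT_SUCCESS)  # give up on this one, try the next
    return sessions

# Compare patience levels: how long do patients suffer under each policy?
for k in (3, 10, 30):
    avg = sum(sessions_until_response(k) for _ in range(20_000)) / 20_000
    print(f"switch after {k:2d} sessions -> avg sessions to first response ≈ {avg:.1f}")
```

With these made-up numbers, switching sooner finds the effective treatment faster on average, because sticking with a weak treatment for 30 sessions is costly. The point of the sketch is only that the switching rule itself is a studiable object, which is exactly the research gap being described.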

dang | 2 years ago

Thanks for this! I have a couple questions:

> methods and approaches (for approaching interventions in general) that are more uniquely suited to behavior and psychological phenomena were kind of neglected

Can you give some examples and expand on this? I realize it's speculative but I'd like to hear your speculations :)

> we don't really have a good way to predict what will work for whom. So, for example, the efficacy for drug A might be sort of small to modest overall, but high for one subset of people, and low for another subset of people; the efficacy for psychotherapy B follows similar patterns

In the case of psychotherapy, my feeling is that there's an additional complication: efficacy depends not only on what (is it psychotherapy B1, B2, etc.) but also on who, because efficacy also has to do with the quality of the relationship between therapist and client, which varies a lot and is not a function of modality*. Do you agree? And if yes, what methods do you think would be best suited to studying this?

(* I dislike that word "modality" but it's what people use to describe the different therapeutic methods, so it's at least clear in this context.)

throwup238 | 2 years ago

The metastudy compares against "placebo or treatment as usual", which means the placebo group is getting some kind of therapy, just not the specific kind being tested.

If this metastudy is accurate, it means that therapy is effective but your choice of therapy method doesn't matter very much.