ECNP doesn’t do guidelines. They do not, after all, much concern the translational end of clinical neuroscience. Which is not to say they are not important or worthwhile. But they do present immense challenges. Their purpose is to summarise what works and how to choose what works for patients. They should therefore be balanced and dispassionate in their treatment of the evidence base, yet simple enough to be used in everyday practice and to provide standards for auditing performance.
This requires an enormous exercise of data synthesis. The method I instinctively support is systematic network meta-analysis. This brings together as many trials as possible with comparable designs in specific indications. The result is a kind of ranking by efficacy (on standard scales) and acceptability (as drop-out rates). I think this provides an ‘as unbiased as possible’ description of what randomised controlled trials (RCTs) can tell us. The coherence of these studies (if A beats B, and B beats C, A should also beat C) is a test that all is broadly well. However, the detailed ranking never pleases everyone. Neither should it, because the confidence intervals around the ranks are often very large. We should not be too confident about much more than identifying the worst options for treatment.
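For readers who like that coherence test stated formally, here is a minimal sketch in conventional notation (the symbols are mine, not taken from any particular guideline): writing d_XY for the pooled effect of treatment X relative to treatment Y on a common scale, the indirect comparison implied by the network should agree with the direct one.

\[
  d_{AC}^{\mathrm{indirect}} \;=\; d_{AB} + d_{BC}
  \qquad \text{should match} \qquad
  d_{AC}^{\mathrm{direct}}
\]

A gap between the two that is large relative to their confidence intervals is what flags inconsistency in the network.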
For some colleagues the actual methodology is repugnant. This is understandable, because an individual study may often seem to answer a question very convincingly, and to submerge it in the mess of meta-analysis is like putting a nicely cooked dish into the food blender and making soup. Unfortunately, we fool ourselves if we think that small studies can ever give us precise answers. The only rule of clinical trials is that confidence requires size, and confidence grows only as fast as the square root of the sample size. That means the definitive trial is a mega-trial. The study that told us almost all we need to know about simvastatin for patients with vascular disease randomised over 20,000 people (http://www.ncbi.nlm.nih.gov/pubmed/15016485). There have been no such trials in psychiatry. When John Geddes and I designed BALANCE, we wanted to enter thousands of patients to compare long-term treatment with valproate and lithium for bipolar patients. We actually managed 331: we could not raise the money to do more, and it took us a long time to do what we did.
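To put the square-root rule in figures (illustrative arithmetic of my own, not drawn from the trials above): if the uncertainty around an effect estimate scales as one over the square root of the number of patients, then

\[
  \mathrm{SE} \;\propto\; \frac{1}{\sqrt{N}}
  \quad \Longrightarrow \quad
  \frac{\mathrm{SE\ at\ }4N}{\mathrm{SE\ at\ }N} \;=\; \frac{1}{2},
\]

so quadrupling a trial only halves the width of its confidence interval, and a ten-fold gain in precision needs roughly a hundred times as many patients. That is why the definitive trial is a mega-trial.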
So psychiatry is stuck with inconclusive arguments about the comparative size of average effects. And if these differences are not very large, then they will not actually matter very much in shaping how we treat our patients. This is not going to change until we can further personalise treatment. The simplest way to do this is with routinely collected baseline measures such as age, education, employment, marital status, previous episodes, recent life events, previous treatment failures and personality. Some simple studies of trials comparing drug treatment with CBT for depression suggest this could be more promising than it sounds (Fournier et al., 2009, Journal of Consulting and Clinical Psychology, 77, 775-787). It would be a quick win from the possible availability of clinical trial data from companies. Of course the current fantasy is that we may achieve it more satisfyingly by using biomarkers. Biomarkers are simply ways of stratifying patients before treatment or of identifying a proximal response during treatment. If only they existed in a cheap and accessible form. Either way, we want to move to higher probabilities of individual benefit than average effects in trials can give.
What we should not do is lapse into nihilism. Our treatments do work moderately well, and no worse than those in other branches of medicine. The trials show that. Furthermore, so-called quasi-experimental designs are overcoming the inherent confounding problems of naturalistic data for psychiatric disorders. The Karolinska group led by Paul Lichtenstein and their collaborators have quietly conducted major studies of whole populations in Sweden. In bipolar disorder, for example, effect sizes from long-term drug treatments are substantial for important outcomes like violence, suicide and hospital admission. If psychiatric treatments are not valued enough by the powers that be to fund big trials, these whole-population samples assume vital importance in supporting practice.
So, no guidelines for ECNP but good luck to those writing them for other organisations, among whom I currently number myself.
Guy Goodwin
ECNP President