See-through science
3rd December 2013
This is a guest post by Daniel Shanahan, Associate Publisher at BioMed Central, which is a supporter of the AllTrials campaign.
The movement for greater transparency in clinical trials has been gaining momentum, with a focus on prospective trial registration in databases such as the ISRCTN register, and on the complete reporting of all trials – finished and abandoned, positive and negative. But what makes transparency so important?
Evidence-based medicine has been around for a while. Although its use in practice was first described in 1992, its philosophical origins extend back to the mid-19th century or earlier. Simply put, it is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. This seems logical enough – medicine is fundamentally patient-focussed and you want to use what works, when it works, in the best way available.
But what happens when the evidence is incomplete? If you try something five times and it works once, you are unlikely to consider that a good bet. But what if you are unaware of the four failures? If you try something once and it works, for all you know, there’s no downside.
Furthermore, because patients are such a frustratingly diverse group, any attempt to establish causality is an educated guess at best. Rigorously conducted randomised controlled trials are the cornerstone of evidence-based medicine and seek to control this natural variation; even then, statistical significance is only an estimate of how unlikely it is that an observed effect is a fluke. Statistical significance does not tell us what caused the effect, nor does it guarantee that there is an effect; it simply indicates that there is likely to be one.
This really highlights the fallacy of labelling the results of clinical trials as either ‘positive’ or ‘negative’. To return to my previous example, if five clinical studies were conducted and only one was ‘positive’, you wouldn’t consider that particularly convincing evidence – yet for years there has been a bias towards writing up and publishing only ‘positive’ results, while the rest were hidden.
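The intuition behind the five-trials example can be made concrete with a quick back-of-the-envelope calculation (my own sketch, not something from the post or the AllTrials campaign): under the conventional p < 0.05 threshold, even a treatment with no real effect has roughly a 5% chance of producing a ‘positive’ result in any single trial, so across five independent trials the odds of at least one chance ‘positive’ are considerably higher than 5%.

```python
# Sketch: probability of at least one false-positive trial by chance,
# assuming five independent trials of a treatment with no true effect
# and the conventional significance threshold of p < 0.05.
alpha = 0.05      # chance of a 'positive' result in one null trial
n_trials = 5
p_at_least_one = 1 - (1 - alpha) ** n_trials
print(f"P(at least one chance 'positive' in {n_trials} trials) = {p_at_least_one:.3f}")
# prints: P(at least one chance 'positive' in 5 trials) = 0.226
```

If only that one ‘positive’ trial is published, readers see what looks like clean evidence of benefit, when a roughly one-in-four chance of a spurious ‘positive’ somewhere in the set was there all along.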
As a rule, people tend to be more eager to share good news, but the impact of this can be far-reaching. To paraphrase Alexander Pope, ‘a little knowledge is a dangerous thing’ – and by sharing only ‘positive’ results we are ensuring that health professionals have only a little knowledge.
Journals have been called ‘the minutes of science’. Publication documents what was done and, particularly in the case of open access, shares the outcome, providing a complete record that others can build on without repeating work – which can be costly in both time and money. This makes the complete reporting of all clinical research vitally important, not just going forward but retrospectively too, as current clinical decisions are based on the evidence available now. Knowing what did not work, in addition to what did, will give health professionals the tools they need to make fully informed decisions.