The challenges for journalists writing about clinical trials

28th June 2013

This article was written by Ruth Francis, Head of Communications at BioMed Central.

Three weeks before this year’s World Conference of Science Journalists (WCSJ) in Helsinki, I was asked to participate in a last-minute session on the challenges of reporting clinical trials. At just 40 minutes long, the panel was supposed to be: ‘fast tempo, tweeted and with interaction.’ It certainly covered those bases.

I should declare upfront that I am not an expert. It’s a while since I worked with medical research, and I’ve never been a journalist. I’ve been on the publicity side, working with basic science, for almost a decade. Agreeing to be on this panel would provide the perfect excuse to revise and find out more about a subject relevant to my new role as Head of Communications at BioMed Central. We own Current Controlled Trials, which hosts the ISRCTN register, one of a small number of internationally approved trial registries, on behalf of the non-profit ISRCTN organisation. We also publish the journal Trials and the Journal of Negative Results in BioMedicine.

Saying yes was an opportunity to learn as much as I could about best practice in a field that is both of interest and relevant to my job. I set about crowdsourcing, asking real experts for their top tips. Ben Goldacre, a founder of the AllTrials campaign, Henry Scowcroft and Paul Thorne at Cancer Research UK, and Sense About Science gave me advice and thoughts. I had conversations with journalists who cover trials well, in particular TV Padma from SciDev.Net, who wrote this helpful and practical guide for reporters covering clinical trials.

There are various points at which clinical trials can make news; what follows focuses on reporting results, either from conferences or when they are published. Sometimes clinicians and funders will drum up publicity in order to recruit patients, but if there are results it is probably best to wait until the trial is published and in the public domain. There are times when conference talks make headlines; this cannot be avoided. When it happens, extra thought is required over whether the trial has concluded, and what effect an early report without available data could have on patients and on healthy individuals. Occasionally a trial may be discontinued, and this throws up its own communications challenges.

Below are the ten tips I put together for the presentation, and which I tweeted afterwards. There’s a Storify that lays them out simply. I’ve tried to add a little context below.

  1. Was this trial registered before it began? It should have been! Registration allows the trial to be scrutinised and makes sure negative results don’t get hidden.
  2. Is the primary outcome reported in the paper the same as the primary outcome specified in the protocol? If not, why not? Though researchers may publish their protocol in advance when registering a trial, not all registries require it.
  3. Look for other trials by this company or group, or on this treatment, on the registries: have they all been published? If not, then this report possibly represents a biased, cherry-picked finding.
  4. ALWAYS mention who funded the trial – it matters. Check whether any members of the ethics committee have an interest in the funding company.
  5. Look at whether the country where the work is done will benefit from the trial. Will it get the drug at a lower cost or not? Is the trial investigating a disorder or disease that is a problem in that country?
  6. How many patients were on the trial, and how many were in each arm? Some trials can only have small patient groups, but others will need to study many more patients to support the conclusions being claimed.
  7. What was being compared? Was it drug vs placebo? Drug vs standard care? Drug with no control arm?
  8. Be precise about the sort of people or patients who benefited – was it only those with advanced disease, or with a particular form of the disease? Contextualise how common such patients are in the bigger picture.
  9. Report natural frequencies, as these are much more meaningful to both laypeople and experts: ‘13 people per 1,000 experienced x’ is clearer than ‘1.3% of people experienced x’.
  10. Avoid relative risks and try to paint the findings in meaningful terms. If you say the drug improved survival by X%, it’s hard to know quite what that means. If you wrote that people taking the drug lived four months longer on average, you’d be clearer. Best of all would be to say: patients taking the drug lived four months longer on average, which is two months longer than the control group.
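Tips 9 and 10 boil down to simple arithmetic. As an illustration only (the functions and numbers below are my own sketch, not taken from the article or the guides it mentions), here is how a reporter might convert a percentage into a natural frequency, and a relative risk reduction into an absolute difference per 1,000 people:

```python
def natural_frequency(percent, per=1000):
    """Tip 9: convert a percentage into 'n people per `per`'."""
    return percent / 100 * per

def absolute_difference(control_rate, treated_rate, per=1000):
    """Tip 10: absolute difference per `per` people, given event
    rates as fractions (0.013 means 1.3%)."""
    return (control_rate - treated_rate) * per

# Tip 9: '1.3% of people experienced x' becomes roughly 13 per 1,000.
print(f"{natural_frequency(1.3):g} people per 1,000")

# Tip 10: a '50% relative risk reduction' sounds dramatic, but if the
# event only affects 2 in 1,000 people, it amounts to about 1 fewer
# case per 1,000 in absolute terms.
control = 2 / 1000
treated = control * 0.5  # a 50% relative reduction
print(f"about {absolute_difference(control, treated):g} fewer case per 1,000")
```

The second example is the point of tip 10: the same result framed as a relative reduction and as an absolute difference can leave readers with very different impressions.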

I realise now that I left out an eleventh tip: always seek advice from another expert who is not involved in the trial and has no vested interest. This is true when reporting any story, but it is worth reiterating here.

BioMed Central, the open access publisher, has signed the petition supporting the AllTrials campaign.