Guidelines for Reporting Health Research: A User's Manual

EDITED BY

David Moher

Ottawa Hospital Research Institute and University of Ottawa, Ottawa, Canada

Douglas G. Altman

Centre for Statistics in Medicine, University of Oxford and EQUATOR Network, Oxford, UK

Kenneth F. Schulz

FHI 360, Durham, and UNC School of Medicine, Chapel Hill, North Carolina, USA

Iveta Simera

Centre for Statistics in Medicine, University of Oxford and EQUATOR Network, Oxford, UK

Elizabeth Wager

Sideview, Princes Risborough, UK

List of Contributors

  1. Douglas G. Altman Centre for Statistics in Medicine, University of Oxford, Oxford, UK
  2. Andrew Booth Cochrane Collaboration Qualitative Research Methods Group
  3. Andrew H. Briggs Health Economics and Health Technology Assessment, Institute of Health & Wellbeing, University of Glasgow, Glasgow, UK
  4. Patrick M.M. Bossuyt Department of Clinical Epidemiology & Biostatistics, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
  5. Isabelle Boutron Centre d'Epidémiologie Clinique, Assistance Publique-Hôpitaux de Paris, Paris, France; Centre Cochrane Français, INSERM U738, Université Paris Descartes, Paris, France
  6. Marion K. Campbell Health Services Research Unit, University of Aberdeen, Aberdeen, UK
  7. Margaret M. Cavenagh Cancer Diagnosis Program, Division of Cancer Treatment and Diagnosis, National Cancer Institute, Bethesda, MD, USA
  8. Myriam Cevallos CTU Bern and Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
  9. An-Wen Chan Women's College Research Institute, Toronto, ON, Canada; ICES@UofT, Toronto, ON, Canada; Department of Medicine, Women's College Hospital, University of Toronto, Toronto, ON, Canada
  10. Mike Clarke All-Ireland Hub for Trials Methodology Research, Centre for Public Health, Queen's University Belfast, Belfast, Northern Ireland
  11. Frank Davidoff Annals of Internal Medicine, Philadelphia, PA, USA
  12. Don C. Des Jarlais Baron Edmond de Rothschild Chemical Dependency Institute, Beth Israel Medical Center, New York, NY, USA
  13. Michael F. Drummond University of York, York, UK
  14. Matthias Egger Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
  15. Diana R. Elbourne London School of Hygiene and Tropical Medicine, London, UK
  16. Jeremy Grimshaw Ottawa Hospital Research Institute and University of Ottawa, Ottawa, ON, Canada
  17. Karin Hannes Cochrane Collaboration Qualitative Research Methods Group
  18. Angela Harden Cochrane Collaboration Qualitative Research Methods Group
  19. Janet Harris Cochrane Collaboration Qualitative Research Methods Group
  20. Allison Hirst Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
  21. John Hoey Queen's University, Kingston, ON, Canada
  22. Sally Hopewell Centre for Statistics in Medicine, University of Oxford, Oxford, UK; INSERM, U738, Paris, France; AP-HP (Assistance Publique des Hôpitaux de Paris), Hôpital Hôtel Dieu, Centre d'Epidémiologie Clinique, Paris, France; Univ. Paris Descartes, Sorbonne Paris Cité, Paris, France
  23. Timothy T. Houle Department of Anesthesiology, Wake Forest University School of Medicine, Winston-Salem, NC, USA
  24. Samuel J. Huber University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
  25. John P.A. Ioannidis Stanford Prevention Research Center, Department of Medicine and Division of Epidemiology, Department of Health Research and Policy, Stanford University School of Medicine, and Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA, USA
  26. Thomas A. Lang Tom Lang Communications and Training International, Kirkland, WA, USA
  27. Julian Little Department of Epidemiology and Community Medicine, Canada Research Chair in Human Genome Epidemiology, University of Ottawa, Ottawa, ON, Canada
  28. Elizabeth W. Loder British Medical Journal, London, UK; Division of Headache and Pain, Department of Neurology, Brigham and Women's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
  29. Hugh MacPherson Department of Health Studies, University of York, York, UK
  30. Lisa M. McShane Biometric Research Branch, National Cancer Institute, Bethesda, MD, USA
  31. Donald Miller Department of Anesthesia, The Ottawa Hospital, Ottawa Hospital Research Institute and University of Ottawa, Ottawa, ON, Canada
  32. David Moher Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
  33. Jane Noyes Centre for Health-Related Research, School for Healthcare Sciences, College of Health & Behavioural Sciences, Bangor University, Bangor, UK
  34. Mary Ocampo Ottawa Hospital Research Institute, Ottawa, ON, Canada
  35. Greg Ogrinc Dartmouth Medical School, Hanover, NH, USA
  36. Donald B. Penzien Department of Psychiatry, Wake Forest University School of Medicine, Winston-Salem, NC, USA
  37. Gilda Piaggio Statistika Consultoria Ltd, São Paulo, Brazil
  38. Jason L. Roberts Headache Editorial Office, Plymouth, MA, USA
  39. Philippe Ravaud Centre d'Epidémiologie Clinique, Assistance Publique-Hôpitaux de Paris, Paris, France; Centre Cochrane Français, INSERM U738, Université Paris Descartes, Paris, France
  40. John F. Rothrock Department of Neurology, University of Alabama at Birmingham, Birmingham, AL, USA
  41. Margaret Sampson Children's Hospital of Eastern Ontario, Ottawa, ON, Canada
  42. Willi Sauerbrei Department of Medical Biometry and Medical Informatics, University Medical Centre, Freiburg, Germany
  43. David L. Schriger UCLA Emergency Medicine Center, Los Angeles, CA, USA
  44. Kenneth F. Schulz FHI 360, Durham, and UNC School of Medicine, Chapel Hill, NC, USA
  45. Dugald Seely Ottawa Integrative Cancer Centre, Ottawa, ON, Canada
  46. Iveta Simera Centre for Statistics in Medicine, University of Oxford, Oxford, UK
  47. George C. M. Siontis Clinical Trials and Evidence-Based Medicine Unit, Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
  48. Cassandra Talerico Neurological Institute Research and Development Office, Cleveland Clinic, Cleveland, OH, USA
  49. Sheila E. Taube ST Consulting, Bethesda, MD, USA
  50. Jennifer Tetzlaff Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
  51. Allison Tong Sydney School of Public Health, University of Sydney, Sydney, Australia
  52. Dana P. Turner Department of Anesthesiology, Wake Forest University School of Medicine, Winston-Salem, NC, USA
  53. Elizabeth Wager Sideview, Princes Risborough, UK
  54. Laura Weeks Ottawa Integrative Cancer Centre, Ottawa, ON, Canada
  55. Merrick Zwarenstein Schulich School of Medicine and Dentistry, Western University, London, ON, Canada

Foreword

Guides to guidelines

Drummond Rennie, MD

University of California, San Francisco, USA

Introduction

Good patient care must be based on treatments that have been shown by good research to be effective. An intrinsic part of good research is a published paper that closely reflects the work done and the conclusions drawn. This book is about preventing, even curing, a widespread endemic disease: biased and inadequate reporting. This bias and poor reporting threaten to overwhelm the credibility of research and to ensure that our treatments are based on fiction, not fact.

Over the past two decades, there has been a spate of published guidelines on reporting, ostensibly to help authors improve the quality of their manuscripts. Following the guidelines, manuscripts will include all the information necessary for an informed reader to be fully persuaded by the paper. At the same time, the articles will be well organized, easy to read, well argued, and self-critical. The guidelines promise to help at every stage: from the design phase of the research, when they may serve as an intervention to remind investigators; through review, when editors and reviewers find it easy to get the facts and to note what facts are missing; all the way through to the reader of the published article, who finds it easy to access the facts, all of them in context.

To which, given the ignorance, ineptitude, inattention, and bias of so many investigators, reviewers, and journal editors, I would add a decisive “Maybe!”

How did it start? How did we get here?

In 1966, 47 years ago, Dr Stanley Schor, a biostatistician in the Department of Biostatistics at the American Medical Association, in Chicago, and Irving Karten, then a medical student, published in JAMA the results of a careful examination of a random sample of published reports taken from the 10 most prominent medical journals. Schor and Karten focused their attention on half of the reports that they considered to be “analytical studies,” 149 in number, as opposed to reports of cases. They identified 12 types of statistical errors, and they found that the conclusions were invalid in 73%. “None of the ten journals had more than 40% of its analytical studies considered acceptable; two of the ten had no acceptable reports.” Schor and Karten speculated on the implications for medical practice, given that these defects occurred in the most widely read and respected journals, and they ended presciently: “since, with the introduction of computers, much work is being done to make the results of studies appearing in medical journals more accessible to physicians, a considerable amount of misinformation could be disseminated rapidly.” Boy, did they get that one right!

Better yet, this extraordinary paper also included the results of an experiment: 514 manuscripts submitted to one journal were reviewed by a statistician. Only 26% were “acceptable” statistically. However, the intervention of a statistical review raised the “acceptable” rate to 74%. Schor and Karten's recommendation was that a statistician be made part of the investigator's team and of the editors' team as well [1]. Their findings were confirmed by many others, for example, Gardner and Bond [2].

I got my first taste of editing in 1977 at the New England Journal of Medicine; first there, and then at JAMA, the Journal of the American Medical Association, my daily job has been to try to select the best reports of the most innovative, important, and relevant research submitted to a large-circulation general medical journal. Although the best papers were exciting and solid, they seemed like islands floating in a swamp of paper rubbish. So from the start, the Schor/Karten paper was a beacon. Not only did the authors identify a major problem in the literature using scientific methods, but they also tested a solution and then made recommendations based on good evidence.

This became a major motivation for establishing the Peer Review Congresses. Exasperated, in 1986, I wrote:

One trouble is that despite this system (of peer review), anyone who reads journals widely and critically is forced to realize that there are scarcely any bars to eventual publication [3].

Was the broad literature so bad despite peer review or because of it? What sort of product, clinical research reports, was the public funding, and were we journals disseminating? Only research could find out, and so from the start the Congresses were limited strictly to reports of research.

At the same time, Iain Chalmers and his group in Oxford were struggling to make sense of the entire literature on interventions in health care, using and refining the science of meta-analysis to apply it to clinical reports. This meant that, with Chalmers' inspired creation of the Cochrane Collaboration, a great many bright individuals such as Altman, Moher, Dickersin, Chalmers, Schulz, Gøtzsche, and others were bringing intense skepticism and systematic scrutiny to assess the completeness and quality of reporting of clinical research and to identify those essential items, the inadequate reporting of which was associated with bias. The actual extent of biases, say, because of financial conflicts or failure to publish, could be measured, and from that came changes in the practices of journals, research institutions, and individual researchers. Eventually, there even came changes in the law (e.g., requirements to register clinical trials and then to post their results). Much of this research was presented at the Congresses [4–6]. The evidence was overwhelming that poor reporting biased conclusions – usually about recommended therapies [7]. The principles of randomized controlled trials, the bedrock of evidence about therapies, had been established 40 years before and none of it was rocket science. But time and again investigators had been shown to be making numerous simple but crucial mistakes in the reporting of such trials.

What to do about it?

In the early 1990s, two groups came up with recommendations for reporting randomized trials [8, 9]. These were published but produced no discernible effect. In discussions with David Moher, he suggested to me that JAMA should publish a clinical trial according to the SORT recommendation, which we did [10], calling for comments – which we got in large numbers. It was obvious that one of the reasons that the SORT recommendations never caught on was that while they were the product of a great deal of effort by distinguished experts, no one had actually tried them out in practice. When this was done, the resultant paper was unreadable, as the guidelines allowed no editorial flexibility and broke up the logic and flow of the article.

David and I realized that editors were crucial in this process. Put bluntly, if editors demanded it at a time when the authors were likely to be in a compliant frame of mind – when acceptance of their manuscript depended on their following orders – then editorial policy would become the standard for the profession.

Owing to the genius, persistence, and diplomacy of David Moher, the two groups got their representatives together, and from this CONSORT was born in 1996 [10–13]. Criticism was drowned in a flood of approval. This was because the evidence for inclusion of items on the checklist was presented, and the community was encouraged to comment. The backing of journal editors forced investigators to accept the standards, and the cooperation of editors was made easier when they were reassured, on Doug Altman's suggestion, that different journals were allowed flexibility in where they asked authors to include particular items. The guidelines were provisional; they were to be studied; and there was a process for revision as new evidence accumulated.

The acceptance of CONSORT was soon followed by the creation and publication of reporting guidelines in many other clinical areas. The founding of the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) [14] Network in 2008 was not only a recognition of the success of such guidelines but also of the need to get authors to write articles fit for purpose and to provide much-needed resources for all those involved with medical journals. As such, it represents a huge step in improving the transparency and quality of research reporting.

Are we there yet?

Forty-seven years later, Lang and Altman, referring to the Schor/Karten article that I mentioned at the beginning, write about the changes that seem to have occurred:

Articles with even major errors continue to pass editorial and peer review and to be published in leading journals. The truth is that the problem of poor statistical reporting is long-standing, widespread, potentially serious, concerns mostly basic statistics, and yet is largely unsuspected by most readers of the biomedical literature [15].

Lang and Altman refer to the statistical design and analysis of studies, but a study where these elements are faulty cannot be trusted. The report IS the research, and my bet is that other parts of a considerable proportion of clinical reports are likely to be just as faulty. That was my complaint in 1986, and it is depressing that it is still our beef after all these efforts. I suspect there is more bad research reported simply because every year there are more research reports, but whether things are improving or getting worse is unclear. What it does mean is that we have work to do. This book is an excellent place to start the prevention and cure of a vastly prevalent malady.

References

  1. Schor, S. & Karten, I. (1966) Statistical evaluation of medical journal manuscripts. JAMA, 195, 1123–1128.
  2. Gardner, M.J. & Bond, J. (1990) An exploratory study of statistical assessment of papers published in the British Medical Journal. JAMA, 263, 1355–1357.
  3. Rennie, D. (1986) Guarding the guardians: a conference on editorial peer review. JAMA, 256, 2391–2392.
  4. Dickersin, K. (1990) The existence of publication bias and risk factors for its occurrence. JAMA, 263, 1385–1389.
  5. Chalmers, T.C., Frank, C.S. & Reitman, D. (1990) Minimizing the three stages of publication bias. JAMA, 263, 1392–1395.
  6. Chalmers, I., Adams, M., Dickersin, K. et al. (1990) A cohort study of summary reports of controlled trials. JAMA, 263, 1401–1405.
  7. Schulz, K.F., Chalmers, I., Hayes, R.J. & Altman, D.G. (1995) Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA, 273, 408–412.
  8. The Standards of Reporting Trials Group (1994) A proposal for structured reporting of randomized controlled trials. JAMA, 272, 1926–1931.
  9. Working Group on Recommendations for Reporting of Clinical Trials in the Biomedical Literature (1994) Call for comments on a proposal to improve reporting of clinical trials in the biomedical literature. Annals of Internal Medicine, 121, 894–895.
  10. Rennie, D. (1995) Reporting randomised controlled trials. An experiment and a call for responses from readers. JAMA, 273, 1054–1055.
  11. Rennie, D. (1996) How to report randomized controlled trials. The CONSORT Statement. JAMA, 276, 649.
  12. Begg, C., Cho, M., Eastwood, S. et al. (1996) Improving the quality of reporting of randomized controlled trials. The CONSORT Statement. JAMA, 276, 637–639.
  13. The Asilomar Working Group on Recommendations for Reporting of Clinical Trials in the Biomedical Literature (1996) Checklist of information for inclusion in reports of clinical trials. Annals of Internal Medicine, 124, 741–743.
  14. http://www.equator-network.org/resource-centre/library-of-health-research-reporting/reporting-guidelines/
  15. Lang, T. & Altman, D. (2013) Basic statistical reporting for articles published in clinical medical journals: the SAMPL guidelines. In: Smart, P., Maisonneuve, H. & Polderman, A. (eds), Science Editors' Handbook. European Association of Science Editors, Redruth, Cornwall, UK.

Preface

Medical research is intended to lead to improvements in the knowledge underpinning the prevention and treatment of illnesses. The value of research publications is, however, nullified if the published reports of that research are inadequate. Recent decades have seen the accumulation of a vast amount of evidence that reports of research are often seriously deficient, across all specialties and all types of research. The good news is that many of these problems are correctable. Reporting guidelines offer one solution to the problem by helping to increase the completeness of reports of medical research. At their core, the vast majority of reporting guidelines consist of a checklist, which can be thought of as a reminder list for authors of what information should be included when reporting their research. When endorsed and implemented properly by journals, reporting guidelines can become powerful tools.

Since the original CONSORT Statement, published in 1996, the development of reporting guidelines has been prolific. By early 2014 there were more than 200 reporting guidelines listed in the EQUATOR Network's library, with several more in development. This book brings together many of the most commonly used reporting guidelines along with chapters on the development of the field itself. We encourage authors and peer reviewers to use reporting guidelines, and editors to endorse and implement them. Together this will help reduce waste and increase value. Using reporting guidelines will help to produce research papers that can pass future scrutiny, contribute usefully to systematic reviews, clinical practice guidelines, and policy decision making, and generally advance our scientific knowledge to improve patients' care and the lives of every one of us.

The reporting guidelines field is evolving quickly, which makes it a challenge to keep an 'old' technology – a hard-copy book – up to date. In this regard, readers should consult the EQUATOR web site (www.equator-network.org) for the most recent reporting guideline developments.

David Moher
Douglas G. Altman
Kenneth F. Schulz
Iveta Simera
Elizabeth Wager
10th March 2014

Part I

General Issues