STARD guidelines: another piece of an intricate puzzle for evaluating the quality of scientific publishing
Preface on STARD Guideline


This focused issue of Annals of Translational Medicine provides a comprehensive discussion of the 2015 Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines. The editors of several authoritative journals offer their personal contributions on the topic, reviewing the many strengths, as well as the remaining gaps, in the publication of high-quality diagnostic studies. As the Guest Editor of this issue, I will try to offer a simple reading guide that may help illustrate the substantial contribution made by the new 2015 STARD guideline, but also some crucial questions that remain unanswered.

In a world with limited resources and constantly decreasing funding for basic, translational and clinical research, scientific publishing has become one of the most hyped metrics for evaluating scientists. Indeed, the heavy dependence of career progression and research funding on scientific publishing has had a negative impact on both the credibility and the practical significance of the literature (1). The first and foremost question we should ask is: why do we choose to publish scientific papers? There are many, non-mutually exclusive answers. First, we may laudably aim to publish a scientific article to improve knowledge in science and medicine, for example by validating new tests for screening, diagnosing and monitoring pathologies (2). However, we may also be driven to publish because we suffer from work addiction, which in certain individuals culminates in an overt mental disorder, the so-called syndrome of the "obsessive-compulsory scientist", whereby scientific publishing comes ahead of any other routine activity (3). Alternatively, we may feel compelled to publish in order to apply for a better position or to maintain current benefits. Last but not least, some scientists may receive substantial economic rewards from the medical and diagnostic industry for publishing positive results about a new drug or an innovative test. Whatever the answer, the risk of publishing unreliable scientific outcomes has now been broadly acknowledged by the scientific community.

The STARD guidelines were originally developed in 2003 to improve the reporting of diagnostic accuracy studies (4). In 2015, the Consortium released an updated STARD statement, which follows the same organization as the former, though the original list of 25 items has now been expanded to 30 (5).

In his valuable contribution, Peter A. Kavsak, Editor in Chief of Clinical Biochemistry, clearly describes why the STARD guidelines should be regarded as a key guide for evaluating diagnostic research studies (6). In the second contribution, Alan H.B. Wu, Editor in Chief of Clinica Chimica Acta, and Robert H. Christenson (7) outline the many other advantages of the 2015 STARD guidelines, but also emphasize that improvements are still possible in a forthcoming revision, which should also embrace preanalytical issues, since these activities are the most vulnerable to inaccuracy and errors throughout the total testing process (8). In the following contribution, Zhi-De Hu, Section Editor of Annals of Translational Medicine, nicely reviews the real-world application of the STARD guidelines in scientific publishing, concluding that a thoughtful elaboration may be needed to help authors, reviewers and journal editors better understand the meaning, rationale and optimal use of each item on the checklist (9). Finally, Mario Plebani and Giuseppe Lippi, Editors of Clinical Chemistry and Laboratory Medicine, highlight the unquestionable value of the STARD guidelines, but also delineate additional areas of improvement that may be advisable in a forthcoming revision (10).

Therefore, although we would all agree that widespread adoption of the STARD 2015 guidelines should be regarded as the road ahead for improving the quality of diagnostic studies published in our journals, some crucial issues remain. Briefly, can we be sure that straightforward fulfillment of the STARD 2015 guidelines will be enough to ensure the quality and integrity of published studies?

Earlier this year, Michael McCarthy reported in The BMJ that two additional papers were to be retracted after an investigation found that data had been fabricated by the Australian lead author (11). In the 17 December issue of the New England Journal of Medicine, indisputably one of the most prestigious scientific journals worldwide, Charlotte J. Haug wrote a notable editorial about potential fraud in the peer-review process, attributable to the creation of fake reviewers (12). Finally, an unquestionable bias remains in the current scientific literature, in that negative, unexpected or controversial results are rarely submitted or, even worse, are scarcely attractive to scientific journals (13).

Indeed, the STARD 2015 guidelines have little power or authority to prevent scientific fraud and fake peer review, though the outcome of biased studies and scientific cheating may be even worse than that of publishing inaccurate reports. As such, the STARD 2015 guidelines should be seen as another (valuable) piece of an intricate puzzle for evaluating the quality of scientific publishing.


References

  1. Stone R, Jasny B. Communication in science pressures and predators. Scientific discourse: buckling at the seams. Introduction. Science 2013;342:56-7. [PubMed]
  2. Lippi G, Plebani M. Laboratory medicine does matter in science (and medicine)… yet many seem to ignore it. Clin Chem Lab Med 2015;53:1655-6. [PubMed]
  3. Lippi G, Plebani M, Franchini M. The syndrome of the "obsessive-compulsory scientist": a new mental disorder? Clin Chem Lab Med 2013;51:1575-7. [PubMed]
  4. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Chem Lab Med 2003;41:68-73. [PubMed]
  5. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies. Clin Chem 2015;61:1446-52. [PubMed]
  6. Kavsak PA. A STAR-Document for those interested in evaluating diagnostic research studies. Ann Transl Med 2016;4:45.
  7. Wu AH, Christenson RH. The standards for reporting diagnostic accuracy studies 2015 update: is there a missing link to the triumvirate? Ann Transl Med 2016;4:44.
  8. Lippi G, Banfi G, Church S, et al. Preanalytical quality improvement. In pursuit of harmony, on behalf of European Federation for Clinical Chemistry and Laboratory Medicine (EFLM) Working group for Preanalytical Phase (WG-PRE). Clin Chem Lab Med 2015;53:357-70. [PubMed]
  9. Hu ZD. STARD guideline in diagnostic accuracy tests: perspective from a systematic reviewer. Ann Transl Med 2016;4:46.
  10. Lippi G, Plebani M. Improving accuracy of diagnostic studies in a world with limited resources: a road ahead. Ann Transl Med 2016;4:43.
  11. McCarthy M. Ramipril research papers are retracted over faked data. BMJ 2015;351:h5035. [PubMed]
  12. Haug CJ. Peer-Review Fraud--Hacking the Scientific Publication Process. N Engl J Med 2015;373:2393-5. [PubMed]
  13. Matosin N, Frank E, Engel M, et al. Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture. Dis Model Mech 2014;7:171-3. [PubMed]

Prof. Giuseppe Lippi

Section of Clinical Biochemistry, University of Verona, Verona, Italy.
(Email: giuseppe.lippi@univr.it)

doi: 10.3978/j.issn.2305-5839.2016.01.02

Conflicts of Interest: The author has no conflicts of interest to declare.

Cite this article as: Lippi G. STARD guidelines: another piece of an intricate puzzle for evaluating the quality of scientific publishing. Ann Transl Med 2016;4(3):42. doi: 10.3978/j.issn.2305-5839.2016.01.02
