Retraction Watch: Journals are failing to address duplication in the literature, says a new study

Andy

Retired committee member
How seriously are journals taking duplicated work that they publish? That was the question Mario Malički and colleagues set out to answer six years ago. And last month, they published their findings in Biochemia Medica.

The upshot? Journals have a lot of work to do.

Since we’re often asked why duplication is a problem, we’ll quote from the new paper: Duplication “can inflate an author’s or journal’s prestige, but wastes time and resources of readers, peer reviewers, and publishers. Duplication of data can also lead to biased estimates of efficacy or safety of treatments and products in meta-analyses of health interventions, as the same data which is calculated twice exaggerates the accuracy of the analysis, and leaves an impression that more patients were involved in testing a drug. Not referencing the origin or the overlap of the data, can therefore be considered akin to fabrication, as it implies the data or information is new, when in fact it is not.”
https://retractionwatch.com/2019/01...plication-in-the-literature-says-a-new-study/

RW: What were your main findings?

MM: Our main finding is that duplicate publications are not being addressed – in our study only 54% (n=194 of 359) of duplicates were addressed by journals, and only 9% (n=33) were retracted, although all of them should have been retracted according to editorial standards (e.g. COPE).
 
MM: No, they were cases where authors misused the publishers’ trust and did not inform them of publications they had already published or submitted. We used the term “authors’ actions” rather than misconduct because self-plagiarism is not legally defined as misconduct everywhere. But we definitely see it as a detrimental research practice.
"self-plagiarism" is an interesting term, and is akin to what so many of the BPS club do; cite their own and close colleagues' work to give the impression of independent validation.
 
"self-plagiarism" is an interesting term, and is akin to what so many of the BPS club do; cite their own and close colleagues' work to give the impression of independent validation.

Yes! I did a work placement years ago with the HSE; one of the things I was looking at was ME/CFS. I was shocked at the number of papers published by some of the names we know well that basically just reworked data they'd used in previous papers. And it certainly has an effect on the likes of Cochrane. :banghead:
 
I was recently thinking about this idea in relation to Wessely's outpourings between 1989 and 1991, all apparently based largely on the same cohort of 50 patients, of whom 30 accepted treatment. Twenty fulfilled criteria for probable or major depression and were offered tricyclic antidepressants (three of them refused), and a further three fulfilled criteria for other psychiatric disorders and were offered similar treatment.

In fairness, Wessely did state that "we have not conducted a randomised, controlled trial, but a pilot study, the necessary preliminary to a more formal assessment". It is a shame that all those influenced by these papers seem not to have been interested in that qualification of the findings.
 
Thank you for this posting, @Andy. We often see researchers who have published many hundreds of peer-reviewed papers. As it sometimes goes in academia, the more senior academics may have the juniors do the hard slogging while the seniors sign their names to it.

And now another wrinkle that explains those many hundreds of publications: write a paper, rinse and repeat.
 