A summary of the red flags suggesting further enquiry. The actual items are more precisely defined than my summaries of them, making it easier to answer each with a 'yes' or 'no'.
Governance
No registration of the trial before the trial starts
Discrepancy between planned and actual sample size
Issues with ethics - e.g. no report of ethics approval, indication of an ethics concern
Author group
Low number of authors, given the size of the study
Previous problems with the authors (non-voluntary retraction of papers)
Large number of RCTs produced in a short time by the author/institute
Plausibility of intervention usage (I think this is talking about blinding)
Inadequate or implausible description of allocation concealment
Unnecessary or illogical description of methodology (e.g. use of sealed envelopes in a placebo-controlled study)
Timeframe
Fast recruitment of participants
Short time between trial ending and paper submission
Drop-out rates
No drop-outs, or no reasons given for drop-outs
Ideal number of losses resulting in round numbers of participants (e.g. 50 or 100)
Baseline Characteristics
Few relevant baseline characteristics reported
Implausible participant characteristics (e.g. similar SDs for different characteristics)
Perfect balance between multiple baseline characteristics, or large difference between baseline characteristics
Outcomes
Abnormally large effect size
Conflicting information between outcomes (e.g. more ongoing pregnancies than clinical pregnancies)
Change in primary outcome from registration to publication
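Purely as an illustration of how a screening tally over the summary above might work (this is my sketch, not the actual checklist's scoring; the item names and the investigation threshold are assumptions):

```python
# Hypothetical red-flag tally based on the summary above.
# Item names and the threshold are illustrative assumptions,
# not taken from the published checklist.

RED_FLAGS = {
    # Governance
    "no_prospective_trial_registration",
    "planned_vs_actual_sample_size_discrepancy",
    "ethics_concerns",
    # Author group
    "few_authors_for_study_size",
    "prior_non_voluntary_retractions",
    "many_rcts_in_short_time",
    # Plausibility
    "implausible_allocation_concealment",
    "illogical_methodology_description",
    # Timeframe
    "implausibly_fast_recruitment",
    "short_gap_between_trial_end_and_submission",
    # Drop-out rates
    "no_or_unexplained_dropouts",
    "suspiciously_round_participant_numbers",
    # Baseline characteristics
    "few_baseline_characteristics_reported",
    "implausible_baseline_characteristics",
    "perfect_or_extreme_baseline_balance",
    # Outcomes
    "abnormally_large_effect_size",
    "conflicting_outcome_data",
    "primary_outcome_changed_after_registration",
}


def screen(flags_present: set[str], threshold: int = 3) -> tuple[int, bool]:
    """Count recognised red flags and say whether the paper would
    warrant further investigation under an assumed threshold."""
    count = sum(1 for flag in flags_present if flag in RED_FLAGS)
    return count, count >= threshold
```

As the thread's later discussion notes, any fixed threshold like this is gameable: a fabricator who knows the list can simply avoid tripping enough items.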
Limitations
Interesting list. The drop-out issue is one that fascinates me in CFS research: using the treatment itself as a filter for survival of the fittest. I can't think specifically of what I'd say we'd be looking for there.
I'm assuming some of the focus is on outright fabrication, the 'made it up by filling in the questionnaires yourself' type of thing, but CFS research has been a business built on the trick of collateral damage: sacrifice the weak, and hey presto, those who completed your treatment were the ones who were more likely to improve anyway.
Secondly, some patterns or reasons for concern are somewhat crude: other patterns of misconduct may not be picked up on by using this checklist. Fabricators can use the checklist to fabricate a paper that fits all the items of the checklist. As a screening tool, it may misclassify papers that either do or do not warrant further investigation. Lastly, using our checklist can be time-consuming depending on the article being screened and the capacity and experience of users.
Quite astounding that these poor authors need to state this. It should all be happening anyway, shouldn't it? So if editors and peer reviewers aren't already mentally thinking of these things, and therefore finding a list useful only in case they forgot anything, aren't they kind of unqualified?
Good editors and peer reviewers know a lot about how to do their job well. And a lot of what they know can be extracted and put into systems to ensure that even the hopeless ones on a bad day can do a mostly passable job.
There is the double-edged sword. A good list could perhaps give cover for those good editors and peer reviewers to keep their jobs, or to press suggested amendments/flags when faced with a hierarchy (and I'd guess, theoretically, if things got really bad, some leg to stand on: proof that it wasn't 'just personal judgement', if it did affect their job etc.).
The thing is, if the list is about 'those other' problems that rarely occur and don't affect most papers, then it presents a really good distraction: papers that are just as problematic can say they weren't flagged for these things.
I agree with @Trish that it is a bit weird to have such a list and then say a paper needs so many of these major/gross things just to be 'further investigated'. That in itself red-flags that maybe it is just cover for 'being seen to have a process', but..... the old classics apply: what type of process, by when, and is it retrospective, in the never-never, etc.
Meanwhile, behavioural and therapist-delivered research gets a get-out-of-jail-free card on methods compared with licensed areas, and bungs an extra name on the author list to keep the number of flags down.
@Hutan, I do agree that 'something must be done'.
Who would it best be done by (given hierarchies, could it be independent of your potential future boss, or of other conflicts of interest), and what should be on it (things people could be made to care about, rather than bombastically brush off)...
I do like the idea of ethics committees having to hold these things to account, and of flagging 'prolific papers' and repeat offenders on 'smaller' (not small) issues*, to stop the problems happening in the first place and really scare people off. But I guess the only real leverage is in funding, and in getting institutions to sign up voluntarily to codes of conduct for the sake of reputation etc.
*EDIT: like the research where people severely ill with MS were trawled in under the guise of whatever they were told, and actually given words like 'custard' and 'tired'. If the ill group took longer to read the words (both types, I think, in reality, but only the 'tired'/illness-related words were reported), it was taken as proof that they were catastrophising and hypervigilant, which supposedly caused their 'fatigue' (a bit circular, as the illness probably also caused their slowness). I'm now going to name that study 'the Ross-Morris slow readers have the fear study' ? any takers or improvements?
Having said that, whilst I'm aware it would never happen, that latter bit, 'sign the code of conduct'.. or don't.., would be no skin off the nose of most subjects, and could be one of the few things that highlights the particular fields that would hesitate to implement something affecting a large proportion of their research?
How can this stuff get done without having insiders who might have their own agenda inside the tent?