Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot, 2023, Mol et al

Discussion in 'Research methodology news and research' started by Hutan, Apr 4, 2024.

  1. Hutan (Moderator, Staff Member)
    Abstract
    Objectives: To propose a checklist that can be used to assess trustworthiness of randomized controlled trials (RCTs).

    Design: A screening tool was developed using the four-stage approach proposed by Moher et al. This included defining the scope, reviewing the evidence base, suggesting a list of items from piloting, and holding a consensus meeting. The initial checklist was set up by a core group who had been involved in the assessment of problematic RCTs for several years. We piloted this in a consensus panel of several stakeholders, including health professionals, reviewers, journal editors, policymakers, researchers, and evidence-synthesis specialists. Each member was asked to score three articles with the checklist and the results were then discussed in consensus meetings.

    Outcome: The Trustworthiness in RAndomised Clinical Trials (TRACT) checklist includes 19 items organised into seven domains that are applicable to every RCT: 1) Governance, 2) Author Group, 3) Plausibility of Intervention Usage, 4) Timeframe, 5) Drop-out Rates, 6) Baseline Characteristics, and 7) Outcomes. Each item can be answered as either no concerns, some concerns/no information, or major concerns. If a study is assessed and found to have a majority of items rated at a major concern level, then editors, reviewers or evidence synthesizers should consider a more thorough investigation, including assessment of original individual participant data.

    Conclusions: The TRACT checklist is the first checklist developed specifically to detect trustworthiness issues in RCTs. It might help editors, publishers and researchers to screen for such issues in submitted or published RCTs in a transparent and replicable manner.

    Keywords: Checklist; Randomised controlled trials; Research integrity; Trustworthiness.

    https://pubmed.ncbi.nlm.nih.gov/37337220/

    Although it doesn't say so in the abstract, this is supposed to have something to do with Cochrane.
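    For what it's worth, here is how I read the scoring scheme in the abstract, written out as a minimal Python sketch. The names and the example numbers are my own, not the paper's; all the abstract specifies is the three rating levels and the "majority of items at major concern" trigger for a closer look.

    Code:
    from collections import Counter
    from enum import Enum

    # The three rating levels described in the abstract.
    class Rating(Enum):
        NO_CONCERNS = "no concerns"
        SOME_CONCERNS = "some concerns / no information"
        MAJOR_CONCERNS = "major concerns"

    def needs_further_investigation(ratings):
        """Triage rule as I read the abstract: if a majority of the 19 items is
        rated at the 'major concerns' level, a more thorough investigation
        (e.g. of the original individual participant data) is suggested."""
        counts = Counter(ratings)
        return counts[Rating.MAJOR_CONCERNS] > len(ratings) / 2

    # Hypothetical ratings for the 19 items of an imaginary submission.
    ratings = ([Rating.MAJOR_CONCERNS] * 10
               + [Rating.SOME_CONCERNS] * 5
               + [Rating.NO_CONCERNS] * 4)
    print(needs_further_investigation(ratings))  # True - 10 of 19 items at 'major concerns'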
     
  2. Hutan (Moderator, Staff Member)
    This appears to be the main link with Cochrane. Also, one of the authors is a member of a Cochrane group.
     
  3. Hutan (Moderator, Staff Member)
    A summary of the red flags suggesting further enquiry follows. The actual items are rather more tightly defined than my summaries here, making it easier to answer 'yes' or 'no'. (A rough sketch of how these domains and items could be encoded follows the list.)

    Governance

    No registration of the trial before the trial starts
    Discrepancy between planned and actual sample size
    Issues with ethics - e.g. no report of ethics approval, indication of an ethics concern

    Author group
    Low number of authors, given the size of the study
    Previous problems with the authors (non-voluntary retraction of papers)
    Large number of RCTs produced in a short time by the author/institute

    Plausibility of intervention usage (I think this is talking about blinding)
    Inadequate or implausible description of allocation concealment
    Unnecessary or illogical description of methodology (e.g. the use of sealed envelopes in a placebo-controlled study)

    Timeframe
    Fast recruitment of participants
    Short time between trial ending and paper submission

    Drop-out rates
    No drop-outs, or no reasons given for drop-outs
    Ideal number of losses resulting in round numbers of participants (e.g. 50 or 100)

    Baseline Characteristics
    Few relevant baseline characteristics reported
    Implausible participant characteristics (e.g. similar SDs for different characteristics)
    Perfect balance between multiple baseline characteristics, or large difference between baseline characteristics

    Outcomes
    Abnormally large effect size
    Conflicting information between outcomes (e.g. more ongoing pregnancies than clinical pregnancies)
    Change in primary outcome from registration to publication
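
    As a rough illustration (mine, not the paper's - the wording below is my paraphrase rather than the checklist's exact item text), the domains and red flags above could be encoded so that a reviewer records a yes/no answer per item and tallies the result:

    Code:
    # My paraphrase of the TRACT domains and red-flag items summarised above;
    # the actual checklist items are more tightly defined than this.
    TRACT_RED_FLAGS = {
        "Governance": [
            "No registration of the trial before the trial starts",
            "Discrepancy between planned and actual sample size",
            "Issues with ethics (no approval reported, or an ethics concern)",
        ],
        "Author group": [
            "Low number of authors, given the size of the study",
            "Previous problems with the authors (non-voluntary retractions)",
            "Large number of RCTs produced in a short time by the author/institute",
        ],
        "Plausibility of intervention usage": [
            "Inadequate or implausible description of allocation concealment",
            "Unnecessary or illogical description of methodology",
        ],
        "Timeframe": [
            "Fast recruitment of participants",
            "Short time between trial ending and paper submission",
        ],
        "Drop-out rates": [
            "No drop-outs, or no reasons given for drop-outs",
            "Losses that leave round numbers of participants (e.g. 50 or 100)",
        ],
        "Baseline characteristics": [
            "Few relevant baseline characteristics reported",
            "Implausible participant characteristics (e.g. similar SDs)",
            "Perfect balance, or large differences, between baseline characteristics",
        ],
        "Outcomes": [
            "Abnormally large effect size",
            "Conflicting information between outcomes",
            "Change in primary outcome from registration to publication",
        ],
    }

    # A reviewer's answers could then be tallied, e.g.:
    flags = {item: False for items in TRACT_RED_FLAGS.values() for item in items}
    flags["No registration of the trial before the trial starts"] = True
    print(sum(flags.values()), "of", len(flags), "red flags raised")  # 1 of 18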


    Limitations
     
  4. Jonathan Edwards (Senior Member, Voting Rights)
    It seems a farce.

    A bit like suggested criteria for bad music (some of which were applied to Mozart and Wagner):

    'Too many notes'
    'Fails to use sonata form'
    'Too small an orchestra'
    'Boring old stuff'
    'Keeps playing the same tune'

    It is a classic example of the floating expert phenomenon. How do you generate the most expert advice? Call in some experts. How do these experts know they are experts? Ask some more experts.

    What is forgotten is that real experts cannot be bothered with this sort of drivel. 'Experts' are always, by definition, not the experts but people who would like to be.


    And what the heck are 'stakeholders' doing in there? Surely the whole point is to avoid stakeholders and stick to rational argument.
     
  5. Hutan (Moderator, Staff Member)
    I'm surprised that there is nothing there about conflicts of interest of the authors. That is something we have seen a lot of on the forum - for example, the people with financial interests in a chain of hyperbaric oxygen clinics find that hyperbaric oxygen treatment solves all the problems.
     
  6. JellyBabyKid (Senior Member, Voting Rights)
    This explains so much... :rolleyes:
     
  7. Hutan (Moderator, Staff Member)
    :laugh: You and I seem to be going through a period of disagreement, Jonathan.

    I like the idea of this - a checklist to guide inexperienced editors checking a study. If editors had to go through a checklist and then follow up on items that are red flags, some of the shockingly bad papers we have seen would not have made it to publication. I think the actual content could be improved.

    Clearly there is a shortage of experts, given the poor quality of papers we see. A checklist and some quality assurance systems could improve the performance of the many mediocre editors and reviewers. I think it's important to note that the items don't mean that a paper gets chucked out - they are just an alert to look harder.
     
  8. Trish (Moderator, Staff Member)
    I haven't read the article yet, but this bit from the abstract seemed all wrong:

    Surely if any single item is rated at major concern level they need to do a thorough investigation. For example, the main 2011 PACE paper should have been immediately flagged as likely to be useless because of the single flag of unblinded interventions and subjective primary outcomes.
     
  9. Jonathan Edwards (Senior Member, Voting Rights)
    Well, you *****y well shouldn't, @Hutan!

    1. If someone does not have a full grasp of problem issues with trials and how to apply them consistently they should not be an editor.

    This isn't difficult. All clinicians get training in all the things we discuss here. The big problem is that many of them are not intelligent enough to fully understand why these problems have the impact they do. An even bigger problem is that, for 90% of clinicians, vested interests or conveniences or back-scratching patterns mean they conveniently forget not only everything they were taught but all the checklists you could possibly throw at them! We have seen this on the forum to an extent that even surprised me.

    2. Even if you did think a checklist was useful, it needs to bear some relation to reality. Where is the reference to subjective outcome bias, blinding and so on? I think it may be very instructive that these are yet again omitted.

    Why should the number of authors matter? My initial study of rituximab in RA had two authors, only five patients, no controls, and was open label. It was turned down and even publicly ridiculed by the editors of the two major relevant journals. But I persuaded the second one to take it. In the meantime, the drug company scientists could see it was barn-door obvious that the data had to be significant, because they used objective measures backed up by pharmacodynamic profiles that could not conceivably have any other explanation. To see why that matters you just need common sense, not checklists, which in my case would have come out no, no, no, no. And PACE sails through - not because the Lancet did not know perfectly well what the problems were, nor even the kindly Professor W, who no doubt had a quick shufti at it. They were stakeholders, so who cares?

    I think things may be a lot worse than you imagine. This proposal is eerily like the RoB tools from Jonathan Sterne and co. It conveniently misses out the really bad crimes. It is relatively amateur in comparison to RoB, but the errors in RoB were egregious enough. It comes from a group of gynaecologists who seem to be centred on an epidemiology and statistics unit in Amsterdam.

    Why would it even occur to someone to suggest that a big study needed lots of authors? Easy. So that the staff of the Clinical Investigation Unit and all the statisticians got their names on the paper too. We saw this with RoB2. It is, I suspect, a deliberate exercise in brand whitewashing by professional churners-out of trial papers. It even has the Cochrane stamp of approval!!
     
  10. Jonathan Edwards (Senior Member, Voting Rights)
    Yes, this is exactly the sort of pseudoarithmetic nonsense we have seen with GRADE and RoB from statisticians. It has nothing whatever to do with reality.
     
  11. Caroline Struthers (Senior Member, Voting Rights)
    The real cause of all dodgy trials is conflict of interest, which never features in lists of factors affecting the reliability of trials - the elephant in the room. Also, editors (who seem to make it their business not to understand science) have way too much power and can never be challenged. Ridiculous.
     
  12. Trish (Moderator, Staff Member)
    It looks like they are only going for the most blatant fraud, not poor methodology.

    The thing that amazes me most about this article is the number of people and meetings involved in producing it.
     
  13. Hutan (Moderator, Staff Member)
    :)

    And yet there they are.

    In the industry I used to work in, we were under no illusions: some of the people working in it were inexperienced, lacked knowledge, were over-worked, were under the influence of drugs, and/or even at the best of times could not think at the level needed to work out all the possible things they needed to consider. But we needed them all to perform at at least a satisfactory level, to keep people safe and to make profits. So we had checklists, and systems that meant that something couldn't progress to the next step until the checklist had been gone through. The aviation industry has checklists for exactly the same reason.

    Releasing a flawed RCT out into the world has the potential to cause a lot of harm - so the process of doing that warrants a quality management system.

    I remember, from some study I did that included hospital management case studies, that it was largely the doctors who resisted checklists and systems, because they felt that they didn't need checking - that they got things right all the time.

    Good editors and peer reviewers know a lot about how to do their job well. And a lot of what they know can be extracted and put into systems to ensure that even the hopeless ones on a bad day can do a mostly passable job.

    Sure.

    Yes, I was hoping to see that. That was why I made this thread, so we could see if it was there. And then potentially use it in our argument with Cochrane.
     
  14. Caroline Struthers (Senior Member, Voting Rights)
    Or, if they are challenged, they don't have to respond.
     
  15. Jonathan Edwards (Senior Member, Voting Rights)
    Indeed, and if I am not mistaken these people are only too keen to release more flawed trials - except flawed in ways not on their list.

    There is a word 'disingenuous'.
    (As in 'as disingenuous as the Cochrane founders'.)
     
  16. rvallee (Senior Member, Voting Rights)
    Most of this is up to individual judgment and open to interpretation. Which isn't necessarily bad in itself, but given the history of evidence-based medicine, where the people who do trials usually grade trials similar to their own, this is the same old problem. None of this comes close to reducing bias, which is the dominant issue. Also, I'm not sure how this applies to "RCTs" that aren't even properly controlled. Technically it shouldn't, but we all know that not being properly controlled is not an issue for most of evidence-based medicine, even though it's in the name, because 'controlled' can conveniently just mean 'clinical', without even downgrading its value.

    Who decides what's plausible? Psychosomatic ideology isn't plausible in itself, and yet it's almost universally believed to a high degree, capable of fully explaining any and all symptoms of unclear origin. Who decides what's an abnormally large effect? In a field where abnormal effects are the most common, since they're not based on plausible mechanisms and don't work? Where trying the same thing again and again until they get results they like is an accepted way of doing things?

    There is mention of pre-registration, but PACE was pre-registered, they just deviated from it in a specific way to turn a null into a fake abnormally large positive, and yet you point this out and are met with general indifference because reasons. Ah, well, it does mention changes in outcomes, but PACE is considered "gold standard", so clearly hardly anyone really cares about that. Just mumble a few words and ignore the issues raised and as long as it's popular it's all good. And there is nothing about bias, such as open interventions that explicitly seek to alter participants' response from people with a high degree of bias and conflict, even up to literally telling participants that the treatment they are 'trying' is safe and effective.

    It also mentions short time between registration and publication, but nothing about abnormally long times. We've seen this recently with... one of the Crawley trials, I think, that took 8 years or so to publish their results because they were bad.

    None of this amounts to independent review either. And it's not accountable in any way. And it's still way too chummy, when we take examples like Cochrane, where authors whose review has been found to be inadequate can simply not care and keep it published.

    Obviously a good checklist would be useful here, but technically this already exists in the form of existing standards. The usual problem is that they're simply ignored whenever convenient, so this is just yet another standard where items can be conveniently ignored as long as there is a cultural demand for something, or political incentives, or literally any reason to exempt anything falling short of the standard.

    [cartoon]
     
  17. Creekside (Senior Member, Voting Rights)
    I think it's too little, too late. It shouldn't be too much longer before AIs take over the task of checking studies for flaws, and do a better job of it. The critical problem is making sure that the bad studies aren't being falsely claimed as good during the training period.

    I propose that when an AI gets trained for this, it should be given access to S4ME and other "critical about bad studies" forums, to give it a wider perspective.
     
  18. Sean (Moderator, Staff Member)
    Academic freedom, including the freedom from any responsibility for your product.

    Nice work if you can get it.
     
  19. Sean (Moderator, Staff Member)
    I am not against checklists in principle. They have their place where there is a long list of things to check, particularly in maintaining safety standards, like making sure all surgical instruments are accounted for before closing up the patient, and equivalents in many other areas, like flying or maintaining aircraft.

    But like all tools they can be misused, and become ends in themselves rather than tools.

    As an assistant to help making a decision or judgement? No problem. As the standard or decision itself? Big problem.
     
  20. Caroline Struthers (Senior Member, Voting Rights)
    This cartoon is so good. I moved from Cochrane to what I thought was going to be a better life promoting the use of reporting guidelines to help improve the standard of reporting of medical research, trials, etc. I can tell you that the enthusiasm among academics for developing and publishing new and overlapping/contradictory reporting "standards" (aka guidelines) which nobody follows (journal editors publish papers whether they're well reported or not, because $$$) is just as great as their enthusiasm for developing endless checklists to help them work out whether research, well reported or badly reported, is "trustworthy".

    Nobody does anything about trying to ensure the research is necessary and well conducted in the first place, as I guess there's no $$$ in that. As the late Doug Altman said 30 years ago, we need less research, better research, and research done for the right reasons. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2539276/pdf/bmj00425-0005.pdf]
     
