Sly Saint

Senior Member (Voting Rights)
There are a number of tools available for journals to use to help assess RCTs.
I wondered if anyone here had 'tried them out' for PACE and/or any of the other RCTs
by Crawley or Chalder?

I've seen CASP mentioned:
https://casp-uk.net/wp-content/uploads/2018/01/CASP-Randomised-Controlled-Trial-Checklist.pdf

and Consort:
http://www.consort-statement.org/
"
The CONSORT Statement is endorsed by prominent general medical journals, many specialty medical journals, and leading editorial organizations. CONSORT is part of a broader effort, to improve the reporting of different types of health research, and indeed, to improve the quality of research used in decision-making in healthcare."

this is one from the BMJ:
https://bmjopen.bmj.com/content/sup...-2015-008807.DC1/bmjopen-2015-008807supp2.pdf

and this is the one used by NICE:
https://www.nice.org.uk/process/pmg...dology-checklist-randomised-controlled-trials

(do most reviewers at journals use these tools? and do they submit them to the journals or just use them to help make their decisions? @Lucibee )

@Tom Kindlon @Graham @dave30th
 
I agree such guidelines can be useful.
I have cited CONSORT guidelines in some of my publications. For example:

http://journals.sagepub.com/doi/pdf/10.1177/1359105317697323

The PACE trial demonstrated many elements of good trial reporting including regarding harms. For example, they published the manuals for both therapist and participants, as recommended by the CONSORT extension for trials assessing non-pharmacologic treatments (Boutron et al., 2008).
---
However, an important issue remains: the degree of adherence to the interventions. The CONSORT statement on harms notes that ‘it is important to report participants who are non-adherent or lost to follow-up because their actions may reflect their inability to tolerate the intervention’ (Ioannidis et al., 2004: 785).


https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60684-3/fulltext

Peter White and colleagues [1] claim that, if cognitive behaviour therapy and graded exercise therapy are delivered as described, they are “safe” for chronic fatigue syndrome (CFS); the CONSORT statement on harms reporting recommends against such claims. [2]
 
do most reviewers at journals use these tools? and do they submit them to the journals or just use them to help make their decisions?

I would hope that reviewers do not use these 'tools'. I certainly would not. It is a bit like having a tool to provide a jury with instructions on whether or not to find someone guilty. Tools like this might possibly be useful for people who are not familiar with trial assessment, to remind them of what they need to look out for. For people who are used to assessing trials (which should really include all properly trained physicians who have attended journal clubs throughout their careers), the important issues will be obvious. What may not be obvious are weaknesses relating to a specific trial context (like a trial that trains people to say they are better and then asks them if they are better), but standard tools are not going to pick those out.

Part of the problem with Cochrane has been that they have a tool for assessing quality of evidence from trials. And it is completely useless because it misses the most basic problems - as far as we can see from PACE.

The real problems I think are:
1. A lot of people in biomedical science are actually quite dim, at least in terms of ability to evaluate results. I am not sure what one does about that. Unfortunately the editors in chief of Lancet and BMJ both seem to fall into this category.
2. The establishment prefers to protect its own and cover up embarrassing failures rather than admit they got things wrong.
 
I agree with @Tom Kindlon that CONSORT guidelines are useful, but they don't enable you to assess the quality of trials. They are simply a reporting checklist to make sure that info is reported - ie, have they said what randomisation method they used, what the primary outcome measures were, etc. It doesn't then check that the randomisation method or outcome measures used were appropriate.

The Lancet does use CONSORT in this sense, but just as a box-ticking exercise.
 
Part of the problem with Cochrane has been that they have a tool for assessing quality of evidence from trials. And it is completely useless because it misses the most basic problems

This is probably a stupid question, but has anyone pointed this out?
Is there another trial (not ME related), or a hypothetical example, that could be used to demonstrate how these things are missed by the existing tools?

(Just had a quick search and see that the consort diagram was submitted for this 'Early intervention study', O'Dowd/Crawley https://www.s4me.info/threads/odowd-crawley-early-intervention-study.2931/ )

And I see that the PACE authors said that they were compliant with CONSORT guidelines:
https://www.s4me.info/threads/pace-trial-tsc-and-tmg-minutes-released.3150/page-22#post-88225

Surely something as basic as no (adequate) control group/s in a Randomised Controlled Trial should get flagged up(?)
 
Surely something as basic as no (adequate) control group/s in a Randomised Controlled Trial should get flagged up(?)

That is exactly what was not flagged up by Larun et al. If you have dim people applying 'tools' you don't necessarily get a classy piece of furniture, or even one that stays up when you sit on it.

And when a president of a Royal College does not understand the basics you get to realise how big a problem dim people are.

I pointed out to Cochrane that whatever evidence grading system they had it obviously was not working. There has been feedback suggesting that they realise that in the PACE case things were not applied properly but I am not sure it sounded as if they really understood the depth of the problem.
 
Ticking the boxes for CONSORT is like having your car papers in order.

It's just one of those minimal requirement things. There is still the question of how you drive it, and making sure you put the right fuel in it, and enough air in the tires, and securely locking it up at night so that your data doesn't get stolen.
 
I've watched this short series of videos of a talk given by Mike Clarke which I found on the Cochrane training website.
"
This resource is a four-part video recording of a talk given by Mike Clarke, the then Director of the UK Cochrane Center, as part of the Oxford MSc in Evidence-Based Healthcare Programme.
The talk is about risks of doing multiple analyses, including looking at different subgroups, which can lead to false positive findings. The ways of reducing these risks are discussed."

The first three are about 15 minutes long each and the last is 6 minutes.

https://training.cochrane.org/resource/mulitiplicity-and-subgroup-analysis-beware

"Mike has 25 years’ experience of the conduct and oversight of randomised trials, systematic reviews and other types of prospective research. Previously, Centre Director of the UK Cochrane Centre. He is currently the inaugural Director of the All Ireland Hub for Trials Methodology Research and the Co-ordinating editor of the Cochrane Methodology Review Group."

As far as I can gather he is now at Queen's University Belfast:
https://pure.qub.ac.uk/portal/en/persons/mike-clarke(f6bee3f9-d4fd-4498-b28b-1e52b1dfaf91).html

His contact details are on the last link.

I wonder if he is worth contacting?

@dave30th

(as an aside: interesting to see that Oxford do courses in RCTs and Systematic Reviews
https://www.conted.ox.ac.uk/courses/randomized-controlled-trials
https://www.conted.ox.ac.uk/courses/systematic-reviews)
 
From the Cochrane training site:

"Mulitiplicity and subgroup analysis - Beware!

This resource is a four-part video recording of a talk given by Mike Clarke, the then Director of the UK Cochrane Center, as part of the Oxford MSc in Evidence-Based Healthcare Programme.
The talk is about risks of doing multiple analyses, including looking at different subgroups, which can lead to false positive findings. The ways of reducing these risks are discussed.

After using this resource, you should be able to...
  • Understand and avoid the risks of using multiple analyses in clinical trials and systematic reviews"
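
Not from the videos themselves, but a minimal sketch of the point they make (all numbers made up for illustration): if an intervention actually does nothing and the analysts run, say, 15 subgroup or secondary-outcome tests, each at the usual 5% level, the chance of at least one spuriously 'significant' result is roughly 1 - 0.95^15, i.e. over 50%. A quick simulation in Python shows the same thing:

```python
# Minimal illustrative sketch (not from the Cochrane videos): simulate trials
# with NO true treatment effect and count how often at least one of many
# subgroup analyses comes out "statistically significant" anyway.
# All parameters below are made up for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_sims = 2000       # number of simulated null trials
n_per_arm = 300     # participants per arm
n_subgroups = 15    # subgroup/secondary analyses per trial
alpha = 0.05        # conventional significance threshold

trials_with_false_positive = 0
for _ in range(n_sims):
    # Outcomes drawn from the same distribution in both arms: no real effect.
    treatment = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    # Assign participants to subgroups at random and test each subgroup.
    grp_t = rng.integers(0, n_subgroups, n_per_arm)
    grp_c = rng.integers(0, n_subgroups, n_per_arm)
    significant = False
    for g in range(n_subgroups):
        t_g, c_g = treatment[grp_t == g], control[grp_c == g]
        if len(t_g) > 1 and len(c_g) > 1:
            _, p = ttest_ind(t_g, c_g)
            if p < alpha:
                significant = True
                break
    trials_with_false_positive += significant

print(f"Trials with at least one 'significant' subgroup: "
      f"{trials_with_false_positive / n_sims:.0%}")
print(f"Single test error rate: {alpha:.0%}; "
      f"approx. for {n_subgroups} independent tests: "
      f"{1 - (1 - alpha) ** n_subgroups:.0%}")
```

That is the sense in which multiplicity manufactures false positives unless the analyses are prespecified and corrected for.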
 
I've watched this short series of videos of a talk given by Mike Clarke which I found on the Cochrane training website.
"
This resource is a four-part video recording of a talk given by Mike Clarke, the then Director of the UK Cochrane Center, as part of the Oxford MSc in Evidence-Based Healthcare Programme.
The talk is about risks of doing multiple analyses, including looking at different subgroups, which can lead to false positive findings. The ways of reducing these risks are discussed.

Might be relevant for the Cochrane review/IAG thread?

https://www.s4me.info/threads/indep...y-and-me-cfs-2020-led-by-hilda-bastian.13645/
 
New CONSORT update for clinical trial reporting. Includes new items on data sharing, conflicts of interest, PPI, harms and intervention delivery.
Summary of main changes in CONSORT 2025
Addition of new checklist items
  • Item 4: added item on data sharing, including where and how individual de-identified participant data, statistical code, and any other materials can be accessed.

  • Item 5b: added item on financial and other conflicts of interest of manuscript authors.

  • Item 8: added item on how patients and/or the public were involved in the design, conduct, and/or reporting of the trial.

  • Item 12b: added item on eligibility criteria for sites and for individuals delivering the interventions, where applicable.

  • Item 15: added item on how harms and other unintended effects were assessed.

  • Item 21: added items to define who is included in each analysis (eg, all randomised participants) and in which group (item 21b), and how missing data were handled in the analysis (item 21c).

  • Item 24: added item on intervention delivery, including how the intervention and comparator were actually administered (item 24a) and details of concomitant care received during the trial (item 24b).
Completely revised checklist items
  • Item 3: revised item to include where the statistical analysis plan can be accessed in addition to the trial protocol.

  • Item 10: revised item to include reporting of important changes to the trial after it commenced, including any outcomes or analyses that were not prespecified.

  • Item 26: revised item to specify for each primary and secondary outcome—the number of participants included in the analysis and the number of participants with available data at each time point for each treatment group.
...

CONSORT 2025 statement: updated guideline for reporting randomized trials
 