It also seems that the authors have included multiple estimates from the same study (for example different E:T ratios). I don't think their modelling accounts for the correlation between these, so it is similar to counting some studies multiple times.
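To illustrate (with made-up standard errors, not the review's data): if a fixed-effect inverse-variance pooling treats, say, three E:T-ratio estimates from one study as three independent studies, the pooled standard error shrinks as if the sample had actually grown:

```python
import numpy as np

def pooled(se_list):
    """Fixed-effect inverse-variance pooled standard error."""
    w = 1.0 / np.asarray(se_list, dtype=float) ** 2
    return 1.0 / np.sqrt(w.sum())

# Hypothetical standard errors for three independent studies
independent = [0.4, 0.5, 0.6]

# Same three studies, but study 1 contributes three estimates
# (e.g. different E:T ratios) that are treated as independent
duplicated = [0.4, 0.4, 0.4, 0.5, 0.6]

print(pooled(independent))  # larger pooled SE
print(pooled(duplicated))   # smaller pooled SE -> spuriously narrow CI
```

The duplicated version gives a narrower confidence interval, but the extra precision is fake: the same participants are being counted three times.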
12 out of the 28 studies in the review came from the same research team as the reviewers at Griffith University. There are also 4 from the Klimas group in Florida, who previously advocated this line of research. The other studies are rather old, from before 2000.
I don't think these estimates are useful.
The authors have thrown 2 case studies with fewer than 10 participants and no control group into the mix. This entirely messes up the meta-analysis. The only 2 randomized studies are the PACE trial and this small Belgian study...
We have written a blog article that summarizes the problems with the BMJ review on Long Covid interventions (Zeraatkar et al. 2024). Inconsistency in how imprecision was evaluated seems to be the key issue. I suspect that a correction will be needed...
Been looking closer into this. One interpretation might be that including all randomized participants is assuming that those with missing data did not have the outcome (in our example improvement/recovery), so a form of imputation (non-responder imputation).
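A toy example with invented numbers, to show what that assumption does to a response rate:

```python
# Hypothetical trial arm: 100 randomized, 80 with outcome data, 40 improved
randomized = 100
observed = 80
improved = 40

# Complete-case analysis: denominator is only those with outcome data
complete_case_rate = improved / observed    # 0.50

# Non-responder imputation: everyone missing is counted as "not improved",
# so the denominator is all randomized participants
nri_rate = improved / randomized            # 0.40

print(complete_case_rate, nri_rate)
```

So using all randomized participants in the denominator is not "no imputation"; it implicitly imputes "no improvement" for everyone with missing data.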
Cochrane seems to recommend not...
Intention to treat
In the protocol the authors said they were going to use intention-to-treat (ITT): "reviewers will preferentially extract the results from intention-to-treat analyses without any imputations for missing data, when reported." But if you look at the data they extract in the Excel...
A minus sign, I think. The − before the 13.11 got accidentally deleted when I made the table. The estimate is −8.4, 95% confidence interval (CI) −13.11 to −3.69. So the effect is larger than the MID of 3 points and the CI does not cross 0.
I assume this is what you referred to? Apologies for the confusion (will...
Here's why I'm asking about precision: the outcomes above that were downgraded twice were non-behavioral interventions at low risk of bias. Those that were not downgraded were rehabilitative interventions, all at high risk of bias.
So because of this weird approach to evaluating precision...
Imprecision
I think there is an issue with how they evaluated (im)precision and was hoping someone could double-check. In short, precision is determined by the variation in the measurement (e.g. the standard deviation) and the amount of information collected (the sample size).
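For anyone checking by hand: under a normal approximation, the 95% CI for a mean is roughly mean ± 1.96 × SD/√n, so at a fixed SD, more participants means a narrower interval. A quick sketch (the numbers are arbitrary):

```python
import math

def ci_95(mean, sd, n):
    """Approximate 95% CI for a mean: mean +/- 1.96 * SD/sqrt(n)."""
    se = sd / math.sqrt(n)
    return (mean - 1.96 * se, mean + 1.96 * se)

# Same mean and SD, different sample sizes
print(ci_95(10.0, 5.0, 25))    # SE = 1.0, wider interval
print(ci_95(10.0, 5.0, 100))   # SE = 0.5, narrower interval
```

Quadrupling the sample size halves the standard error, which is why precision judgements should track both the SD and n.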
In the protocol the...
The meta-epidemiological evidence that lack of randomization and allocation concealment leads to overestimated treatment effects is also weak.
Take for example this latest overview, where Guyatt was senior author, and where the overestimation due to lack of blinding was actually bigger than for randomization and...
GRADE is getting a big update and some parts are already available in this new book:
https://book.gradepro.org/
Unfortunately, it includes the following passage:
The meta-epidemiological evidence refers to the MetaBLIND study.
This might explain why reviewers (such as those that wrote the...
This is a living review which will be regularly updated (every 6 months, or sooner if there is important new evidence), and the authors have funding to do this for at least 3 years.
Perhaps the patients involved could highlight some of these problems?
Except for two cognitive tests, all of their outcomes are subjective. I think this might become a problem if more rehabilitation trials are done where subjective outcomes show an improvement but objective outcomes (6-minute walk test, CPET, employment, etc.) don't. It might cause the same issues...
Tilestats is also useful because he often breaks things down to the by-hand calculation:
https://www.youtube.com/@tilestats
A similar YouTube account is StatQuest by Josh Starmer:
https://www.youtube.com/@statquest/videos
Yes, according to their standards, this should have at least the same certainty of evidence. But they downgraded it twice: once for selective reporting, and a second time because the reviewers believed there is no plausible mechanism:
It's actually worse, because they recalculate the results based on summary data, which is less precise than what the original study reported.
For example, the REGAIN study primary result for the PROPr questionnaire was 0.03 (95% confidence interval: 0.01 to 0.06) which is lower than the...
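For what it's worth, this is roughly the kind of crude recalculation you can do from summary data alone (the means, SDs, and group sizes below are invented, not REGAIN's). An unadjusted difference in means ignores the covariate adjustment of the original model, which is typically why it comes out less precise than the published estimate:

```python
import math

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2):
    """Crude 95% CI for an unadjusted difference in means,
    computed from group-level summary statistics only."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    diff = m1 - m2
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Invented summary data for two arms of a hypothetical trial
print(mean_diff_ci(0.05, 0.2, 120, 0.01, 0.2, 115))
```

Nothing in this sketch uses the baseline covariates the trial's own model adjusted for, so the interval is wider than the adjusted analysis would give.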
If I understand correctly, they didn't do any meta-analysis at all. For all outcomes summarised, the number of trials = 1. So they simply reiterate results from individual studies, most notably the Knoop trial (for CBT) and the REGAIN trial (for rehab).
Both trials were high risk of bias for...
This is all based on the 1 Dutch trial by Hans Knoop (Kuut et al. 2023, discussed here). A study in COPD patients found the minimal important difference for the CIS-fatigue scale to be 9.3 points, so bigger than the 8.4-point difference found in the CBT trial. The reviewers rated the Kuut 2023 study...
This is the paper that contains 'helpful facts', for which the author received a lot of criticism on social media (it seems she has now deleted that post).