Has there been a review of ME/CFS treatment randomised trials since the NICE 2021 review?

Discussion in '2020 UK NICE ME/CFS Guideline' started by Sasha, Mar 16, 2025.

  1. Trish

    Trish Moderator Staff Member

    Messages:
    58,955
    Location:
    UK
    We need representative samples. When the population is fairly homogeneous and everyone has an equal chance of being included in the sample, a random sample is likely to be representative of that population.

    With small studies drawn from an unrepresentative subset of the population being studied, such as a random sample from a single ME/CFS clinic, the sample won't be representative of the whole ME/CFS population, however good the randomisation process. For good ME/CFS studies, ideally the sample would be representative of the whole ME/CFS population. That takes a lot more effort by the researchers.
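    (To make that concrete, here is a minimal Python sketch with entirely made-up severity proportions; it only illustrates that a perfectly random draw from a clinic pool can still misrepresent the wider ME/CFS population if the clinic itself only sees a subset of it.)

        # Hypothetical numbers only: a random draw from a clinic pool vs from the
        # whole population, where the clinic mostly sees mild/moderate patients.
        import random

        random.seed(1)

        # Assumed severity mix of the whole ME/CFS population (not real figures).
        population = (["mild"] * 250 + ["moderate"] * 500
                      + ["severe"] * 200 + ["very severe"] * 50)

        # A clinic mostly sees people well enough to attend appointments.
        clinic = [p for p in population if p in ("mild", "moderate")]

        def severity_mix(sample):
            # Proportion of each severity band in the sample.
            return {s: round(sample.count(s) / len(sample), 2)
                    for s in ("mild", "moderate", "severe", "very severe")}

        # Both draws are genuinely random; only the pool they are drawn from differs.
        print("population sample:", severity_mix(random.sample(population, 100)))
        print("clinic sample:    ", severity_mix(random.sample(clinic, 100)))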
     
    alktipping and Peter Trewhitt like this.
  2. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,057
    Location:
    London, UK
    I am not a statistician but I suspect that actually won't work, for all sorts of complicated reasons relating to Gaussian and non-Gaussian distributions. Perhaps more to the point, it is totally impracticable, if only for ethical reasons. You cannot take a random sample. You can only sample those who have given informed consent, and being 'informed' is a hugely complex issue. When I did studies I was very aware that they were biased towards people who I thought would genuinely be able to make an informed choice. I think I was ethically obliged to do so, even if it involved my making judgments about who might be 'informed'. As an example, I recruited a woman who spoke very little English but whose daughter was a consultant paediatrician who did, and the three of us came to a consensus that mum understood the implications of a dangerous treatment. I made no attempt to recruit people whom I could not be sure understood.

    And of course a representative sample obtained randomly will not take into account subgroups and we have all the arguments about criteria and so on.
     
  3. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,560
    Location:
    Canada
    Oh, I get that. Still not random. Pseudo-random would somewhat fit. It's some randomness, which occurs after a series of steps that negate most of its purpose, and can be followed by even more steps that undo it further, such as doing comparison analyses after arms have crossed over. Which is ridiculous, but common. That makes most of them clinical trials. Nothing wrong with that, they're just a lesser type of evidence, in an industry where the highest type of evidence sits far below every other industry's, but adding qualifier words like this only gives the impression that they are more rigorous than they actually are.

    I see it as about the same as labeling a criminal trial 'mostly fair'. A trial can't be mostly fair. It either is, or it isn't. Close to fair still isn't fair. This is exactly why legitimate judicial systems all feature things like technical invalidation, for when the process was mostly fair but fell short of fully fair.

    I'm not trying to be difficult over this. I've just seen for years how this industry works and how they distort the meaning of words all the damn time for obvious and entirely self-serving purposes. It's mostly propaganda. Like the common misuse of RCT, to give the impression that it's a rigorous controlled trial, when most of the time the C actually stands for clinical. That usage is entirely on purpose. They use words like weapons.

    Words have specific meanings. And I understand that in this industry, when they say randomized, they don't mean actually random, they mean somewhat quasi-random. And that's my problem with it. They use that word to mean a thing everyone in the industry understands, but which means something different from what the word actually means. And it's all for effect, to give undeserved qualifiers that make studies sound more scientific than they are. I keep having to raise my estimate of just how much propaganda there is in this industry, and it's never high enough.

    They can like it the way it is, but the overall quality of the work turns out to be shoddy as a result. Because they like the fake shoddy results more than they want rigorous answers that solve actual problems in real life.

    It definitely is hard to maximize that randomness. But it should be maximized, even if it falls short of perfection. The current system just doesn't bother with that, and so we end up with things like trials of the LP, which feature selection interviews, completely destroying any pretense of rigorous assessment. And that's just one of so many problems that never get addressed; they just keep getting added to the mountain of bullshit while standards keep getting lowered to fit all this bulk of poop.
     
    alktipping and Peter Trewhitt like this.
  4. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,541
    It is of course random, and very much so by the exact definition of the word; it is just that the randomness, so to speak, lives in a different space. Your critique is that the sampling for the RCTs isn't done via i.i.d. random variables (so to speak, throwing a very large die and choosing participants for the trial according to the results) that are uniformly distributed across the whole set of people who suffer from a given condition, but as Jonathan points out that is not the point, nor a possibility, nor ethical.

    So the question seems to be whether your sample is "somewhat" representative of the general illness population you're studying, in the sense that your results are generalisable to a wider population than the one taking part in the trial. If it were perfectly representative, you'd automatically end up with what you consider to be a "true" RCT (or at least true in the way you've used the word random), even if the randomness only occurred at treatment allocation. In some cases being "somewhat" representative likely matters less than in others, so your extremely justified critique about sampling bias, which very much applies to many of the ME/CFS trials being discussed on this forum, also seems to depend on the specifics of the situation as far as I can tell. If you're at a homeopathy conference recruiting patients for a homeopathic RCT where you roughly ask people "whether they felt better" after being given the homeopathic substance, then you clearly have sample bias problems and cannot claim that "homeopathy makes people healthier" on the basis of your results (certainly not if your study isn't blinded, which the studies we're discussing here aren't). If, however, you are studying AIDS and your outcome measure is some objective marker of HIV/AIDS, then it probably matters a lot less whether your sample is representative.
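    (A rough Python sketch of where the randomness "lives", using entirely hypothetical effect sizes: randomising treatment allocation within whatever sample you recruited gives a fair within-sample comparison, but if the effect differs by severity and recruitment only reaches mild patients, the estimate need not generalise to the wider population.)

        # Hypothetical effect sizes only: randomisation at allocation vs
        # representativeness of the recruited sample.
        import random

        random.seed(2)

        def outcome(severity, treated):
            # Assumed effects: the treatment helps mild patients a little and
            # severe patients not at all (made-up numbers for illustration).
            effect = {"mild": 2.0, "severe": 0.0}[severity] if treated else 0.0
            return effect + random.gauss(0, 1)

        def mean(xs):
            xs = list(xs)
            return sum(xs) / len(xs)

        def trial(sample):
            sample = list(sample)
            random.shuffle(sample)          # the randomness: treatment allocation
            half = len(sample) // 2
            treated, control = sample[:half], sample[half:]
            return (mean(outcome(s, True) for s in treated)
                    - mean(outcome(s, False) for s in control))

        population = ["mild"] * 500 + ["severe"] * 500
        clinic_sample = ["mild"] * 200      # recruitment only reached mild patients

        print("effect estimated in clinic-only sample:  ", round(trial(clinic_sample), 2))
        print("effect estimated in representative sample:",
              round(trial(random.sample(population, 200)), 2))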

    As you say the problem is that the presence of certain words often seems to give the pretense of methodological rigor even if it shouldn't and provides shoddy results with an automatic pass.

    Regarding "controlled" the situation seems similar to me. Whilst I find it absolutely laughable that things be can be called "controlled" if they really don't control for anything that matters at all, you'll never be able to anticipate and control for all given factors, so I can only see how the word controlled a priori has to be interpreted in the context of the trial, rather that it being a checklist for something being done right. This requires sensible people. I find it very strange that Jonathan reports that everyone at his presentations agrees that something like PACE is an absolute farce that cannot be taken seriously even if it contains the words randomised and controlled and that they are able to see right through this, but yet others simply cannot and there's a whole industry built on such abysmal standards that things can proceed whilst being backed by supposedly the authorities of highest scientific standards. But then again I guess humans are strange and caught up in all sorts of different interests.
     
    Last edited: Mar 19, 2025
