Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

Discussion in '2021 Cochrane Exercise Therapy Review' started by Lucibee, Feb 13, 2020.

  1. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,386
    [my bold]

    How very strange. So because all disorders impact functionality, then functionality cannot be objective and cannot be measured ... ? Trying to fathom the logic is like trying to put your finger on a blob of mercury. So by the same logic, a broken leg stops you playing football, but subsequently being able to play twice a week would not be a reasonably objective measure of recovery? Along with other measures of course.

    My mother suffered from deep depression throughout her life, which fluctuated over long periods. When she was more functional there were all sorts of predominantly objective measures - going to the shops, riding her bike, going out. And when she was not functional there was also a pretty objective measure - staying in bed day and night.

    But no, the 'defence' here is that others know nothing of good scientific method.
     
    Last edited: Jun 12, 2020
    adambeyoncelowe, Sean, JemPD and 8 others like this.
  2. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
  3. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    882
    Location:
    Oxford UK
    I agree.
     
  4. PhysiosforME

    PhysiosforME Senior Member (Voting Rights)

    Messages:
    306
    We replied from the Physios for ME account, and we all replied individually as well! Karen even got blocked by them!
     
    Mithriel, mango, Amw66 and 12 others like this.
  5. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,386
    :D;)
     
    MEMarge and PhysiosforME like this.
  6. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,965
    Location:
    London, UK
    Indeed. Maybe Mark Vink has already done that?
     
  7. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,855
    Location:
    Australia
    Sure. The key point is that if participants (and any assessors) genuinely don't know which arm they are in, response biases and related distortions will be relatively similar across all arms.

    Objective outcome measures of functioning in an unblinded trial don't eliminate bias, but they can show whether such improvements are genuine, rather than just a change in how someone filled in a questionnaire because they felt more optimistic or were distracted from focusing on symptoms (or other response biases).
    Obviously, objective outcomes on their own are not enough: someone could do a lot more activity but report much more severe symptoms.
    Secondly, outcome measures need to take baseline levels into account, rather than just looking at mean group changes. The clinical benefit of a given change differs depending on the baseline level, and many questionnaires fail when it comes to linearity: the same score change, on different questions or from a lower level of functioning, can have a much greater real-world impact.
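    To make the linearity point concrete, here is a small Python sketch of the usual SF-36 PF-10 scoring (standard 0-100 transform; item labels paraphrased; the two patients are invented purely for illustration). A one-step change on any item is worth the same 5 points, whether it's a severely limited person becoming a bit less limited in bathing and dressing or a near-ceiling person ticking up on vigorous activities:

        # Sketch of SF-36 PF-10 scoring: 10 items, each answered
        # 1 = limited a lot, 2 = limited a little, 3 = not limited,
        # summed and rescaled to 0-100. Every one-step change on any
        # item adds the same 5 points, regardless of starting level.

        def pf_score(responses):
            assert len(responses) == 10 and all(r in (1, 2, 3) for r in responses)
            return (sum(responses) - 10) / 20 * 100

        severely_limited = [1] * 10            # limited a lot on everything, score 0
        now_manages_self_care = [1] * 9 + [2]  # one step up on bathing/dressing

        near_ceiling = [3] * 9 + [2]           # only "vigorous activities" still a little limited
        fully_recovered = [3] * 10

        print(pf_score(now_manages_self_care) - pf_score(severely_limited))  # 5.0
        print(pf_score(fully_recovered) - pf_score(near_ceiling))            # 5.0

    A comparison of mean group changes treats those two 5-point shifts as identical, which is exactly the problem with relying on mean changes without reference to baseline.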

    Many of us have given this a great deal of thought and we would love to develop our own composite outcome measures for use in clinical trials, if given the opportunity!
     
    Last edited: Jun 12, 2020
    Robert 1973, Trish, rvallee and 8 others like this.
  8. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,855
    Location:
    Australia
    Yes, there is a big difference between reporting what could happen, or what a participant thinks they could do, versus what they have actually done or what has actually happened. Biases such as recall bias certainly apply, which is why an outcome such as urinary incontinence needs to be recorded daily. Likewise, this is why a participant reporting that they are in full-time work with minimal symptoms is much more convincing than someone answering "Compared to one year ago, how would you rate your health in general now?".

    For the SF-36 PF questions, for example, there is a big difference between asking "Does your health limit these activities?" and asking "How often have you done these activities in the last week?".

    Oh, and while I'm at it: many of the studies that try to statistically determine a "minimal clinically important difference" rest on a circular argument - bias on the reference (anchor) question is correlated with bias on the other questions, so these statistical approaches may not be measuring any genuine health improvement at all. They also fail to consider how biases in questionnaire-answering behaviour in prospective studies can differ between studies.
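    To make the circularity concrete, here is a minimal simulation sketch in Python (all numbers are invented; this isn't modelled on any particular MCID study). Participants' true health doesn't change at all, but a shared response bias pushes up both the anchor question and the outcome questionnaire, and the standard anchor-based calculation still produces a positive "MCID":

        # Anchor-based MCID estimation under a shared response bias and
        # zero true health change. Purely illustrative numbers.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000

        response_bias = rng.normal(0.0, 1.0, n)   # optimism, demand characteristics, etc.

        # Questionnaire change score: driven by the bias plus noise, no true change.
        outcome_change = 5.0 * response_bias + rng.normal(0.0, 3.0, n)

        # Anchor ("do you feel somewhat better?"): the same bias drives the answer.
        says_better = (response_bias + rng.normal(0.0, 0.5, n)) > 0.5

        # The usual anchor-based "MCID": mean change among those who say they're better.
        print(f"'MCID' with zero true improvement: {outcome_change[says_better].mean():.1f} points")

    The threshold that comes out is entirely an artefact of the shared bias - it doesn't correspond to any genuine change in health.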


    Using a Delphi technique would be much more worthwhile, but the determinations of what is clinically relevant would need to be tested repeatedly in a wide variety of groups from different cultural and age backgrounds.
     


    Last edited: Jun 12, 2020
    Simbindi, Trish, rvallee and 11 others like this.
  9. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,828
    Location:
    Aotearoa New Zealand
    Thanks @Hilda Bastian for taking the time to offer up that urinary incontinence study.

    There is certainly a lot going on in it.

    Edit - I'll add here so people have an idea of the structure - two treatments were compared: physio (pelvic floor exercises) and surgery. Women were randomly allocated to the treatments but women allocated to physio were allowed to also have surgery (and about half of them chose to do that).

    So, the study found that surgery was a lot better than physio and recommended that women be offered both treatments (rather than just physio) as first line treatments, and that they be given information on the expected outcomes of each. But how good was the information the study produced about the expected outcomes? Is physio even worth trying?

    If we just consider the physio treatment (because there's a lot to talk about with just that), the paper noted that previous studies had reported rates of subjective 'success' (presumably 'cure' - as these figures are compared to surgery cure rates, but perhaps not) in the range of 53 to 97%. It's all rather odd, as the paper later mentions two studies of physio treatment with improvement rates of 33% and 43% - and surely improvement is a category that includes cure?

    In this study, the rate of subjective cure in the women who were allocated to the physio group and only had physio was only 16%.

    However, this probably over-states the real rate of cure, because half of the women who were allocated to the physio treatment actually gave up on physio and went and had surgery. It seems likely that the women allocated to the physio group who did not choose to have surgery were a whole lot more likely to be pleased with the outcome of their physio exercises than the ones who went on to have surgery. Indeed, of the 99 women allocated to physio who chose to have surgery, 90 of them had reported no improvement (so, not even some improvement, much less a cure) at their last assessment before surgery. So, from this study, we could expect the subjective cure rate from physio for women with the required level of urinary incontinence to be somewhat less than 16%, and maybe a whole lot less.

    Also, of the women who were allocated to the physio group and underwent physio only (i.e. didn't also have surgery) and who were lost to follow-up (i.e. who didn't contribute to the 12 month data), 76% (16 out of 21) had reported no improvement at all at their last recorded assessment. So the women who dropped out were very likely to have reported no improvement, let alone a cure. That adds further to the suggestion that the women who were allocated to the physio group, who didn't choose to have surgery, and who stayed around for the 12 month follow-up were quite a select group - one that was a whole lot more likely to be pleased with the outcome than all the other women allocated to physio.
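    A rough back-of-the-envelope version of that argument, in Python. The only figure not quoted above is the size of the physio arm, which I've assumed to be about 200 on the basis that the 99 women who crossed over were roughly half of it - so treat this strictly as an illustration of the selection effect, not as the trial's own numbers:

        # Back-of-the-envelope sketch of the selection argument.
        allocated_to_physio = 200          # assumed: the 99 crossovers were ~half the arm
        crossed_over_to_surgery = 99       # 90 of whom had reported no improvement
        lost_to_followup = 21              # 16 of whom had reported no improvement

        completers_physio_only = allocated_to_physio - crossed_over_to_surgery - lost_to_followup
        cured_completers = round(0.16 * completers_physio_only)   # 16% subjective cure among completers

        # Counting the (largely unimproved) crossovers and dropouts as not cured by physio:
        print(f"~{completers_physio_only} physio-only completers, ~{cured_completers} of them cured")
        print(f"cure rate across everyone allocated to physio: ~{cured_completers / allocated_to_physio:.0%}")

    On those assumptions the rate drops into the single digits, which is why "somewhat less than 16%, and maybe a whole lot less" seems fair.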

    There was no control in this study, as it wasn't primarily trying to determine whether physio works. So we don't actually know if some of the 16% of women who received physio only and reported being cured might have reported being cured after 12 months with no treatment at all. I imagine there is some rate of spontaneous resolution of symptoms, especially for those women with an onset from giving birth. It looks as though obesity is a significant risk factor for urinary incontinence and so it seems possible that some women might lose weight and find that their symptoms resolved, independent of the pelvic floor exercises. I wouldn't be surprised if the physiotherapists actually prompted/helped some obese women to lose weight, so the reported improvement could arguably be from that rather than (or at least as well as) the pelvic floor exercises.

    A problem with the subjective rating is that the physio treatment was presented as a skill to be learned. The paper talks about the treatment being given "depending on a number of things including 'adherence' and 'the ability of the women to learn to perform the muscle contractions'". So there would have been quite a bit of social pressure for the women to report a good outcome - many would want to be seen as having worked hard to do the exercises and as being able to learn what was required. It's quite easy to imagine that a woman might report a cure when there was still some urinary incontinence, thinking that probably every woman experiences some leakage under some circumstances.

    Acting against an overly positive reporting of subjective outcomes is the possibility that many of the women in the study might just have wanted a fast track to the surgery. Some might even have felt that reporting no improvement from physio would make their case for surgery stronger.

    Adding all that up, with the two subjective outcomes at 12 months the study focuses on (PGI self-report of improvement and a subjective report of cure (a negative response to "do you experience urine leakage related to physical activity, coughing or sneezing?")), it's very hard to say what impact the physio treatment had on urinary incontinence in this study.

    Sadly, the objective outcome is a bit useless too.
    There was no initial objective testing or even a requirement that the testing had been done at some earlier time.
    So, we don't actually know whether the women in either treatment group really were in the moderate to severe category at the beginning of the study. That calls into question the objective cure rate - which was measured by a urodynamic test at 12 months - because we don't know how many women would have failed that test at the beginning.

    Among the women who only had physio, the rate of objective cure was higher than the rate of subjective cure (44% vs 16%). The paper suggests that women were able to withstand the clinical provocation cough test with a full bladder but 'still had stress urinary incontinence in everyday life in response to unexpected events'. Which sounds pretty likely to me, and calls into question the whole notion of 'objective cure'. An objective measure isn't automatically a useful measure. It also calls into question how you might compare cure rates between different studies, if some studies are using a test that doesn't actually measure cure.

    There's a whole lot more. Perhaps most importantly, the paper does not really highlight the impact on treatment outcomes of the 49% of the women in the physio treatment who went on to have surgery. The data that is presented most prominently is an intention to treat analysis, with the result that, at first glance, the physio treatment (with about half of the participants also having had surgery) looks a whole lot more useful than it really was.

    The paper is a bit of a dog's breakfast. There's some good stuff in there, but it's all mushed up with stuff that is a bit whiffy. I don't think it is a good example of subjective outcomes doing a good job of answering the question of how well a treatment works in an unblinded trial.

    (I hope I haven't got too much wrong - I have run out of energy to carefully check everything. Some edits to improve readability.)
     
    Last edited: Jun 12, 2020
  10. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,828
    Location:
    Aotearoa New Zealand
    It's interesting to compare the reported subjective rate of cure for this sample of women with the average reported for the pelvic floor treatment and no treatment from the Cochrane review:
    Pelvic floor muscle training for urinary incontinence in women

    (SUI is stress urinary incontinence, which is what was looked at in the physio vs surgery study.)

    It looks like the rate of cure with no treatment is around 6%. The physio vs surgery study found a rate of subjective cure that was much less than the average identified by Cochrane. In fact, given all of the uncertainties in the physio vs surgery study and just based on that study, we can't discount the possibility that pelvic floor exercises are no more likely to produce a cure than no treatment. It makes me think that Cochrane's reported cure rate of 56% needs a bit of scrutiny.
     
  11. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    ...which would count as a value from the trial (that it raised questions about other evidence for you, I mean). It provided data on adverse effects, and that's an unusually large sample size of women for that non-surgical treatment. (Since there is not only a single relevant trial, a single trial wouldn't be the appropriate sole basis for informing women.)

    I didn't claim I was looking for an example that showed that: I explicitly said I wasn't, on grounds of time and the value of such an exercise, and that I wouldn't vouch for the trial. I was addressing the question of what that looks like, and the thing that started all this: are such trials valueless. That you think it's some good stuff mushed up with stuff that's a bit whiffy is, as I've said several times in this thread, a key point (as per my post here): studies of all kinds are patchy like that, blinded or not. And that's why the systematic reviewing community has been moving towards rating the uncertainty around data per outcome, not throwing out babies with bathwater (or calling bathwater babies).
     
  12. Hilda Bastian

    Hilda Bastian Guest Guest

    Messages:
    181
    Location:
    Australia
    I wish your first point were true! There are many other potential sources of bias - what a particular practitioner (or several) did that deviated from the protocol, things going differently at one hospital versus another... And then all the things that can go wrong with the report (selective outcome reporting, etc.).

    I truly understand the point with the particular types of trials you are speaking about, and I have criticized the PACE trial on the grounds of inadequate measurement tools. But you aren't going to convince people who have battled for exactly the opposite - for people's voices about their care to be taken seriously and given weight - to sign up to dismissing patients' reports so categorically. There are stronger arguments that do carry weight,* so it's not even essential. And other than blinding of allocation concealment, how good the evidence for blinding is from situation to situation is a topic that is still being studied and debated, because [insert here my usual "it's complicated" :nerd:].

    * [including that it is demonstrably a problem in a specific case]
     
    Last edited: Jun 12, 2020
  13. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,386
    But surely @Snow Leopard's first point is true - referring as it does specifically to response bias and related distortions. There was no suggestion that other kinds of bias don't exist.
     
  14. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,265
    Patients with devastating illnesses tend to have difficulties remaining objective when evaluating a treatment because they so desperately want it to work. As such the efficacy of treatments is often greatly exaggerated in anecdotal reports or unblinded studies.

    It is possible that some people do not understand this due to not having been in a position to observe this phenomenon, and confuse statements about self-report/subjective outcomes being unreliable with a poor attitude towards patients. Acknowledging this vulnerability of patients does not mean dismissing the patient voice. It is simply a fact that has to be taken into account if one wants to do competent research.

    I have been involved long enough in the patient community to see that this difficulty staying objective is a common and major problem, one that leads to treatments being given credibility even when they later turn out to be ineffective. Patients are harmed by the side effects and expense of undergoing a treatment that does not actually work.

    The open-label and placebo-controlled studies of rituximab for ME/CFS are a good example.
     
    Last edited: Jun 12, 2020
  15. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,828
    Location:
    Aotearoa New Zealand
    Perhaps there are some people that it will be difficult to convince. But surely advocates who have battled for people's voices to be taken seriously would also want people to have reliable information when they are making decisions about their care?
     
    Last edited: Jun 12, 2020
  16. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,386
    Absolutely. I was going to pick up on the same thing.

    If a source of bias exists then it exists, and no amount of arguing other points will change that. Any source of bias must be addressed in its own right, else why run a trial. And if such bias significantly influences a trial's primary outcome, then the trial becomes worthless for any assessment of that primary outcome. Correcting for bias - any kind of bias - is not being dismissive of patients' voices.

    The correction being suggested in this thread, for response bias in unblinded trials, is actually inclusive of patient reporting: ensure that at least the primary outcome is an objective measure, and that any subjective measures can be validated/calibrated against one or more objective measures. If the objective and subjective measures all match up, then wonderful. If not, then you get the chance to know there is an issue and address it; otherwise another far more insidious form of bias creeps in - the pan-trial illusion that the biased measures are unbiased, with nothing to disprove it ... p*ssing into the wind, basically.
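    As a sketch of what that cross-check could look like in practice (entirely made-up data, and "actimetry steps per day" is just a stand-in for whichever objective measure a trial actually collects):

        # Do the subjective and objective outcomes match up? Fabricated example data.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 60  # participants per arm (hypothetical)

        # Unblinded trial where the treated arm reports feeling better on a questionnaire
        # without any change in objectively measured activity.
        subj_treat = rng.normal(8.0, 10.0, n)    # questionnaire change, points
        subj_ctrl  = rng.normal(0.0, 10.0, n)
        obj_treat  = rng.normal(0.0, 800.0, n)   # actimetry change, steps/day
        obj_ctrl   = rng.normal(0.0, 800.0, n)

        for name, subj, obj in [("treatment", subj_treat, obj_treat),
                                ("control  ", subj_ctrl, obj_ctrl)]:
            r = np.corrcoef(subj, obj)[0, 1]
            print(f"{name}: subjective {subj.mean():+5.1f} pts, "
                  f"objective {obj.mean():+6.0f} steps/day, corr {r:+.2f}")

    A sizeable subjective effect with no objective effect (and little or no correlation between the two) is exactly the red flag: the questionnaire result may reflect response bias rather than a genuine change in what participants can do.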
     
  17. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,386
    Yes! We have to be very careful not to conflate "self-reported outcomes" with "subjective outcomes"; although closely related, they are not the same thing.

    Sources of bias can be hierarchical, layered. The very process of self reporting has the potential for bias: poor memory, difficulty ascribing a value to continuous variables, consciously lying for whatever reason, unconsciously believing something to be different from the reality, etc., etc. But a layer down from that, the variable being reported may itself be highly subjective and nebulous. Pain, fatigue, cognitive impairment, etc., etc.

    However, there is also the possibility of self-reporting objective measures, which is why I'm labouring the point about there being a hierarchical layering of bias here. An objective measure, independently acquired and validated, is pretty objective. So how objective is a self-reported objective measure? I think the annoying answer is: it depends. The objective measure needs to be something that can be readily self-reported. How many times did you go to the shops last week? What times did you get up and go to bed last week, including all times through each day? And so on. If reported accurately, these measures could be very objective, but reporting accuracy could also be a major issue.

    Reporting accuracy will likely be better if discrete raw values are asked for, not woolly infinitely variable values, or discrete values arbitrarily ascribed to woolly values. I imagine that within a trial some measures could be taken to improve reporting accuracy, or at least corroborate it.

    So self reporting is its own source of potential bias. And the measures being reported will themselves have potential for their own degree of bias. Putting the two together means both must be heeded.
     
    Last edited: Jun 12, 2020
    Snow Leopard, Hutan, rvallee and 7 others like this.
  18. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    So much agree from me on this comment! Absolutely an individual outcome in a context has its own degree of bias - and it's higher if it's subjective/unreliable/without a solid research basis that it measures what it claims to measure/manipulable... And subjective outcome measures have a higher risk of bias - that just doesn't necessarily make them useless, and it doesn't doom the trial they're in to uselessness either, especially if other sources of bias are low.

    There's a spectrum from very subjective to very objective, and in the middle there isn't universal agreement on a dividing line. It's very easy to see when an outcome is clearly subjective, but awfully easy to over-estimate how objective "objective" outcomes are.

    (Poor memory can affect patient-reported measures of any type - if a trial goes on a long time, and you end up filling in a week or two from memory, it's easier for some things than others, depending on how memorable it is, not how unambiguous it is - eating carrots or hours you slept, for example, versus days you went to a workplace.)
     
    Snow Leopard, Hutan, MEMarge and 2 others like this.
  19. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    Yep, but how is information about 2 procedures reliable, if it says their outcomes are equal, when patients say one hurts more than the other? Yes, it's less reliable when you can't blind for the difference between the procedures, but it's a question of degree of reliability, not "reliably precisely right" vs "useless", with nothing in between. Getting a rough idea if bad pain is very common or quite uncommon can be quite valuable, especially when you don't necessarily trust the biases of the person telling you "most people only experience slight discomfort".
     
    obeat likes this.
  20. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,494
    Location:
    Mid-Wales
