Interventions that manipulate how patients report symptoms as a separate form of bias

I take your point about the use of the word response but that's why I said treatment-response bias - not just response bias as per the questionnaire. At that point in the treatment you might genuinely believe that is an accurate picture of how you're doing - even though it's not.
Understood. It's just that people being what they are, some will ignore your hyphen and insert their own! Maybe some word other than response ... treatment-outcome induced bias?
 
We could turn the argument about this sort of built-in bias around a different way.

We could accept that the PACE style of CBT is intended to induce bias. That's its purpose as the first stage in a supposed recovery program. According to the BPS hypothesis, you have to change the way the patient thinks in order for the treatment to work.

The CBT designed for CFS in the PACE trial was overtly intended to change the way patients viewed their symptoms, and their beliefs about what might be perpetuating or worsening them. The aim was to stop patients focusing on their symptoms and interpreting them as indicators of disease, and instead to have them interpret the symptoms as anxiety about returning to activity after a period of genuine illness had passed.

The second aim was that once patients came to believe they were no longer sick, and just needed to overcome this anxiety and misinterpretation of symptoms, they would increase their activity back to normal levels.

And thirdly, based on the unproven deconditioning hypothesis, it was assumed that this would lead to recovery back to normal.

So there are 3 stages:
1. Change your beliefs about your symptoms
2. Change your behaviour by increasing activity
3. Be able to sustain increased activity, and thereby recover back to normal fitness and employment levels.

The problem with the assessment of efficacy of the PACE trial was that the researchers deliberately designed the primary outcome measures to test only stage 1, not stages 2 or 3.

So they succeeded, at least temporarily, in changing some patients' interpretation of their symptoms, as demonstrated by those patients filling in the questionnaires differently. In their model, that means they have cured the patients, since stages 2 and 3 will automatically follow if you succeed with stage 1.

But they failed with stage 2. There was no tracking of whether patients actually did increase their activity, or just substituted one activity for another and stayed within their energy envelope. There was no measure of compliance.

And at stage 3, there was no evidence that patients were able to sustain the sort of behaviour that would indicate they were healthy. They were no fitter, they were no more able to return to work, and they still couldn't walk anywhere near as far as a healthy person in 6 minutes.

So the problem is not so much the built-in bias in the treatment program; it is the use of questionnaires as outcome measures that only measure that overtly built-in bias in Stage 1, make no attempt to measure compliance, and either fail to report real measures of recovery (or the lack of it) or hide them away in minor later papers.
 
So the problem is not so much the built-in bias in the treatment program; it is the use of questionnaires as outcome measures that only measure that overtly built-in bias in Stage 1, make no attempt to measure compliance, and either fail to report real measures of recovery (or the lack of it) or hide them away in minor later papers.
Yes, the treatment programme builds in skewing of people's beliefs, perceptions, etc, and that is true whether or not anyone asks them about those perceptions. But if they are then questioned about their perceptions, their answers will be biased away from truth/reality, because their perceptions already are. Which would be fine if the only thing that mattered was stage 1, and is of course what MS and Co argue when stating "that is how the illness is defined" ... they don't give a fig for anything else, except the stage that "proves" them right.
 
I agree that this is another level of bias that does not appear to be taken into account.
I am personally against any use of 'tools' though. It is irrational, and as long as it is considered OK the system will be manipulated.

Using a tool works like this:

1. Gather reliable evidence for factors affecting bias from all possible sources.
2. Derive a set of rules that seems to cover those factors and put it in a tool.
3. Use the tool.

A better approach is:

1. Gather reliable evidence for factors affecting bias from all possible sources.
2. Use it, case by case.

Since, in the first method, step 2 can only ever be a rough approximation to step 1, it has to be better to use step 1 directly.
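
A toy sketch of the difference, with entirely hypothetical factor names and rules: the derived rule set in step 2 throws away the context that the evidence in step 1 still carries, which is exactly why applying the evidence case by case can do better.

```python
# Toy sketch: a fixed rule set derived from an evidence base necessarily
# loses detail that a case-by-case reading of the evidence would keep.
# All factor names, contexts and "rules" are hypothetical illustrations.

# Step 1: the evidence - each entry records how a design factor interacted
# with the context of a particular trial.
evidence = [
    {"factor": "unblinded subjective outcome", "context": "belief-changing therapy", "distortion": "large"},
    {"factor": "unblinded subjective outcome", "context": "surgical pain relief", "distortion": "moderate"},
    {"factor": "unblinded objective outcome", "context": "belief-changing therapy", "distortion": "small"},
]

# Step 2: a derived rule collapses that context into a single label per factor.
rules = {
    "unblinded subjective outcome": "downgrade one level",
    "unblinded objective outcome": "no downgrade",
}

def apply_tool(factor):
    """The tool: look up the factor, ignore the context."""
    return rules.get(factor, "not covered")

def apply_evidence(factor, context):
    """Case by case: read the evidence that actually matches the situation."""
    matches = [e for e in evidence if e["factor"] == factor and e["context"] == context]
    return matches or "judge from first principles"

# The tool gives the same answer whatever the context; the evidence does not.
print(apply_tool("unblinded subjective outcome"))
print(apply_evidence("unblinded subjective outcome", "belief-changing therapy"))
```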

The practical reason for having a tool is to make it possible to employ people with no real understanding of clinical trial psychology to apply some rules. I think that should not happen.

It depends what you mean by a tool and what the issues are someone needs to consider. Even those with good knowledge make mistakes or forget considerations.

I would draw an analogy with a threat analysis tool (the Microsoft threat analysis tool), where you basically describe the system you are building and it has a set of patterns of known attacks that can happen if you get elements of the design (and implementation) wrong. It then applies those patterns to generate a (sometimes huge) list of potential issues. The developers then need to go through these and check they have potential mitigations, or give judgements as to why the threats are not relevant.

As a tool it isn't a replacement for expertise, but a way of encoding and applying a complex body of knowledge and ensuring things don't get forgotten.

So from the perspective of assessing bias in trial design, I could imagine something similar, where the knowledge patterns encoded into the system are things like the sources of bias (and other things that can go wrong). Someone designing (or assessing) the trial can then use the tool to describe the trial protocol, and the tool leads them through a set of issues that could occur that they need to consider (and document their approach to). I also think adding a degree of formalism to a description can help expose issues.
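
As a rough illustration of what I have in mind (all the field names, bias patterns and wording below are hypothetical, invented for this sketch rather than taken from any existing tool or checklist), the core of such a tool is just: describe the protocol in a structured form, match known bias patterns against that description, and produce the list of issues the designers have to respond to and document.

```python
# Minimal sketch of a bias-pattern tool, in the spirit of a threat-modelling tool.
# The protocol description, pattern names and matching conditions are all
# hypothetical illustrations.

protocol = {
    "blinding": "none",
    "primary_outcomes": ["self-reported fatigue", "self-reported physical function"],
    "objective_outcomes": ["6-minute walk distance"],
    "treatment_changes_symptom_beliefs": True,
    "compliance_measured": False,
}

# Each pattern: (name, condition on the protocol, issue the designers must address)
patterns = [
    ("Unblinded subjective outcome",
     lambda p: p["blinding"] == "none" and any("self-reported" in o for o in p["primary_outcomes"]),
     "Primary outcomes rely on self-report without blinding; expectation effects are unconstrained."),
    ("Treatment designed to change symptom reporting",
     lambda p: p["treatment_changes_symptom_beliefs"] and any("self-reported" in o for o in p["primary_outcomes"]),
     "The intervention aims to change how symptoms are interpreted, and the outcome measures that same interpretation."),
    ("No compliance measure",
     lambda p: not p["compliance_measured"],
     "No way to tell whether participants actually changed their behaviour."),
    ("Objective outcomes demoted",
     lambda p: bool(p["objective_outcomes"]) and not any(o in p["primary_outcomes"] for o in p["objective_outcomes"]),
     "Objective measures exist but are not primary; report them with equal prominence."),
]

# The tool's output: issues the designers must respond to, which also becomes
# the documented record a reviewer can check.
report = []
for name, matches, issue in patterns:
    if matches(protocol):
        report.append({"pattern": name, "issue": issue, "response": "TO BE DOCUMENTED BY DESIGNERS"})

for item in report:
    print(f"{item['pattern']}: {item['issue']}")
```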

In this way, then, a tool has three purposes: firstly, to ensure a description is actually captured, and in such a way that it is clear where there is ambiguity; secondly, to remind trial designers of a lot of known issues they need to consider; and thirdly, to document the choices so that they are clear to reviewers. I think this last step is important in that it makes it hard for trial designers to gloss over details they think may be dodgy.

There will always be a danger that people who don't understand do the designing. More commonly, as systems get complex it's hard to have experts who can hold everything in their heads (and people tend to look at the issues they have seen most recently). So tools can help deal with those problems rather than replacing the need for knowledge.

Of course it may be that the designs aren't complex enough to need such tools.
 
It depends what you mean by a tool and what the issues are someone needs to consider. Even those with good knowledge make mistakes or forget considerations.

If a tool is just a list of things to remember that is fine. The problem with tools like GRADE is that they attempt to extract general rules about the impact of bias on reliability using a bogus arithmetic.
 
So the problem is not so much the built-in bias in the treatment program; it is the use of questionnaires as outcome measures that only measure that overtly built-in bias in Stage 1

I agree with the analysis of the mechanics, but my understanding is that the quest is for a definition of a specific source of bias relating to treatment delivery. Lack of blinding causes problems specifically with subjective outcomes, but it is classified under lack of blinding nonetheless.
 
If a tool is just a list of things to remember that is fine. The problem with tools like GRADE is that they attempt to extract general rules about the impact of bias on reliability using a bogus arithmetic.

There are huge problems in applying any form of arithmetic in a simplistic way to any problem and assuming that gives an accurate sense of what is happening. I do see people do this in some areas, such as risk analysis, but even then it's not a score they really believe, just a way of trying to focus attention on the biggest problems.
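
The sort of arithmetic I mean is the familiar likelihood-times-impact scoring; a toy illustration with made-up numbers shows how it collapses very different situations into the same number, which is tolerable for prioritising attention but useless as a measure of what is actually happening.

```python
# Toy illustration of simplistic risk-scoring arithmetic (made-up numbers).
# Two very different risks end up with the same score.

risks = [
    {"name": "frequent, trivial", "likelihood": 5, "impact": 1},
    {"name": "rare, catastrophic", "likelihood": 1, "impact": 5},
]

for r in risks:
    score = r["likelihood"] * r["impact"]   # the usual likelihood x impact product
    print(f"{r['name']}: score {score}")
# Both print a score of 5, yet they demand completely different responses.
```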
 
There are huge problems in applying any form of arithmetic in a simplistic way to any problem and assuming that gives an accurate sense of what is happening. I do see people do this in some areas, such as risk analysis, but even then it's not a score they really believe, just a way of trying to focus attention on the biggest problems.

Indeed. The problem with GRADE is that the pseudo-arithmetic is applied strictly by organisations like NICE and Cochrane. NICE got the right result, but it was more or less by chance that applying GRADE got them there.
 
Indeed. The problem with GRADE is that the pseudo-arithmetic is applied strictly by organisations like NICE and Cochrane. NICE got the right result, but it was more or less by chance that applying GRADE got them there.

I think that is why I had in mind a tool that basically captures the patterns that lead to bias and forces the authors to think through the issues and how they are dealt with, rather than the pseudoscience of imposing an arithmetic on top of the reviewers' personal prejudice (in terms of how they judge some of the different issues).

I do think it is possible to create a useful toolset to help people designing protocols and doing reviews but not what GRADE is (or at least what I think it is).
 
I agree with the analysis of the mechanics, but my understanding is that the quest is for a definition of a specific source of bias relating to treatment delivery. Lack of blinding causes problems specifically with subjective outcomes, but it is classified under lack of blinding nonetheless.
I realise that, but we wouldn't need to worry about what to call this built in bias if the researchers would acknowledge that all they are assessing with their questionnaires is whether the way the patients report their thoughts has changed, not whether there is any change in their health or ability to function. But I agree this is not the aim of this thread, so I'll desist.
 
I realise that, but we wouldn't need to worry about what to call this built in bias if the researchers would acknowledge that all they are assessing with their questionnaires is whether the way the patients report their thoughts has changed, not whether there is any change in their health or ability to function. But I agree this is not the aim of this thread, so I'll desist.

I think it is important for any assessment/review to understand what is being measured and hence what may be subject to bias. When using questionnaires you are, at best, getting someone's perception of something (like fatigue or physical function) rather than a more direct measure. Given that, it seems to me there are additional ways for bias to arise, which makes this important. Couple that with treatments that aim to change the perception of symptoms, and it looks like a significant factor in assessing bias.
 
Yes, I agree it's important. I'm just feeling very frustrated that the likes of Sharpe are still getting away with their claims that the PACE trial showed a real effect.
Absolutely. Which is why I think it is time to properly highlight that their trials exhibit a form of bias that has perhaps not been fully exposed before, and has maybe allowed them to peddle their nonsense a bit too easily. Though I do appreciate that nothing is ever likely to penetrate their non-scientific heads.
 
I'm just feeling very frustrated that the likes of Sharpe are still getting away with their claims that the PACE trial showed a real effect.

It's very irritating that Sharpe still does this and keeps repeating the same criticisms of others over and over again. And yet Fiona Lowenstein, in her recent Guardian column, referred to PACE as a "now-discredited study." I've referred to PACE as "now-discredited" previously in a number of posts, but this is the first time I've seen that phrase used as a normative statement by a major news organization. Whatever Sharpe says, PACE can these days credibly be called a "now-discredited" study.
 