From X



I teach the fraudulent PACE trial as an example of how not to conduct research in my research methods course. My students ask how this demonstrable slop got into, and stays in, a prestigious journal. Maybe someday I’ll have @TheLancet editors in as guest speakers to tell my students themselves.
 
This post was copied, and the following discussion moved, from here.

Anything less is settling for too little.
Stop playing games with questionnaires and claiming "success" because someone thinks they felt a little better on a random Tuesday...
The only question that matters is "Can you do all the things you could before, without limitation?" Yes or No
Yeah, even in cases where you don't have a cure, the increase or decrease in percentages would be massive. So if you can't even get a big increase in the numbers, the patient most likely didn't meaningfully get better. Simple.

The PACE trial authors dropped the actimeters, as someone else mentioned. They used the excuse that it was inconvenient for patients (lol). We should measure steps in all the clinical trials. It's just a cheap watch on the wrist with lots of important data.

No excuses, get it done. Unless there are specific reasons, any trial that doesn't measure step counts is automatically a big minus for me, because it shows the authors haven't read up on other ME/CFS studies like F and M, or even the PACE trial, which despite its flaws can teach you about study design (and what not to do).
 
They used the excuse that it was inconvenient for patients (lol).
That was the public excuse. The excuse behind the scenes to the trial governing committee was that actimeters had failed to show a benefit, therefore there was no point in using them.

Think about that. They are saying that they should not be required to use measures that might falsify their claim.
 
That was the public excuse. The excuse behind the scenes to the trial governing committee was that actimeters had failed to show a benefit, therefore there was no point in using them.

Think about that. They are saying that they should not be required to use measures that might falsify their claim.
Do you have a source for this? It may be relevant to something I'm (slowly) writing
 
Do you have a source for this? It may be relevant to something I'm (slowly) writing
Before you do, can I just jump in and say this is something that is often described inaccurately. I have clarified what happened in these two posts: here and here. Basically, the plan was always to use actigraphy as a predictor, not an outcome, but they did consider using it or two other things as objective outcome measures. They decided not to use any of them as primary outcome measures. Their reasoning for that was heinous.

They did not originally plan for it to be an outcome measure and then reverse that decision. [Editing to add: but see @Tom Kindlon's posts later in this thread, and my follow-up.]

Just tagging people who discussed this above @Robert 1973, @Sean, @rvallee.

I am reading Lucibee's blog now and will edit this if I am wrong!

Editing having read it: I don't think Lucibee's blog changes my stance. She writes:
In the original trial FAQs (now no longer available on the PACE trial website, for some reason) and in their response to Tom’s comments on the protocol, the investigators stated:
Although we originally planned to use actigraphy as an outcome measure, as well as a baseline measure, we decided that a test that required participants to wear an actometer around their ankle for a week was too great a burden at the end of the trial. We will however test baseline actigraphy as a moderator of outcome.
I cannot verify that quote as the page is no longer available. However, there's nothing in the Trial Management Group minutes or the protocol that supports it. It would be odd to say that in a public FAQ and not in private meetings, but I trust Lucibee that it is an accurate quote. It contradicts both the protocol and the minutes, though, so barring some contemporaneous evidence I haven't read yet, it's possible they got a bit confused there.

I have read Tom's letter on the protocol BMJ 2013;347:f5731. His mention of outcome measures is here:
One reason that the minutes are sought for the PACE (Pacing, Graded Activity, and Cognitive Behaviour Therapy—a Randomised Evaluation) trial, which looked at the effectiveness of treatments for chronic fatigue syndrome, is to find out why outcome measures were changed.1 None of the three primary outcomes were reported as in the protocol...

And their response BMJ 2013;347:f5963 is here:
Kindlon states that access to the committee minutes of the PACE (Pacing, Graded Activity, and Cognitive Behaviour Therapy—a Randomised Evaluation) trial is needed “to find out why outcome measures were changed.”1 We disagree. Firstly, the primary outcome measures were the same as those described in the protocol—fatigue and physical disability.2
Nothing else in their response is relevant.

So actigraphy is mentioned by neither, though I believe Tom has brought it up elsewhere.

It doesn't change the fact that their reasoning for not using actigraphy as an outcome measure was outrageous. That remains undeniably true.

From TMG minutes #11, 4 Nov 2004:
Actigraphy is to be given at baseline only, as a predictor. This is on the basis of research by the Dutch Nijmegen group who found it useful as a predictor (the more passive, the poorer outcome), but not useful for outcome.
And TMG minutes #12, 10 Dec 2004:
The issue of using actigraphy as an outcome measure was raised. It was noted that the Dutch study by Bleijenberg and colleagues reported that actigraphy was not a good outcome measure since the majority of patients are reasonably active and there is no change in this in spite of improvement in fatigue. However, pervasively passive people at baseline may do worse on CBT and perhaps better on GET.

A final decision on using this as an outcome has been postponed until we see how much of a measurement load actigraphy is, and it was agreed that this may be changed later next year if desired.

Happy to be corrected with other sources.
 
In the original trial FAQs (now no longer available on the PACE trial website, for some reason) and in their response to Tom’s comments on the protocol, the investigators stated:
Although we originally planned to use actigraphy as an outcome measure, as well as a baseline measure, we decided that a test that required participants to wear an actometer around their ankle for a week was too great a burden at the end of the trial. We will however test baseline actigraphy as a moderator of outcome.
It is over 20 years ago now, but as I recall when the funding was announced for the study in 2003, one person on a list I was on who seemed to have inside knowledge said that it involved actigraphy as an outcome measure i.e. that was the grant that was approved by the MRC.

But then by the time the protocol was published in 2007, actigraphy as an outcome measure was dropped.

It's very frustrating that, presuming this is the case, they could change this so radically from the multi-million pound grant that was approved.
 
Has the original grant application ever been made public?

This was an application at one stage, I'm not sure it was the final one:
THE PACE TRIAL IDENTIFIER.pdf

As I recall, it was obtained using a freedom of information act request. One person editorialises about it here:

It includes:
Secondary measures:
Efficacy:
[..]
2. Daytime physical movement (an objective measure of activity) will be measured over 48 hours with an Actiwatch attached to the ankle.
 
But then by the time the protocol was published in 2007, actigraphy as an outcome measure was dropped.

It's very frustrating that, presuming this is the case, they could change this so radically from the multi-million pound grant that was approved.

This was an application at one stage, I'm not sure it was the final one:
THE PACE TRIAL IDENTIFIER.pdf
As I recall, it was obtained using a freedom of information act request. One person editorialises about it here:
LISTSERV - CO-CURE Archives - LISTSERV.NODAK.EDU
Thank you so much for digging these out, @Tom Kindlon! So helpful. I've had a look.

It's unclear when the Pace Trial Identifier was written, but it's after October 2001 as they write
The outline proposal of this study (G010039) was approved for a full proposal in October 2001.
and well before April 2004, which is when Jane Bryant's editorial is dated. @Tom Kindlon, you mentioned that funding was announced in 2003, so some time between late 2001 and early 2003.

In the PACE Trial Identifier, the secondary measures are listed as follows [formatted with each measure on a separate line for ease of reading]:
Secondary measures:
Efficacy:
1. The self-rated Clinical Global Impression (CGI) change score (range 1-7) provides a self-rated global measure of change, and has been used in previous trials.30
2. Daytime physical movement (an objective measure of activity) will be measured over 48 hours with an Actiwatch attached to the ankle.
3. The Hospital Anxiety and Depression scale will measure change in anxiety and depression.31
4. The 36 item short-form health survey (SF-36) measures not only physical but also social and role functioning.24
5. The EuroQOL (EQ-5D) visual analogue scale provides a simple global measure of quality of life.32
6. The Client Service Receipt Inventory (CSRI), adapted for use in CFS,33 will measure hours of employment/study, wages and benefits received, allowing another more objective measure of function.
7. An operationalised Likert scale (from much better to much worse) of the nine CDC symptoms of CFS.1

The secondary outcome measures in the published trial per the main paper were:
  1. CGI
  2. Work and Social Adjustment Scale
  3. 6 Minute Walk Test
  4. Jenkins Sleep Scale
  5. HADS
  6. CFS symptom count
  7. Poor concentration or memory
  8. Postexertional malaise
So there are numerous changes from what is stated in the PACE Trial Identifier. I would have thought it would be completely normal for many things to change between a funding application and the protocol, with changes after that needing to be well-justified. Since the protocol was published mid-trial, this case is a little different.

Actigraphy for 48 hours does indeed feature in the Identifier but the 6MWT does not, and the latter was ultimately used.

Let's say for argument's sake that that document is the final version and it was indeed submitted to the MRC. Would the MRC's decision have been different if actigraphy had not been listed as an outcome measure? Or if the 6MWT had been there instead of it? Those are genuine questions, not rhetorical. Was actigraphy a dealmaker?

The Trial Management Group Minutes start in June 2002. Unless I missed it, actigraphy doesn't get a mention until meeting #7 in May 2004, when the minutes note [underlining added]:
Objective Measures of outcome. We had much discussion about various potential objective measures of outcome, including a six-minute walking test where the patient is timed using a stopwatch and the distance walked is recorded. The possibilities of using actigraphy and the step test for fitness were also discussed. We agreed that we would pilot the use of actigraphy, the step test and the six minute walking test in the first three centres. We had some discussion about whether an objective measure was to be a primary outcome. We had some discussion about the power of the trial to detect clinically significant differences between groups using the six-minute walking test.

The next mention is in meeting #10 [underlining added]:
Primary outcome(s)
Discussion took place regarding the primary and secondary outcomes – number of outcomes and efficacy. It was decided that it was acceptable to have several primary outcomes for the trial, and that fatigue and disability could be considered separately and in combination. Cost utility could be considered as a fourth primary outcome. Actigraphy was not considered suitable as a primary outcome, but should remain as a predictor only.

Then in meeting #11, we hear why:
Actigraphy is to be given at baseline only, as a predictor. This is on the basis of research by the Dutch Nijmegen group who found it useful as a predictor (the more passive, the poorer outcome), but not useful for outcome.
And in meeting #12, we get the clanger:
The issue of using actigraphy as an outcome measure was raised. It was noted that the Dutch study by Bleijenberg and colleagues reported that actigraphy was not a good outcome measure since the majority of patients are reasonably active and there is no change in this in spite of improvement in fatigue. However, pervasively passive people at baseline may do worse on CBT and perhaps better on GET.

A final decision on using this as an outcome has been postponed until we see how much of a measurement load actigraphy is, and it was agreed that this may be changed later next year if desired.

So whatever was written in the Identifier, according to the Trial Management Group minutes, actigraphy was not a planned outcome measure at any point from 2002. If the MRC required or requested an objective measure of physical activity, and this was supplied with no intention to use it, then that would be deceitful, but I have not seen anything that would suggest that.

I think the thing for advocates to focus on is not that actigraphy appeared in the PACE Trial Identifier as a secondary outcome measure, but their stated reason for not using it as an outcome measure, and the fact that CBT participants did not improve on the objective measure that was used - the 6MWT - any more than those doing SMC or APT, and GET participants' improvement on the 6MWT was statistically significant, but small and unlikely to be clinically significant.

Tagging @V.R.T. @Robert 1973 @Sean @rvallee to make sure you all see this update after Tom's helpful input.
 
I think the thing for advocates to focus on is not that actigraphy appeared in the PACE Trial Identifier as a secondary outcome measure, but their stated reason for not using it as an outcome measure, and the fact that CBT participants did not improve on the objective measure that was used - the 6MWT - any more than those doing SMC or APT, and GET participants' improvement on the 6MWT was statistically significant, but small and unlikely to be clinically significant.
Not just as an objective outcome measure of immediate improvement, but also critically for the GET arm to test that:

1. Patients did actually follow the full treatment protocol, and did not simply report that they did (which is a serious possibility given how much pressure they were likely under to report that they did).

2. Even if patients did follow it, that they did not simply substitute the treatment for their normal daily activities, with no overall increase in activity.

3. The subjective and objective outcome measures correlated.

A strong positive result on actigraphy (including correlations) would have been one of the most powerful pieces of evidence possible in favour of their hypothesis, and they would have been crowing it from the rooftops if it had delivered, and quite rightly so.

But they knew actigraphy could falsify their claim of increased activity (due to treatment), and were likely to do so, and so they made sure it couldn't by the simple but very effective tactic of just not collecting that data in the first place. They deliberately weakened and perverted the trial by refusing to collect some of the most robust and critical data possible.

The excuses given – that actimeters were too much of a burden to patients to wear, and that they failed to report a benefit in the Dutch studies – are utterly appalling, and should have immediately disqualified the study from proceeding any further or being taken seriously.

Actigraphy should also have been a primary outcome measure. No question about that.

This piece of brazen skullduggery and its consequences for us enrages me to this day. It is straight and cruel fraud, as far as I am concerned. They knew what they were doing, had no excuses, and I will never forgive them for it.

:mad::mad::mad:
 
Not just as an objective outcome measure of immediate improvement, but also critically for the GET arm to test that:

1. Patients did actually follow the full treatment protocol, and did not simply report that they did (which is a serious possibility given how much pressure they were likely under to report that they did).

2. Even if patients did follow it, that they did not simply substitute the treatment for their normal daily activities, with no overall increase in activity.

3. The subjective and objective outcome measures correlated.

A strong positive result on actigraphy (including correlations) would have been one of the most powerful pieces of evidence possible in favour of their hypothesis, and they would have been crowing it from the rooftops if it had delivered, and quite rightly so.
I agree. I think advocates were and are right to demand actigraphy as an outcome measure. If actigraphy had been measured at 0 and 52 weeks, and there were no statistically significant differences between groups, it would have supported the argument that the changes in subjective measures were due to various biases.

However, it's also possible that actigraphy would have shown a benefit of GET/CBT over SMC/APT or of GET over CBT in the PACE trial, if it had been done. Which would have been bad for us.

Why might it have shown a benefit of GET over CBT? Well, here's a quick analysis of the proportions in each PACE trial arm whose physical function score increased by what we might consider a lot (mistakes possible):

Change in SF36PF between 0 and 52 wks:

               GET     CBT     SMC     APT
>=20 points    55.8%   50.0%   38.2%   35.9%
>=40 points    26.6%   17.6%   13.8%    9.8%
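For anyone wanting to reproduce this kind of threshold analysis from per-participant data, a minimal sketch is below. The change scores are invented for illustration; they are not real PACE data.

```python
# Sketch of the threshold analysis above, on invented SF36PF change
# scores (not real PACE data): what fraction of an arm improved by at
# least a given number of points between 0 and 52 weeks?
def proportion_improved(changes, threshold):
    """Fraction of participants whose SF36PF change meets the threshold."""
    return sum(1 for c in changes if c >= threshold) / len(changes)

# Hypothetical per-participant change scores for one trial arm.
arm_changes = [45, 30, 25, 20, 15, 10, 5, 0, -5, 40]

print(proportion_improved(arm_changes, 20))  # 0.5
print(proportion_improved(arm_changes, 40))  # 0.2
```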

If those big changes in SF36PF correlated with jumps in steps, you could have a difference.

While we might not trust change in the subjective SF36PF score too much in a trial where there's so much brainwashing going on, and there's a lot of variation in how SF36PF and steps match up at an individual level, the SF36PF does correlate with steps at group level. This is from the phase III rituximab trial where expectations were high:
Looking at table 2 of the 2019 paper:
  • The rituximab group improved by roughly 10 points on the SF36PF and 480 steps.
  • The placebo group improved by roughly 13 points on the SF36PF and 671 steps.

In Rekeland et al. 2022's data (participants did not do an intervention, they just wore a step counter), the mean change in steps for those whose SF36PF changes by -20 to +5 is 100-200 steps, with huge variation (range -1827 to +2637). But the mean change in steps for those whose SF36PF changes by +10 to +45 is 1454, largely driven by 3 participants with changes of 20 points/4208 steps, 25 points/2211 steps and 45 points/6319 steps.

At a certain magnitude, changes in SF36PF are likely to correlate with an increase in steps per day, regardless of whether the intervention is truly effective or not. If one group is pushing themselves harder during the period of measurement than another, that will show up as higher steps. In the PACE Trial Identifier, they specified a 48-hour measurement. All you need is for a subgroup of one arm to be able to overexert for 2 days while another subgroup - perhaps the APT group, who have been expressly told not to overexert - continues their usual routine, and you've got a statistically significant difference between groups in steps per day.
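To illustrate that point, here is a toy simulation of a 48-hour step-count window in which a subgroup of one arm briefly pushes itself while the comparison arm keeps its usual routine. Every parameter (arm size, baseline steps, spread, size of the push) is invented for the sketch, not taken from PACE or any other trial; the point is only that a temporary push by a subgroup is enough to produce a large between-group t statistic.

```python
# Toy simulation of a short measurement window where a subgroup of one
# arm briefly overexerts. All numbers are invented for illustration.
import random
import statistics

random.seed(42)

def simulate_arm(n, baseline=4000, sd=1200, overexerters=0, boost=3000):
    """Mean steps/day over a 2-day window for n participants; the first
    'overexerters' of them push harder during the measurement window."""
    steps = [random.gauss(baseline, sd) for _ in range(n)]
    for i in range(overexerters):
        steps[i] += boost  # a temporary push, not a lasting improvement
    return steps

apt = simulate_arm(150)                    # told not to overexert
get = simulate_arm(150, overexerters=50)   # a third push for those 2 days

# Welch-style t statistic for the difference in group means.
diff = statistics.mean(get) - statistics.mean(apt)
se = (statistics.variance(get) / 150 + statistics.variance(apt) / 150) ** 0.5
print(f"mean difference: {diff:.0f} steps/day, Welch t ~= {diff / se:.1f}")
```

Even though no one's underlying capacity changed, the boosted subgroup alone drags the arm's mean up far enough that the t statistic comfortably clears conventional significance thresholds.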

The PACE cohort was very heterogeneous. For example, they deliberately recruited those with fibromyalgia. To be honest, I'd be surprised if it didn't contain a subgroup that either genuinely benefitted from being told to do more or was capable of overexerting for 2 days.

Had two objective measures shown superiority of GET over CBT, they would also have researched themselves out of a job. Not sure they wanted that, even though they said they wanted to know "which patients require the more complex CBT rather than the simpler and more readily available GET".
 
A strong positive result on actigraphy (including correlations) would have been one of the most powerful pieces of evidence possible in favour of their hypothesis, and they would have been crowing it from the rooftops if it had delivered, and quite rightly so.
Which says everything about how they knew it would fail. By hiding it they can play games, but had it been measured and come back negative it would have been more awkward to sweep under the rug. Which is what they did anyway with the working-ability data, which was the actual main goal.

Then again, they argued, officially, that patients with an SF-36 physical functioning score below that of most advanced cancers are active enough that there is no need to try to get them to be more active, in a trial where getting them to be more active is the entire process. And they got away with it, so it's not as if anything matters: the outcomes were pre-determined, and nothing would have made them, or anyone involved in the system, acknowledge the failure of the model. Especially when they managed to get the treatment model being 'trialled' in PACE into the national guidelines before the trial even completed. The trial was as rigged as an election with an autocrat-for-life: it's not who votes that counts, it's who counts the votes.

And now it's just routine for trials to see their primary outcomes fail and the treatment being widely recommended anyway, there is no turning back from this, not after millions of lives have been sacrificed on their altars. Fraud is the default now. Whether PACE was a watershed moment is unclear, but this is the system they built for themselves. That the system works against us is clearly not a worry to anyone working in it.
 