A safe and effective micro-choice based rehabilitation for patients with long COVID: results from a quasi-experimental study (Frisk et al., 2023)

78 people started the training; 2 didn't complete it, reportedly due to other illness.
A further 5 dropped out afterwards: 4 because they didn't want to continue, and 1 due to a new illness.

That left 71.

Even then, there isn't data for all 71 remaining participants for all outcomes.

Table 2 shows the numbers of participants at baseline and at three months. There are huge drops in the numbers of participants at 3 months, and no consistency in how many people are included at baseline. It really is very shoddy, especially given there are no controls. How is it reasonable to include baseline data from people who weren't even able to complete the training due to illness? It artificially depresses the baseline.
e.g. numbers of participants:
CFQ: 77 at baseline; 74 at 7 days; 71 at 3 months
Sick leave: 39 of 62 employed (63%) on sick leave at baseline; 23 of 55 (43%) at 3 months
VO2 peak: 77 at baseline; 67 at 3 months
Dyspnea-12: 68 at baseline; 71 at 3 months

Re the sick leave: sick leave was measured for employed participants only. They say that 63% of the 62 employed participants at baseline were on sick leave (so 23 people were employed and not on sick leave at baseline), but only 43% were on sick leave at 3 months. Given that only 55 participants were assessed for sick leave at 3 months, that suggests 32 people were employed and not on sick leave at 3 months. That's a net increase of 9 people out of 78 moving from on sick leave to not on sick leave between the course and the 3-month follow-up.
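
A back-of-the-envelope check of that arithmetic (a sketch only, using the counts quoted above):

```python
# Back-of-the-envelope check of the sick-leave numbers quoted above.
# Counts are as reported in the thread (39/62 at baseline, 23/55 at 3 months);
# note 43% of 55 is ~23.7, so the 3-month count could be 23 or 24 depending
# on rounding in the paper.
employed_baseline, on_leave_baseline = 62, 39
employed_3m, on_leave_3m = 55, 23

not_on_leave_baseline = employed_baseline - on_leave_baseline   # 23
not_on_leave_3m = employed_3m - on_leave_3m                     # 32

net_shift = not_on_leave_3m - not_on_leave_baseline             # 9
print(f"Not on sick leave: {not_on_leave_baseline} -> {not_on_leave_3m}")
print(f"Net change: {net_shift} people out of 78 enrolled, "
      f"with {employed_baseline - employed_3m} fewer people assessed at 3 months")
```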

The discussion claims a rapid return to work. I don't think that fairly describes what we have seen here, at least not as an effect of the 3-day course.

If we look at Figure 2, which gives the waterfall diagram for this 'quasi-experiment', 16 of the 120 people assessed for eligibility are reported as improving between the eligibility assessment and the baseline measures to the point where they became ineligible. Between baseline and the course, which surely was not a long gap, another 3 out of 83 improved to the point where they became ineligible. Clearly, people were improving all the time, including before they attended the 3-day course.
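
Put as rough proportions (a quick sketch using only the Figure 2 numbers quoted above):

```python
# Rough proportions from the Figure 2 flow numbers quoted above.
assessed_for_eligibility = 120
improved_before_baseline = 16     # became ineligible before baseline measures
at_baseline = 83
improved_before_course = 3        # became ineligible between baseline and the course

print(f"{improved_before_baseline / assessed_for_eligibility:.0%} of those assessed "
      f"improved before baseline was even measured")
print(f"A further {improved_before_course / at_baseline:.0%} improved between "
      f"baseline and the 3-day course")
```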

Also, sick leave was self-reported by people who had just had 3 days of being told that they should not let their symptoms dictate what they did. They would have got the message that people who took sick leave were not taking charge of their illness. That probably affected reporting.

Re the VO2 peak: a quick Google search gives an average VO2 peak for sedentary men of 35 to 40 ml/kg/min, and just 27 to 30 ml/kg/min for sedentary women. Even with all those dropouts (77 down to 67), there wasn't much of a change in VO2 peak: from 30.8 to 31.5 ml/kg/min. It's worth noting that the VO2 peaks were actually pretty good, even at baseline, especially given 82% of the participants were female. Table 3 gives the data expressed as a percentage of the expected value: from 92% of predicted to 95% of predicted.
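
For scale, here's the size of that change (a sketch using the figures quoted above; units assumed to be ml/kg/min):

```python
# Size of the reported VO2 peak change, using the means quoted above (ml/kg/min).
vo2_baseline, vo2_3m = 30.8, 31.5
pct_predicted_baseline, pct_predicted_3m = 92, 95

abs_change = vo2_3m - vo2_baseline              # ~0.7 ml/kg/min
rel_change = 100 * abs_change / vo2_baseline    # ~2.3%

print(f"Change: {abs_change:.1f} ml/kg/min ({rel_change:.1f}% of baseline), "
      f"or {pct_predicted_3m - pct_predicted_baseline} percentage points of predicted")
```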

Re fatigue, 99% of the participants reported fatigue at baseline. At 3 months, 77% still reported fatigue, with 18% reporting severe fatigue - and that's despite the known problems with the CFQ.

(I'm tired, so I'm not going to check I have all the numbers right, but will just post this.)

I was going to note something similar, but focusing on how on earth they can claim a change in the mean is significant when the 'n' numbers make clear they've lost a substantial number of respondents between the 'baseline' and '3-month' groups they are comparing - likely those whose illness is more severe and who don't want to go through another CPET.

The 30-second sit-to-stand test:
baseline: n = 75, mean (SD) = 19.0 (6.5)
3 months: n = 68, mean (SD) = 22.6 (6.9)

Bit weird that they've managed to filter out approximately 10% of participants and still end up with a bigger standard deviation. But then think about the specificity of recruitment, the nature of long COVID, what a difference 3 months makes if they are recruiting people who are defined as having it at 3 months, and the 'treatment' apparently being anything goes...


The VO2 peak:
baseline: n = 77, mean (SD) = 30.8 (6.2)
3 months: n = 67, mean (SD) = 31.5 (6.4)

To me this is 'the one', because of what we all know about CPETs and what they do to those who have PEM. It's a bit strange that 2 more did this at baseline than did the sit-to-stand test, but hey. 10 of the 77 people are missing from their 'mean' at 3 months. It is just laughable to use group means rather than comparing individuals to themselves at baseline. And even then, they've just shown that their treatment makes basically no difference to the measure that is probably the most coercion-free.
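
To illustrate the point about group means versus comparing individuals to themselves, here's a toy simulation (entirely made-up numbers, not the study's data): if the most affected ~10% drop out before follow-up, the unpaired comparison of means shows an apparent 'improvement' even when nobody's underlying value changes at all.

```python
# Toy simulation (made-up numbers, not the study's data): follow-up scores are
# identical to baseline for everyone (true effect = 0), but the 10 participants
# with the lowest baseline scores are lost to follow-up (e.g. too unwell for
# another CPET). The unpaired group means still shift; the paired change does not.
import random

random.seed(1)
baseline = [random.gauss(30.8, 6.2) for _ in range(77)]   # VO2-peak-like scores
followup = dict(enumerate(baseline))                      # no true change at all

# Lose the 10 people with the lowest baseline scores.
lost = set(sorted(range(77), key=lambda i: baseline[i])[:10])
completers = [i for i in range(77) if i not in lost]

mean = lambda xs: sum(xs) / len(xs)
unpaired_baseline = mean(baseline)                             # all 77
unpaired_followup = mean([followup[i] for i in completers])    # only 67
paired_change = mean([followup[i] - baseline[i] for i in completers])

print(f"Unpaired means: {unpaired_baseline:.1f} (n=77) -> {unpaired_followup:.1f} (n=67)")
print(f"Paired mean change in completers: {paired_change:.1f}")
```

It's an artificial setup, of course, but it shows why only a within-person comparison of the people actually measured twice says anything about change.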

Just because a piece of research is somehow allowed not to state a null hypothesis, does that change what the results show? Even with 3 months for many people to have improved naturally, and having whittled out 10 of them through coercion (the 'non-believers') and harm or risk of harm etc., the comparison of the whittled 3-month group's average with the non-whittled baseline average still shows essentially no difference. Isn't that basically the null hypothesis proven?

I mean, given they wouldn't have published if they had caused deterioration, is there anything that could be more of a slam dunk than 'it makes no difference to the underlying measure', ergo 'null hypothesis proven'?


Weirdly, the Dyspnea-12 measure showed an increase in the number of participants at 3 months versus baseline, from 68 to 71 - how is that allowed? Do they just roll them in with the rest?


And their sick leave 'data' goes from 39 participants counted in the mean at baseline to only 23 - barely over half, if it is even the same people - counted in the mean at 3 months.
How on earth - I'm sorry, it isn't feasible - can you get around 70 (often more) participants to turn up and do a CPET or fill in pretty long questionnaires, and then claim only 23 wanted or were able to do the online log of their sick leave? A comparatively simpler, and more important, task done at the same time.

Well, I normally caveat my accusations, but in the absence of any explanation I can imagine being convincing, I'm thinking something potentially significant happened there to deter or exclude those who had a lot of sick leave from filling it in.

And after all, we've no way of knowing which of the original participants were the ones 'lost' or included for each of these measures, as there seems little rhyme or reason to it. At least with the attrition-over-x-months designs other BPS studies have used, you can see it isn't someone appearing for one measure and not another, etc. But this is... confusing.
 
3 days from 08:30 to 16:00, including exercise. That must surely have had a major impact on what sort of participants signed up. It's sounding a lot like the Lightning Process, perhaps without the high price tag. No coincidence that this is Norwegian research, I think.

Co-author Marte Jürgensen claims she recovered from ME/CFS through the Lightning Process, and is/was also affiliated with Recovery Norway (vice president).

Makes me question how they went from the initial 120 patients screened for eligibility down to 83 as well. I believe the protocol stated motivation as part of the criteria.

edit:
After checking, it was not the Lightning Process, but a similar program.
 