Trial Report: Cost Utility of Specialist Physiotherapy for Functional Motor Disorder (Physio4FMD), 2025, Hunter, Stone, Carson, Edwards et al

Hutan

Moderator
Staff member
One trial has resulted in three papers. As well as this paper, there is:
  • the main paper, which found a null result:
Specialist physiotherapy for functional motor disorder in England and Scotland (Physio4FMD):... 2024, Nielsen, Stone, Carson, Edwards et al
  • an analysis of factors that predict outcome:
Which factors predict outcome from specialist physiotherapy for functional motor disorder?... 2025, Nielsen, Stone, Carson, Edwards et al
___________________


Cost Utility of Specialist Physiotherapy for Functional Motor Disorder (Physio4FMD)
Economic Analysis of a Pragmatic Randomized Controlled Trial


Abstract
Background and Objectives
Functional motor disorder (FMD), a motor-dominant variant of functional neurologic disorder, is a disabling condition associated with high health and social care resource use and poor employment outcomes. Specialist physiotherapy presents a possible treatment option, but there is limited evidence for clinical effectiveness and cost-effectiveness. Physio4FMD is a multicenter randomized controlled trial of specialist physiotherapy for FMD compared with treatment as usual (TAU). The aim of this analysis was to conduct a randomized trial-based economic evaluation of specialist physiotherapy compared with TAU.

Methods
Eleven centers in England and Scotland randomized participants 1:1 to specialist physiotherapy or TAU (referral to community neurologic physiotherapy). Participants completed the EuroQoL EQ-5D-5L, Client Service Receipt Inventory, and Work Productivity and Activity Impairment Questionnaire at baseline, 6 months, and 12 months. The mean incremental cost per quality-adjusted life year (QALY) for specialist physiotherapy compared with TAU over 12 months was calculated from a health and social care and wider societal perspective. The probability of cost-effectiveness and 95% CIs were calculated using bootstrapping.

Results

The analysis included 247 participants (n = 141 for specialist physiotherapy, n = 106 for TAU). The mean cost per participant for specialist physiotherapy was £646 (SD 72) compared with £272 (SD 374) for TAU. Including the costs of treatment, the adjusted mean health and social care cost per participant at 12 months for specialist physiotherapy was £3,814 (95% CI £3,194–£4,433) compared with £3,670 (95% CI £2,931–£4,410) for TAU, with a mean incremental cost of £143 (95% CI £–825 to £1,112). There was no significant difference in QALYs over the 12-month duration of the trial (0.030, 95% CI –0.007 to 0.067). The mean incremental cost per QALY was £4,133 with an 86% probability of being cost-effective at a £20,000 threshold. When broader societal costs such as loss of productivity were taken into consideration, specialist physiotherapy was dominant (incremental cost: £−5,169, 95% CI £–15,394 to £5,056).

Discussion

FMD was associated with high health and social care costs. There is a high probability that specialist physiotherapy is cost-effective compared with TAU, particularly when wider societal costs are taken into account.

Trial Registration Information

International Standard Randomised Controlled Trial registry, ISRCTN56136713.

https://www.neurology.org/doi/10.1212/CPJ.0000000000200465
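For anyone unfamiliar with how the QALY figures above are typically produced: the usual approach with the EQ-5D-5L is to convert each response into a utility score (1 = full health) and take the area under the curve across the baseline, 6-month and 12-month assessments. A minimal sketch of that calculation; the function and the utility values are made up for illustration, and the paper's exact method (e.g. adjustment for baseline utility) may differ.

Python:
# Standard area-under-the-curve QALY calculation from utility scores
# measured at baseline, 6 months and 12 months. Illustrative only.

def qalys_auc(utilities, times_years):
    """Trapezoidal area under the utility curve = QALYs over the period."""
    total = 0.0
    for (u0, u1), (t0, t1) in zip(zip(utilities, utilities[1:]),
                                  zip(times_years, times_years[1:])):
        total += 0.5 * (u0 + u1) * (t1 - t0)
    return total

# Hypothetical participant: utility 0.55 at baseline, 0.62 at 6 months, 0.60 at 12 months
print(qalys_auc([0.55, 0.62, 0.60], [0.0, 0.5, 1.0]))  # ~0.60 QALYs over the year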
 

They have a new economic analysis of the trial out. They say the intervention is cost-effective. In a tweet the main author (Nielsen) wrote: "It is possible to have a treatment that may not be significantly more clinically effective, but has a high probability of being more cost effective."

So apparently it's cost-effective, by whatever way they're calculating it. Interesting how that works.
 
In a tweet the main author (Nielsen) wrote: "It is possible to have a treatment that may not be significantly more clinically effective, but has a high probability of being more cost effective."
What are they comparing the effect against?

If they found that it’s as effective as another intervention, but a lot cheaper, it would make sense to say it’s more cost-effective.

But if the effect is not clinically significant, it can’t be cost effective because you’d essentially be dividing by zero.
 
They have a new economic analysis of the trial out. They say the intervention is cost-effective. In a tweet the main author (Nielsen) wrote: "It is possible to have a treatment that may not be significantly more clinically effective, but has a high probability of being more cost effective."

So apparently it's cost-effective, by whatever way they're calculating it. Interesting how that works.
There was no significant difference in QALYs over the 12-month duration of the trial
Amazing. They found a way of dividing by zero: find a machine that doesn't mind being stuck in an infinite loop. Yes, it's the most expensive machine in the world, and it produces nothing of value, but it will just keep on adding -1 to 1 over and over again.

"Imagine a world"-based medicine for the, uh, loss, I guess.
 
What are they comparing the effect against?

If they found that it’s as effective as another intervention, but a lot cheaper, it would make sense to say it’s more cost-effective.

But if the effect is not clinically significant, it can’t be cost effective because you’d essentially be dividing by zero.
They say treatment as usual. Despite a 4% difference in costs (according to them, which is an unreliable source, so whatever). Which all seems to ignore the fact that TAU remains ongoing. And for, as they admit, no difference in QALYs, even though it's the QALY value that is used here. But the dividing-by-zero machine will happily go ahead and do that.
 
Although I guess they're actually comparing this to itself:
82% of participants (n = 87) in TAU received community neurologic physiotherapy.
So I guess the idea is the difference between generic neurologic physiotherapy, and specialist psychosomatic physiotherapy, which is even more generic.

Oh, my bad, they divided by 0.03, so not quite zero:
The mean incremental cost per QALY gained for specialist physiotherapy compared with TAU from a health and social care perspective was £4,133 (mean incremental cost of £143 [95% CI £–1,080 to £1,367] divided by mean incremental QALYs of 0.03 [95% CI –0.01 to 0.08]).
Now, 0.03, which is somehow there despite there being "no significant difference in QALYs over the 12-month duration of the trial", amounts to a mean of 11 days.

This would sound impressive:
If the difference in costs using data from NHS England (£84) is added to the additional cost of specialist physiotherapy (£374) and divided by the incremental QALYs (0.03), the incremental cost-effectiveness ratio would be £15,267 per QALY gained.
If the QALY gain wasn't a measly 11 days.
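A rough check of both figures, using the rounded numbers quoted above (the paper's own results use unrounded values, so they will differ slightly):

Python:
# 0.03 QALYs expressed as days of full health
print(0.03 * 365)          # ~11 days

# The £15,267/QALY figure: (£84 NHS England cost difference + £374 additional
# cost of specialist physiotherapy) divided by 0.03 incremental QALYs
print((84 + 374) / 0.03)   # ~15,267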

Lies, damned lies, and statistics. This is basically delusional.
 
I tried to take a look at the math.
When the cost of the intervention (including training) is added, the adjusted mean health and social care cost per participant for specialist physiotherapy was £3,814 (95% CI £3,194–£4,433) compared with £3,670 (95% CI £2,931–£4,410) for TAU, with a mean incremental cost of £143 (95% CI £-825–£1,112) compared with TAU.
Incremental cost is how much more it cost per participant to use specialist physio (I’ll call it SP) compared to TAU.

£143 more for SP vs TAU.
The difference between the two groups for A and E and inpatient stays was £84 greater (95% CI –£1,409 to £1,577) for specialist physiotherapy compared with TAU using data from NHS England and £484 less (95% CI –1,321 to 353) for the same patient cohort in the patient-reported population.
A and E = accident and emergency. Inpatient = required to stay at hospital etc.

£84 more for SP vs TAU.
The difference in the total cost of outpatient contacts with specialists was £165 greater (95% CI –£278 to £609) for specialist physiotherapy compared with TAU in the NHS England data set and £2 less (95% CI –£334 to £330) in the self-reported data.
Outpatient = did not have to stay at hospital.

£165 more for SP vs TAU.
If the difference in costs using data from NHS England (£84) is added to the additional cost of specialist physiotherapy (£374) and divided by the incremental QALYs (0.03), the incremental cost-effectiveness ratio would be £15,267 per QALY gained.
I do not understand where they got £374 from. It only appears as a standard deviation and is not in any of the supplementary files.

£143 + £84 + £165 = £392
With 0.03 QALY, that’s £13,067 per QALY.

If the £143 cost of SP is supposed to be £374 in the above calculation, the total would be £623 or £20,767 per QALY (more than the standard £20,000 threshold they quote).
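For what it's worth, those sums check out, and the mysterious £374 happens to equal the difference between the mean per-participant treatment costs given in the abstract (£646 - £272), which would fit the paper's phrase "additional cost of specialist physiotherapy". That is a guess, though, not something the paper spells out here.

Python:
# Re-running the sums above (rounded inputs, so small rounding differences are expected)
print(143 + 84 + 165)                  # 392
print(round((143 + 84 + 165) / 0.03))  # ~13,067 per QALY

# Swapping the £143 incremental cost for £374
print(374 + 84 + 165)                  # 623
print(round((374 + 84 + 165) / 0.03))  # ~20,767 per QALY

# Possible source of the £374 (speculative): difference between the
# mean per-participant treatment costs reported in the abstract
print(646 - 272)                       # 374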
 
[Attached image: first part of a table from the supplementary files]

The far more interesting story is found in the supplementary files. This is the first part of the table.
Group A completed follow-up before March 23, 2020 (when the national COVID-19 lockdown was instigated in the UK).

Group B completed treatment before March 23, 2020, but completed follow-up after March 23, 2020.

Group C were randomly assigned to treatment groups but did not receive treatment before March 23, 2020, and completed follow-up after March 23, 2020. Only 30 (34%) participants in group C received their physiotherapy treatment within the trial follow-up period and the treatment was delayed (8 from specialist physiotherapy, received at a median of 253 days post randomisation; 22 from treatment as usual, received at a median of 174 days post randomisation).

Group D were recruited in the extension period from Aug 3, 2021.
They excluded C because they barely completed the study due to lockdown. That’s fine.

But there is a drastic difference between pre- and post-pandemic participants.

For pre-pandemic participants (groups A and B):
An incremental cost of £745 and incremental QALYs of 0.02. That's £37,250 per QALY, almost double the £20,000 threshold (quick check below).

For post-pandemic participants (group D):
SP was cheaper than TAU with a better outcome, so it would be a win-win.
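The pre-pandemic figure is just the same division applied to the subgroup numbers (rounded, so approximate):

Python:
# Pre-pandemic (groups A and B): incremental cost / incremental QALYs
print(745 / 0.02)   # 37,250 per QALY, well above the £20,000/QALY threshold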

If all of groups A and B completed the intervention before the pandemic and follow-up was 12 months, they would have had all of their data for half a year before they recruited group D. There is no mention of blinding in any of the papers. So they might have known that the treatment didn't work before they decided to recruit more participants, and the new participants tipped the scales in their favour!

Edit: they use «masking» instead of blinding. So the comment above might not be correct.
 
Researchers collecting the trial outcomes, statisticians, and health economists were masked to treatment allocation, and participants were asked not to reveal their group allocation to research workers. Due to the nature of the intervention, it was not possible to mask the trial manager, participants, or treating clinicians.
Contributors
GN led the study and wrote the first draft of the manuscript. GN, AC, MJE, LHG, RMH, JM, LM, IN, MR, and JS contributed to the study design and funding acquisition and were involved at all stages. GN and KH designed the trial intervention, wrote the intervention workbook and treatment manual, trained the physiotherapists, and provided supervision to the specialist physiotherapist group. LM and TCL designed and completed the statistical analysis. MLN was involved in the data analysis. BSS and HN were trial managers. A-MS was the lead research assistant. GN, BSS and A-MS verified the data. All authors were members of the Trial Management Group. All authors helped to interpret the data, to critically revise the manuscript for important intellectual content and approved the final version. All authors had full access to the data if desired and had final responsibility to submit the manuscript for publication.
I wonder how blinding worked if all authors had access to the data. Perhaps that access did not include the key to unblind the data?
 
I wonder how blinding worked if all authors had access to the data. Perhaps that access did not include the key to unblind the data?
It feels like, these days, surely only fools believe that double-blinding is needed purely because of the minds of the participants, and has nothing to do with the possibility that the people whose fortunes depend on participants 'submitting the right answers to the questionnaire' might be the ones who change their behaviour.
 
This is what I don't get. They're using 0.03 even though it's not a statistically significant finding, but I don't see them explaining that decision anywhere, unless I missed it.
They «address» it towards the end of the economic analysis:
Our findings need to be interpreted alongside the findings in the clinical trial article, which found no significant difference on the primary outcome of the SF36 Physical Functioning domain.10 However, the specialist physiotherapy group was twice as likely to report an improvement in their motor symptoms at 12 months on a patient-reported Clinical Global Impression Scale, and specialist physiotherapy also led to significantly higher scores on SF36 Physical Role Limitations, SF36 Social Functioning, Hospital Anxiety and Depression Scale Anxiety,10 and EQ-5D-5L at 6 months, as reported in this article.

The contrasting findings between the economic evaluation and analysis of the primary outcome of the trial may in part be a result of a different burden of proof for economic evaluations to evaluate cost-effectiveness compared with determining clinical effectiveness. The outcomes of interest in economic evaluations, including resource use, costs, and QALYs, are not powered for as the primary outcome. Furthermore, the skewed nature of the data and difficulty with calculating 95% CIs for the incremental cost effectiveness ratio indicate that economic evaluations do not use p values to determine cost-effectiveness but, instead, present the probability that the intervention is cost-effective to inform decision making.28 Another important contributor to the different findings is likely to be that the primary outcome of clinical effectiveness was an overly narrow view of the potential benefits of physiotherapy. The secondary outcome, the patient-reported Clinical Global Impression Improvement score, allows for a broader assessment of potential impacts that specialist physiotherapy can have across various aspects of patients' lives, including the cost impact, thus capturing a wider range of outcomes. It is notable that consensus recommendations for outcome measures in FND (published after this trial was planned) have recommended the patient-reported Clinical Global Impression of Improvement as the primary outcome measure in trials of interventions in FND.29
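For context, "the probability that the intervention is cost-effective" is usually estimated by bootstrapping: resample participants with replacement many times, recalculate the incremental cost and incremental QALYs in each resample, and count the proportion of resamples in which the net monetary benefit at a chosen willingness-to-pay threshold is positive. A minimal sketch with made-up data (not the trial's individual-level data), leaving out the covariate adjustment and missing-data handling a real analysis would include:

Python:
import numpy as np

rng = np.random.default_rng(0)

# Made-up per-participant costs (£) and QALYs for two arms, illustration only
cost_sp,  qaly_sp  = rng.normal(3800, 2000, 141), rng.normal(0.62, 0.25, 141)
cost_tau, qaly_tau = rng.normal(3670, 2000, 106), rng.normal(0.59, 0.25, 106)

threshold = 20_000   # willingness to pay per QALY (NICE-style threshold)
n_boot = 5_000
nmb_positive = 0

for _ in range(n_boot):
    # Resample each arm with replacement
    i = rng.integers(0, len(cost_sp), len(cost_sp))
    j = rng.integers(0, len(cost_tau), len(cost_tau))
    d_cost = cost_sp[i].mean() - cost_tau[j].mean()
    d_qaly = qaly_sp[i].mean() - qaly_tau[j].mean()
    # Net monetary benefit: value of the QALY gain minus the extra cost
    if threshold * d_qaly - d_cost > 0:
        nmb_positive += 1

print(f"Probability cost-effective at £{threshold:,}/QALY: {nmb_positive / n_boot:.0%}")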
 
They «address» it towards the end of the economic analysis

Oh, I see--thanks. Economics uses different methods. Ok, I get that.

This seems self-serving to me:

"Another important contributor to the different findings is likely to be that the primary outcome of clinical effectiveness was an overly narrow view of the potential benefits of physiotherapy. The secondary outcome, the patient-reported Clinical Global Impression Improvement score, allows for a broader assessment of potential impacts that specialist physiotherapy can have across various aspects of patients' lives, including the cost impact, thus capturing a wider range of outcomes. It is notable that consensus recommendations for outcome measures in FND (published after this trial was planned) have recommended the patient-reported Clinical Global Impression of Improvement as the primary outcome measure in trials of interventions in FND."

The CGI-I is basically, do you feel improved or have your symptoms improved, and you get five choices. So in that sense I guess it covers a lot of ground, like they say. But that's because it's pretty vague and non-specific. It doesn't seem to provide any yardstick beyond your prior assessment. In contrast, the SF-36 asks very specifically about a range of activities and whether you can perform them, and you rate each one, and there's a composite score that they can compare against other populations. While both are subjective and self-reported and subject to bias, it seems like one that requires you to think about each activity and rate it has some advantages over one that just says, essentially, Do you feel better than before?
 
Warning - some comments about a person's death that may be interpreted as flippant, although I intend the opposite.

main paper abstract said:
one death occurred in the specialist physiotherapy group (cause of death was recorded as suicide). All were considered unrelated to specialist physiotherapy.
It seems to me that the person who committed suicide was not having a great quality of life. They were in the specialist physiotherapy group. I think it is highly debatable as to whether their death was related to a treatment that essentially aimed to have them believe that they could get better if they wanted to. At best, it did not stop them killing themselves.

Dave Tuller said:
As described in the paper, the Physio4FMD intervention sought to focus on the factors presumed to be driving the symptoms, such as paying excessive attention to symptoms, and had three broad goals: “to help patients understand their symptoms; to retrain movement with redirection of attention away from focusing on their body; and to develop self-management skills.” The approach had undergone extensive development in the years before the trial.

If the aim is to help a patient in your care 'develop self-management skills', how does having them choose to die not have a massive impact on any QALY calculation or cost-benefit calculation? Surely someone having zero quality of life for the rest of their life has to affect that? Not to mention the impact on their family.

It's becoming a horrible trend. There were the BPS treatments of MAGENTA, applied to children, which resulted in an attempted suicide. And now this suicide in a BPS treatment trial. It shouldn't come as a surprise: gaslighting is not known for its benefits on quality of life.
 
Oh, I see--thanks. Economics uses different methods. Ok, I get that.
I'm not sure if that comment is sarcastic, Dave?

If it isn't, it should be. An economic model can potentially make all sorts of assumptions, some good and some bad. It is not a requirement to fish around, torturing data and massaging numbers to come up with a QALY slightly more than zero so the maths works. Of course, if you have a strong incentive to find something works, there are ways to get the answer you want. It doesn't make it good practice.
 
If the aim is to help a patient in your care 'develop self-management skills', how does having them choose to die not have a massive impact on any QALY calculation or cost-benefit calculation? Surely someone having zero quality of life for the rest of their life has to affect that? Not to mention the impact on their family.

It's becoming a horrible trend. There were the BPS treatments of MAGENTA, applied to children, which resulted in an attempted suicide. And now this suicide in a BPS treatment trial. It shouldn't come as a surprise: gaslighting is not known for its benefits on quality of life.
There's a very odd vibe over quality of life and how despite claims of curative treatments, the ideologues always fall back onto things like quality of life when it all fails. Even though they never actually improve any of it. Not for real. Only appearances on cheap biased questionnaires.

When you examine our quality of life, including people labeled FND, it's among the lowest levels of quality of life out there, and it hardly seems to bother anyone involved in this. As long as someone can report a slight uptick of 2-3 points on whatever scale, it's basically as if they solved it. That scale could be a thousand points and they'd still argue that this tiny improvement should be interpreted like the first rising part of an exponential curve that will hit max level shortly, right around the corner, inevitable.

It's been a while since I started just assuming they're all either lying or delusional, or probably a combination of both. Everyone involved in this. I don't know how else to interpret pretending to tweak at the margins of something this significant, and never being bothered by the sheer mass that they ignore.

That's just the natural outcome of a system built on lies and delusional fantasies. As reality keeps on disagreeing with their models, they have to increasingly ignore reality and fall back onto trivial nonsense. But among professionals, this stuff will always be presented as curative treatments that are effective for anyone who truly wants to 'get better'. They hold on to the blatant and disgusting lies that tens of millions of people have no quality of life, just because we can't accept having psychological disorders and follow the easy instructions out of it.

This is the ultimate delusion, the broader profession pretending like those delusions are anything but that. But the lies have piled up so high over decades, a literal mountain made of millions of destroyed lives, that it's not possible to walk any of it back. So they just keep piling and piling because the number of people who truly care and understand, outside of patients and the few loved ones that stick around and care, probably fit in a bus.

Sorry, maybe a bit lugubrious, but this is just the sober reality of what we are dealing with. The scale of it. The sheer monstrosity of what those people do in complete, and smug, obliviousness.
 