Barry
Senior Member (Voting Rights)
The issue of GET and safety has of course been much discussed, especially regarding the PACE trial authors' claims to have demonstrated its safety, versus the good quality anecdotal evidence to the contrary.
There is however an aspect I'm not sure has been broached, and which I would like to be sure is considered. If it has already been addressed, then better it's done twice rather than not at all. The safety or otherwise of GET is of course a crucial consideration for the new NICE guideline, which is particularly why I'm posting this.
(If you prefer not to wade through all the words here, jump to the last two paragraphs!)
Safety margins. Imagine a black box system, and varying one of its inputs through its range, looking at the effect on the system's output(s). Imagine the system is in a safe state, but that the input is capable of values that can put it into an unsafe state. Now vary the input progressively from its safe value towards an unsafe value; there will be some value at which the system is deemed to be definitely in an unsafe state. Let's suppose for example that the input started at a value of 5, and the system became unsafe when it reached a value of 10. (All the numbers here are arbitrary, purely by way of example.)
This does not mean that the input can be reliably taken to 9.99 and all presumed still safe, even for the single system being tested. There will likely be a value significantly below 10 that is deemed the maximum at which the system can still be considered safe; let's say 9 for example, implying a safety margin of 1.0. Each type of system will be different, and each instance of a system within its type will vary.
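Purely by way of illustration (using the same arbitrary numbers as above; the names are just mine for this sketch, nothing from PACE or NICE), the single-input idea looks something like this in Python:

```python
# Arbitrary illustrative numbers from the example above, not real limits.
UNSAFE_THRESHOLD = 10.0   # value at which the system is definitely unsafe
SAFETY_MARGIN = 1.0       # how far clear of that threshold we insist on staying

def max_safe_value(unsafe_threshold: float, margin: float) -> float:
    """Highest input value we are prepared to call 'safe'."""
    return unsafe_threshold - margin

def is_considered_safe(value: float) -> bool:
    # Taking the input to 9.99 is NOT considered safe, even though it is
    # below the unsafe threshold of 10; only values up to 9 count as safe.
    return value <= max_safe_value(UNSAFE_THRESHOLD, SAFETY_MARGIN)

print(is_considered_safe(5.0))   # True  - the starting value in the example
print(is_considered_safe(9.0))   # True  - right at the edge of the margin
print(is_considered_safe(9.99))  # False - inside the margin, so not deemed safe
```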
In reality we will not be talking about a single input but multiple inputs, with a corresponding geometric increase in complexity, especially if the system is nonlinear. Instead of a single value being varied, we have the notion of a performance envelope: the limits beyond which things become unsafe, and the safety margins needed within that envelope to remain safe.
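Again only a sketch (the input names and limits are invented for illustration), with multiple inputs the "safe" question becomes a check against a whole envelope rather than a single number:

```python
# Invented illustrative envelope: each input has its own unsafe limit and margin.
ENVELOPE = {
    # input_name: (unsafe_limit, safety_margin)
    "input_a": (10.0, 1.0),
    "input_b": (50.0, 5.0),
    "input_c": (3.0, 0.5),
}

def within_safe_envelope(inputs: dict[str, float]) -> bool:
    """True only if every input stays inside its margin-reduced limit.

    A real nonlinear system would also need checks on combinations of
    inputs, which is where the complexity really grows; this sketch only
    checks each input independently.
    """
    return all(
        inputs[name] <= limit - margin
        for name, (limit, margin) in ENVELOPE.items()
    )

print(within_safe_envelope({"input_a": 5.0, "input_b": 30.0, "input_c": 2.0}))  # True
print(within_safe_envelope({"input_a": 9.5, "input_b": 30.0, "input_c": 2.0}))  # False
```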
The determination of such safety margins will be a combination of theoretical analysis and safety trials on real systems; such systems are invariably too complex to rely on theoretical analysis alone, which at best will always be an approximation.
At some point the safety trials have to be designed. I'm not claiming to be any sort of expert on this, but I think it highly likely that such safety trial design will be heavily reliant on the theoretical understanding that does exist, and, just as importantly, on recognition of what sorts of things are not fully understood but nonetheless need to be tested.
The most dangerous mistake to make would be to think you understood the system behaviours well when in fact you did not, and so designed your safety trials on that flawed understanding. You may then design your safety trials woefully inadequately, whilst fondly believing you have covered all the bases. Your system under test might successfully pass your badly designed safety trials, yet that same system might become very unsafe in the real world. Even if you run plenty of instances of your system type through these safety trials, the flawed design means they might all pass with flying colours, even though unsafe in the real world.
So if the system type in question is the human body, one of the most complex and nonlinear systems there is, your safety trial needs to be designed with high awareness of what is understood, and especially of what is not fully understood but needs testing. If you have a hypothesis of what condition the intervention is operating on, then that can guide what testing might be needed in order to assert the intervention is safe. But if that hypothesis is wrong, woefully, hopelessly wrong, then the resulting testing cannot possibly be used to assert the intervention is safe, because the safety testing will be as flawed as the hypothesis the whole thing is based on.
The PACE trial authors assert the safety of the trial's interventions based on safety testing that is itself flawed, given it is based on their flawed hypothesis of what perpetuates ME/CFS: deconditioning. Once the real physiology of ME is understood, it will almost certainly become clear that the PACE trial missed testing all sorts of safety aspects of GET for PwME. The anecdotal evidence of harms tells us this. You simply cannot reliably design safety testing based on a seriously flawed hypothesis.
I'm sure this can be explained much more succinctly. I obviously don't have the medical expertise, and there will also be much better qualified engineers than me who could do a better job. But I do think we need to get across to NICE why the PACE trial simply cannot be taken as any kind of reliable evidence for the safety of GET. And to me the flawed hypothesis is at the heart of that: how can safety be reliably assessed on that basis?
ETA: Minor edit for clarity.