So they got some minor improvement on some of the scales, but nothing in the only objective outcome measure related to functioning.
The improvement in fatigue was apparent on the FACIT fatigue scale, Bell scale, FSS, and CCC, yet not on the Chalder fatigue scale, DSQ-PEM, and Borg scales. This data demonstrates clear differences between the different fatigue questionnaires. In addition, patients did not show a significant increase in their walking distance, as measured by the 6MWT.
None of these graphs look impressive and they are surely all clinically irrelevant, despite the selection bias of only showing those things that even had a positive result.
MCID is not meaningful if the given theory only applies to a subset of the patients.
A Reddit user's criticism of the study:
Link to Reddit
Copying the main bulletpoint headings:
1. The reported improvements, while statistically significant, are small and clinically of no significance to patients (MCID not met)
2. The primary endpoint of the study was safety (TEAEs), not efficacy. Nevertheless, the supposed effectiveness is strongly concluded and the TEAEs are only mentioned in passing.
3. In the side effect analysis / AE analysis, there is also a lack of transparency and causal assessment.
4. Multiple testing and selection bias were not taken into account.
5. Not all outcomes or endpoints were reported in the study, even though that should actually be the content of the study.
The second point is totally wrong; private investments by BC have been in the double digits of millions for their phase 2 alone.
Berlin Cures has to be the happiest pharmaceutical company of all time. Not a dime invested in drug development, patients are happy to finance your studies, and patients and researchers will do all the marketing for you despite trials not showing anything meaningful.
Sorry, I was referring to pre-clinical research etc., I should have been clearer. They indeed had to invest millions to conduct a phase 2 study.
Of course it is a meaningful argument, because the authors provide no arguments or evidence for subgroups. You either have to have a sufficient sample size (and the much larger study had negative results on precisely these outcomes, so this argument becomes less relevant rather than the opposite) or devise a mechanistic understanding of what the subgroups are. Otherwise you just always end up talking your way out by referencing subgroups: "GET only works for subgroups, homeopathy only works for subgroups...." Said plainly: you cannot argue for efficacy but at the same time say there can't be efficacy because of subgroups. There may not be efficacy due to subgroups, but the result is then that there is no efficacy in the population you studied, not the opposite!
These are all moot points. There's no evidence to suggest that these antibodies play a role. You cannot argue that this is because the tests are ineffective, because then you have no evidence in the first place (and of course ELISA has been known for several decades to be able to differentiate between the antibody levels of people with an antibody-mediated autoimmune disease and healthy controls, which is largely what the narrative has been). It's like saying: "ME/CFS is caused by particle x based on my results, but my results don't show it because I can't measure particle x." Perhaps one may then argue that more research such as the work being done by Dmitry Veprintsev should have been conducted, rather than what we've seen.
We know that the blood test, even the functional one, is lacking. There's a new one in development based on pluripotent stem cells, and I hope that we will also get blood measurements with proper controls in the future.
There are simply too many unknowns regarding the blood test; for example, the threshold for detection is quite high compared to ELISA. It could also be that you need several types of autoantibodies that interplay with each other, like the combination of ß2 and M2. One point in favour of this theory is that COVID actually can or does induce these autoantibodies in cattle and ferrets.
Selective reporting of outcome measures seems like bad practice and I think it should be heavily critiqued, as it has been elsewhere. Especially when the authors explicitly mention microcirculation in the paper, hypothesize about the role of microcirculation as part of the paper, and cite previous work done on one person, but don't mention the actual data from the study. It would have been sufficient to simply include the following in the supplementary material: "Other outcome measures are still being analysed and will be part of upcoming publications."
The critique about other outcomes like the OCT measurement is also not well-founded in my opinion; this first result was rushed out after the bankruptcy of Berlin Cures. However, proper analyses and writing papers take time. Everyone who has worked in science knows this. I would be surprised if we do not get the data from measurements like OCT in the future.
I would also add that the Reddit user has been part of the study and did not see a benefit.
Sure. That will have influenced them to write this, but it shouldn't impact their arguments.
Unfortunately I cannot answer at the level of detail (blood tests) that would be needed, since I don't agree.
Everyone has his or her own favorite theory, and I think I've been in science long enough to judge this. My impression is that this forum also has its personal favorites, e.g. linked to the work of clinicians who are part of this forum. Which is completely fine and reasonable.
Maybe the term 'clinicians' was too precise; I mean people involved in studies like DecodeME.
I don't see the forum as having favourite theories. I don't know of any clinicians on the forum working on ME/CFS, so I am not sure what that refers to. I certainly don't have a favourite theory. But I do have views on theories that don't add up.
The idea that anti-GPCR antibodies cause ME/CFS or LC symptoms does not add up. If they did, then certain levels of antibodies would pretty much always be associated with symptoms.
So if you compared antibodies above level L in 100 patients and 100 controls you would get a figure something like:
Patients +ve: 37
Controls +ve: zero, or maybe 2 if you allow for some technical problems.
The actual findings are more like:
Patients +ve: 37
Controls +ve: 33
Although some statistically significant differences have been reported, that is not what you need to make a theory viable. If X causes Y then no healthy people without Y will have X.
It is true that autoantibodies are a bit more complicated, because our tests have never been that good at picking out pathogenic antibodies from antibodies that bind similarly in tests but don't cause problems. So things are messier. The control rate above could be as much as 7, but in diseases where we have other evidence for antibodies being pathogenic, rates in patients are at least five times greater than in controls, especially if you choose a high threshold level. The data we have on anti-GPCR antibodies say firmly that these are not likely to be pathogenic. I have never seen control data on other randomly sampled autoantibodies in these patients, but for all sorts of reasons you are likely to get a statistically significant difference for just about any autoantibody in a group of ill people compared to controls.
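To make that contrast concrete, here is a minimal Python sketch. It only uses the illustrative 100-versus-100 counts from the hypothetical above (37 vs 2 for a clearly pathogenic antibody, 37 vs 33 for the kind of result actually reported), not real study data:

# Minimal sketch: contrast a hypothetical "pathogenic antibody" pattern
# with the pattern reported for anti-GPCR antibodies.
# Counts are the illustrative 100-vs-100 figures from the post above, not study data.

def discrimination(patients_pos, controls_pos, n=100):
    """Return positivity rates, their ratio, and the odds ratio for n patients vs n controls."""
    p_rate = patients_pos / n
    c_rate = controls_pos / n
    rate_ratio = p_rate / c_rate if c_rate > 0 else float("inf")
    odds_ratio = (
        (patients_pos * (n - controls_pos)) / (controls_pos * (n - patients_pos))
        if controls_pos > 0 else float("inf")
    )
    return p_rate, c_rate, rate_ratio, odds_ratio

for label, pts, ctl in [("expected if pathogenic", 37, 2),
                        ("roughly what is reported", 37, 33)]:
    p, c, rr, orr = discrimination(pts, ctl)
    print(f"{label}: patients {p:.0%}, controls {c:.0%}, "
          f"rate ratio {rr:.1f}, odds ratio {orr:.1f}")

# expected if pathogenic: patients 37%, controls 2%, rate ratio 18.5, odds ratio 28.8
# roughly what is reported: patients 37%, controls 33%, rate ratio 1.1, odds ratio 1.2

The reported pattern gives a rate ratio close to 1, nowhere near the roughly five-fold (or greater) separation seen for antibodies with good independent evidence of being pathogenic.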
And as far as I can see the 'blocking' power of the drug is based on the same assays that perform so poorly on discriminating patients.
I think for the functional test we do not have any control data, except for two persons who were positive.