Efgartigimod (Vyvgart) - what could the trial data possibly tell us?

If this was mainly for POTS, presumably only some patients happened to have PEM. Unless they screened for that specifically, I don't think the data is very interesting.

ChatGPT says the drug reduces IgG levels, which is interesting though.
 
Since it seems that just as many people in the placebo group got better, you currently probably have to assume that roughly half of the people reporting improvements that aren't based on the open-label part actually didn't receive the drug; and the ones reporting improvement only from the open-label part of course needn't tell us anything.
Only a third of the participants didn't receive the drug during the blinded trial. After the blinded trial, the participants were given open label access. I'm not sure if people were told what arm they had been in then or not.

Four of the ten people mentioned in the article reported that they only started getting better during the open label part of the trial. They may well have been part of the third who got the placebo in the blinded part of the trial. But they may also have been part of the two thirds who got the drug in the blinded part of the trial.
 
Indeed, I just re-noticed the one-third vs one-half split. From what I've heard nobody was unblinded. Sure, or they may all have already got the drug; the chances of that are higher. One day we'll know.
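The arithmetic behind "the chances of that are higher" can be sketched like this (purely illustrative: it assumes a simple 2:1 drug-to-placebo randomisation and treats the four late responders as independent random draws, which is of course a simplification):

```python
# Illustrative only: assumes a 2:1 drug:placebo randomisation and that
# the four people who only improved during the open-label phase were
# assigned to arms independently at random.
p_placebo = 1 / 3
p_drug = 2 / 3

# Chance that all four late responders were in the placebo arm
# during the blinded phase:
all_placebo = p_placebo ** 4  # roughly 0.012

# Chance that all four had already received the drug while blinded:
all_drug = p_drug ** 4  # roughly 0.198

print(f"all four on placebo: {all_placebo:.3f}")
print(f"all four on drug:    {all_drug:.3f}")
```

Under these assumptions "all four already got the drug" is about sixteen times more likely than "all four got the placebo", though mixed cases are more likely than either extreme.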
 


FWIW, a Reddit comment about the study questionnaires from a participant.


I'm not saying this drug definitely worked but I think there are a lot of elements of this trial fiasco that fit with what we have talked about in terms of problematic trial design.

I think the NIH would do better to fund a new, better-designed phase 2 of this than, say, yet another viral persistence study.
 
If they truly believe the drug helped them, and the drug company are refusing to release the data, what else can they do but call for another trial?
From what I know the drug company cannot refuse to release the data. I think there's even a fairly short limit on how long data can be withheld; they are forced to release it by US law, and the problem is getting timely compliance. There are probably many grey zones, and it will be a lengthy and unrewarding process that shouldn't even exist in the first place because the data should just be made available, but I think getting compliance will probably still be a lot easier, quicker and cheaper than redoing the trial. Argenx has not released the data because it is negative, not for other reasons.

Other than that I wouldn't be surprised if someone like Federowski has actually seen the data. So there possibly are already people in the know.
 
Is there any reasonable chance that they would respond to a request by e.g. @Jonathan Edwards to view the data without being allowed to share it? At least we would know if it’s worth pursuing rapid publication.
 
Maybe, but I don't see why. Why would they care? I could obviously be wrong, but my guess is the following: the trial didn't only show that placebo and treatment were equally effective, but that placebo even did better than treatment. As Hutan has mentioned, companies tend to torture data even when it is negative and make a press release sound impressive. The press release here has been extremely short.
 
Wasn't the trial terminated in the middle of the open-label phase? I'm no expert, but you tend to only do that when you are convinced there is no efficacy, precisely because most costs have already been incurred.

I can understand people talking about problematic trial design, and that is justified if the arguments are reasonable, but none of the points brought up in the article are well founded. It's just the repetition of the same nonsense we see again and again. After all, the most problematic part is that it is a "Long-Covid POTS study", which is what the article leaves completely unaddressed.
 
If the drug had a significantly useful effect I think it very unlikely that questionnaires would fail to pick up a signal even if they were less than ideal.
I have some thoughts on this I’ve been kicking around and this seems like as good of a thread as any to share them. I have long COVID with primarily ANS and neuro symptoms. I feel awful, but it’s really hard to put many of my symptoms into words. The worst symptoms are a collection of feelings which I had never felt before LC. I could say malaise, but it really doesn’t feel like any other malaise I’ve experienced. It would be a meaningless word in much the same way “inflammation” becomes meaningless when it’s used so broadly it doesn’t really mean inflammation anymore.

I’ve had a good response to Pemgarda with similar timelines following two infusions. However, a positive response short of a full recovery is so much more difficult to subjectively measure than I would have ever anticipated, and that is with the benefit of experiencing it and feeling it first hand. I can only imagine how much murkier it gets when you try to quantify things like malaise with a questionnaire. I’m skeptical that we can really measure improvement in this way. I honestly wonder if gauging a patient’s desire to continue the treatment, without any other measurements, would be more effective.

What I didn’t anticipate is that when you feel better, your mind starts playing tricks on you. You quickly forget how bad you felt a few weeks ago and you’re not a reliable narrator. In my case, it’s almost like my brain prevents me from fully recalling the depths of suffering. I’m aware it happened, but my mind will not relive it to allow me to compare. Then, if both the disease and the treatment include frequent fluctuations in symptoms, it becomes very hard to get a handle on how bad you felt last month v. today.

In my case, I had some immediate improvements that felt pretty objective. Like being able to bend over without feeling like I would pass out for the first time in two years. Or suddenly being able to drink coffee without waves of nausea. Even then, I started to question the effect a few months later as symptoms persisted and continued to wax and wane. It just becomes very hard to remember and compare the level of suffering before and after treatment even when you’ve been incredibly unwell.

I’m not sure the above really does a good job at translating the patient experience I’m trying to convey. It’s just incredibly hard to put into words. However, I really think this is an important idea to consider. Put simply, something in the range of a 50% improvement is not nearly as black and white as one would imagine, even when it’s happening in your own body. If I was asked to complete a questionnaire, and even assuming they asked the right questions, I think there is a very good chance that it would miss what have been very real benefits.
 
Purely out of curiosity I filed an FDA complaint and swiftly received the response that such complaints can't be dealt with since certain federal government activities have ceased due to lack of appropriated funding. I know that Dysautonomia International already did a lot of lobbying to have the data released a year ago. Maybe there really isn't much left that can be done.
 
Put simply, something in the range of a 50% improvement is not nearly as black and white as one would imagine, even when it’s happening in your own body.

Even if there is a huge amount of noise in these assessments, if there is no difference between treatment and placebo I think it is unlikely that much is happening. I spent a lot of time asking patients to mark on a 10 cm visual analogue scale how they were compared to baseline. It is clearly difficult to have any precision, but when a treatment works the average change is different from placebo.

If everyone answered that they weren't much different because it was too hard to say then some subtle shift might have been missed. But the rituximab trials showed that people often mark major improvements - in both treatment and placebo groups. In that context it is hard to justify desire to continue treatment as an endpoint.
 
I think you raise some very valuable points. The other side of the coin is that the picture painted in the article is somewhat different: the people are saying it is not hard to remember and compare the level of suffering before and after treatment, but rather that they've seen drastic changes. I find it very possible in general that such drastic changes would not be captured by certain subjective outcome measures if, as you describe, there are ceiling effects to questionnaires or some adaptation to the new circumstances.

On the other hand, symptom scores such as the MALMÖ were explicitly designed to track POTS changes, so if people don't feel palpitations or dizziness anymore, that will not be missed. They also had TTT data to analyse, and a whole host of questionnaires and sub-questionnaires. It's fair to suggest that the questionnaires and the TTT don't capture the changes in this population adequately, but I think the crucial point is that you then have to argue that the problem is the POTS Long-Covid category in itself, with a population too diverse to meaningfully capture anything, and that is not what is being done in the critiques. They are saying it worked because people say it worked, even when they got the placebo.
 
Looking at the data that has been released on the 22 people in the drug group and 15 people in the placebo group, these are the changes:
- Both groups got better on COMPASS, but the positive changes were slightly more pronounced in the placebo group
- Both groups got better on the MALMÖ, but the positive changes were more pronounced in the treatment group; the difference between groups was very small in comparison to the "placebo effect"
- Both groups got better on the PROMIS fatigue scale, but the positive changes were more pronounced in the treatment group
- Both groups got better on PROMIS cognitive function, but the positive changes were more pronounced in the placebo group
- The percentage of people with improved PGI-C at week 24 is higher in the treatment group, but neither group is particularly high (53% vs 65%)

I think it would be very interesting to know in exactly which domains there were changes in the MALMÖ (quite possibly fatigue made a difference here, or is it something like palpitations?), but at least this data makes it look like the difference in benefit between placebo and treatment isn't that large. Hopefully there will be access to all of the data.
 