Vagus nerve-mediated neuroimmune modulation for rheumatoid arthritis: a pivotal randomized controlled trial (2025, Tesser et al.)

Jaybee00

Senior Member (Voting Rights)

Abstract

The inflammatory reflex, in which vagus nerve signaling modulates cytokine production, is dysregulated in rheumatoid arthritis (RA). RESET-RA, a pivotal, double-blind, randomized, sham-controlled trial, evaluated a vagus nerve-targeted neuromodulation system for RA in 242 patients with inadequate response/intolerance to biological/targeted synthetic disease-modifying antirheumatic drugs. Patients were randomized to active or sham stimulation for 3 months, and then all received open-label stimulation with results reported to 12 months. The primary end point was 3-month American College of Rheumatology 20% (ACR20) response. ACR20 rates were higher with active stimulation than with sham at 3 months (35.2% versus 24.2%, P = 0.0209), which further improved in open-label to 50.0% at 6 months and 52.8% at 12 months (all-completers). Adverse events occurred in a similar proportion of patients in both arms. Related serious adverse events (rate = 1.6%) were all perioperative, and resolved. Vagus nerve-mediated neuroimmune modulation for RA achieved its primary efficacy end point and produced durable clinical benefits with a favorable safety profile. ClinicalTrials.gov registration: NCT04539964.

 
An 11-percentage-point difference in ACR20 response (35.2% versus 24.2%) is pretty feeble. The real question is whether the blinding was in fact adequate (the unblinded phase gave higher rates of ACR20) or whether there is a modest biological effect from vagal stimulation. I am not going to hold my breath.
 
Fig. 2: Integrated neuromodulation system.
The integrated neuromodulation system consists of an implant and pod. The implant is placed in the pod to position and hold it in place on the left cervical vagus nerve to ensure direct contact for precise stimulation. The implant is approximately 2.5 cm in length and weighs 2.6 g. To charge the implant, patients wear a wireless device (charger) around the neck for a few minutes, once a week. The implant is programmed by healthcare providers (HCPs) using a proprietary application (programmer).

It was not much stimulation, only 1 minute daily:
The active stimulation intensity was set to an upper comfort level (maximum = 2.5 mA) and delivered a 1-min train of pulses to the vagus nerve once daily at 10 Hz (arm 1 = 1.8 mA average; arm 2 = 0 mA).
They were allowed to use other treatments after the 3 month blind period:
Following the primary end point assessment at 3 months, all patients were eligible to continue in the study for open-label active stimulation treatment. Adjunctive pharmacological treatments (‘augmented therapy’) were permitted throughout the open-label stimulation period at the discretion of the rheumatologist in consultation with the patient, with 17.8%, 24.8% and 32.2% of patients receiving protocol-defined augmented therapy at 6, 9 and 12 months, respectively. At these timepoints, 88.0%, 80.6% and 75.2% of patients remained free from adjunctive b/tsDMARD therapy.
They used Bang’s blinding index to assess blinding. I’m not very familiar with the index, but a visual inspection of the data shows what I believe is a clear skew towards guessing correctly in both groups:
[Attached screenshot: distribution of patients’ blinding guesses by treatment arm]
 
They used Bang’s blinding index to assess blinding. I’m not very familiar with the index, but a visual inspection of the data shows what I believe is a clear skew towards guessing correctly in both groups:
This is good to see. I’m surprised the authors wrote “pivotal” in their title, since this data plus the open-label data seem to clearly indicate that the trial is not enough to establish whether there is an effect or not.
 
At this point I’ve stopped being surprised at the dishonesty in science. Everything is propaganda unless proven otherwise.
Yes, my “surprise” was rhetorical: a hedged and euphemised way of saying that the authors seem to come to a conclusion explicitly not supported by their evidence.

Unfortunately, outside this community I have to find ways to hedge and soften my critiques, because everyone seems to take anything published in a paper as gospel. (I imagine if I had the energy of a healthy person I could be more outspoken, but I never really have the energy to debate, so I try not to provoke them, haha.)
 
Yes, my “surprise” was rhetorical: a hedged and euphemised way of saying that the authors seem to come to a conclusion explicitly not supported by their evidence.

Unfortunately, outside this community I have to find ways to hedge and soften my critiques, because everyone seems to take anything published in a paper as gospel. (I imagine if I had the energy of a healthy person I could be more outspoken, but I never really have the energy to debate, so I try not to provoke them, haha.)
Sounds reasonable. I have the same experience! It’s incredibly frustrating.
 
They used Bang’s blinding index to assess blinding. I’m not very familiar with the index

Assessment of blinding in clinical trials (2004)

Success of blinding is a fundamental issue in many clinical trials. The validity of a trial may be questioned if this important assumption is violated. Although thousands of ostensibly double-blind trials are conducted annually and investigators acknowledge the importance of blinding, attempts to measure the effectiveness of blinding are rarely discussed. Several published papers proposed ways to evaluate the success of blinding, but none of the methods are commonly used or regarded as standard.

This paper investigates a new approach to assess the success of blinding in clinical trials. The blinding index proposed is scaled to an interval of −1 to 1, 1 being complete lack of blinding, 0 being consistent with perfect blinding and −1 indicating opposite guessing which may be related to unblinding. It has the ability to detect a relatively low degree of blinding, response bias and different behaviors in two arms.

The proposed method is applied to a clinical trial of cholesterol-lowering medication in a group of elderly people.

Web | Controlled Clinical Trials | Paywall
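
For those (like me) who weren’t familiar with it, the per-arm index described in that abstract boils down to the proportion of correct guesses minus the proportion of incorrect guesses within an arm, so “don’t know” answers pull the value towards 0. A minimal sketch of that calculation (the counts are made up for illustration, not taken from the trial):

```python
import math

def bang_blinding_index(n_correct, n_incorrect, n_dont_know, z=1.96):
    """Per-arm Bang blinding index.

    Guesses are classified against the arm's true assignment:
    n_correct guessed their own arm, n_incorrect guessed the other arm,
    n_dont_know answered "don't know". Range -1 to +1: 0 is consistent
    with blinding, +1 is complete unblinding, -1 is opposite guessing.
    """
    n = n_correct + n_incorrect + n_dont_know
    p_c, p_i = n_correct / n, n_incorrect / n
    bi = p_c - p_i
    # Multinomial variance of the difference between the two proportions.
    var = (p_c * (1 - p_c) + p_i * (1 - p_i) + 2 * p_c * p_i) / n
    half_width = z * math.sqrt(var)
    return bi, (bi - half_width, bi + half_width)

# Hypothetical arm of 100 patients: 55 correctly guess their own arm,
# 25 guess the other arm, 20 say they don't know.
bi, ci = bang_blinding_index(55, 25, 20)
print(f"BI = {bi:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")  # BI = 0.30
```

One consequence of this construction is that an index near 0 can reflect either genuinely random guessing or a lot of “don’t know” answers, which is worth keeping in mind when reading the trial’s supplementary table.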

A simple blinding index for randomized controlled trials (2024)

Blinding is an essential part of many randomized controlled trials. However, its quality is usually not checked, and when it is, common measures are the James index and/or the Bang index. In the present paper we discuss these two indices, providing examples demonstrating their considerable weaknesses and limitations, and propose an alternative method for measuring blinding. We argue that this new approach has a number of advantages. We also provide an R-package for computing our blinding index.

Web | Contemporary Clinical Trials Communications | Open Access
 
At this point I’ve stopped being surprised at the dishonesty in science. Everything is propaganda unless proven otherwise.
Oh, I don't know, I don't think fraud should be categorized as propaganda. There's lots of fraud, too. So much fraud.

Obviously completely unrelated to widespread loss of trust in experts. No, see, it's the TikToks, has to be! It's not as if blending pseudoscience and science could actually backfire, it's all well-meaning and so on and so forth.
 
Might be of interest to @Jonathan Edwards
 
a visual inspection of the data shows what I believe is a clear skew towards guessing correctly in both groups:
Was thinking that if the treatment really worked it would probably also skew the results in the intervention group, with more people correctly identifying they were getting the treatment and not the inactive control.

This wouldn't explain why 40% in the sham group strongly believed they were in the sham group. But perhaps if the intervention didn't work at all, the proportion believing they were getting a sham in the intervention group would also be quite high and closer to 40%. In other words, I'm not sure if we should expect a random or equal distribution of guesses in the sham group.
 
Was thinking that if the treatment really worked it would probably also skew the results in the intervention group, with more people correctly identifying they were getting the treatment and not the inactive control.
Yes, it seems to me that people should have been asked what treatment they were on just a couple of days into the treatment. That would be more likely to pick out a problem in blinding. Assessing which treatment people believe they are getting even at one month would surely be confounded by an effective treatment, especially if word got around that some people were improving on the trial.

I think it should have been possible for there to be effective blinding, what with the implanting of the stimulator.

My set point on vagus stimulation is that it is probably ineffective (anything that clips on an ear is almost certainly ineffective). But I don't think those 1-month results necessarily show that blinding was inadequate.

(sorry, edited/added more)
 
Was thinking that if the treatment really worked it would probably also skew the results in the intervention group, with more people correctly identifying they were getting the treatment and not the inactive control.
If we go by JE’s assessment above, it looks like it didn’t really work, which strengthens the argument that the intervention effectively broke the blinding in the intervention group.
But perhaps if the intervention didn't work at all, the proportion believing they were getting a sham in the intervention group would also be quite high and closer to 40%.
Exactly. And it doesn’t seem like it worked in any particularly meaningful way, which would further strengthen the hypothesis that the blinding was broken by the intervention itself rather than by its effect.
In other words, I'm not sure if we should expect a random or equal distribution of guesses in the sham group.
If you expect the intervention to work and take the lack of an effect to mean that you’re in the sham group, sure.

But you could also say that the lack of an effect might be because the intervention didn’t work, and your guess would be 50/50 between sham and intervention in the sham group.

Alternatively, you could reason that people in the intervention group are less likely to experience no effect (because sometimes the intervention actually works), so if your aim is to guess correctly as often as possible and you experience no effect, you should guess that you’re in the sham group.

I agree with Hutan that an early assessment would have been good, ideally combined with a late assessment as well.
 
The authors are arguably hiding disadvantageous information:
Bang’s blinding index scores were <0.3 for patients, joint assessors and co-investigators, which indicated satisfactory blinding at the time of primary end point assessment (Supplementary Table 4).
For all three groups of assessors, there were instances of the BI being ≥0.3 at 1 month.

The patients’ BI values were also substantially higher than those of the evaluators and co-PIs, potentially indicating that the patients had more information about their allocation than the trial staff did. That would only be the case if the blinding was broken either by the device itself or by its effect on their health.
[Attached screenshots: blinding index values for patients, joint assessors and co-investigators at each timepoint]
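
To make that comparison concrete, here is a toy illustration with made-up counts (not the trial’s data) of the pattern being described: the patients’ index sitting above the 0.3 cut-off while the assessors’ values stay below it.

```python
# Hypothetical guess counts (correct, incorrect, don't know) for one arm
# at one timepoint -- illustrative numbers only, not taken from the paper.
counts = {
    "patients":   (60, 25, 15),
    "evaluators": (40, 30, 30),
    "co-PIs":     (38, 32, 30),
}

THRESHOLD = 0.3  # the cut-off the authors treat as "satisfactory blinding"

for role, (correct, incorrect, dont_know) in counts.items():
    n = correct + incorrect + dont_know
    bi = (correct - incorrect) / n  # Bang index: correct minus incorrect guesses
    verdict = "above" if bi >= THRESHOLD else "below"
    print(f"{role:11s} BI = {bi:+.2f} ({verdict} the 0.3 cut-off)")
```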
 