
The integrated neuromodulation system consists of an implant and pod. The implant is placed in the pod to position and hold it in place on the left cervical vagus nerve to ensure direct contact for precise stimulation. The implant is approximately 2.5 cm in length and weighs 2.6 g. To charge the implant, patients wear a wireless device (charger) around the neck for a few minutes, once a week. The implant is programmed by healthcare providers (HCPs) using a proprietary application (programmer).
They were allowed to use other treatments after the 3-month blind period:

> The active stimulation intensity was set to an upper comfort level (maximum = 2.5 mA) and delivered a 1-min train of pulses to the vagus nerve once daily at 10 Hz (arm 1 = 1.8 mA average; arm 2 = 0 mA).
They used Bang’s blinding index to assess blinding. I’m not very familiar with the index, but a visual inspection of the data shows what I believe is a clear skew towards guessing correctly in both groups:

> Following the primary end point assessment at 3 months, all patients were eligible to continue in the study for open-label active stimulation treatment. Adjunctive pharmacological treatments (‘augmented therapy’) were permitted throughout the open-label stimulation period at the discretion of the rheumatologist in consultation with the patient, with 17.8%, 24.8% and 32.2% of patients receiving protocol-defined augmented therapy at 6, 9 and 12 months, respectively. At these timepoints, 88.0%, 80.6% and 75.2% of patients remained free from adjunctive b/tsDMARD therapy.

> They used Bang’s blinding index to assess blinding. I’m not very familiar with the index, but a visual inspection of the data shows what I believe is a clear skew towards guessing correctly in both groups:

This is good to see. I’m surprised the authors wrote “pivotal” in their title, since this data, plus the open-label data, seems to clearly indicate that this trial is not enough to make out an effect or not.
> This is good to see. I’m surprised the authors wrote “pivotal” in their title, since this data, plus the open-label data, seems to clearly indicate that this trial is not enough to make out an effect or not.

At this point I’ve stopped being surprised at the dishonesty in science.. Everything is propaganda unless proven otherwise.
> At this point I’ve stopped being surprised at the dishonesty in science.. Everything is propaganda unless proven otherwise.

Yes, my “surprise” was rhetorical. A hedged and euphemised way to say that the authors seem to come to a conclusion explicitly not supported by their evidence.
> Yes, my “surprise” was rhetorical. A hedged and euphemised way to say that the authors seem to come to a conclusion explicitly not supported by their evidence.

Sounds reasonable. I have the same experience! It’s incredibly frustrating..
Unfortunately, outside this community I have to find ways to hedge and soften my critiques, because everyone seems to take being published in a paper as gospel. (I imagine if I had the energy of a healthy person I could be more outspoken, but I never really have the energy to debate, so I try not to provoke them ahah).
> At this point I’ve stopped being surprised at the dishonesty in science.. Everything is propaganda unless proven otherwise.

Oh, I don't know, I don't think fraud should be categorized as propaganda. There's lots of fraud, too. So much fraud.
> a visual inspection of the data shows what I believe is a clear skew towards guessing correctly in both groups:

Was thinking that if the treatment really worked it would probably also skew the results in the intervention group, with more people correctly identifying they were getting the treatment and not the inactive control.
> Was thinking that if the treatment really worked it would probably also skew the results in the intervention group, with more people correctly identifying they were getting the treatment and not the inactive control.

Yes, it seems to me that people should have been asked what treatment they were on just a couple of days into the treatment. That would be more likely to pick out a problem in blinding. Assessing which treatment people believe they are getting even at one month would surely be confounded by an effective treatment, especially if word got around that some people were improving on the trial.
> Was thinking that if the treatment really worked it would probably also skew the results in the intervention group, with more people correctly identifying they were getting the treatment and not the inactive control.

If we go by JE’s assessment above, it looks like it didn’t really work, which strengthens the argument that the intervention effectively broke the blinding in the intervention group.
> But perhaps if the intervention didn't work at all, the proportion believing they were getting a sham in the intervention group would also be quite high and closer to 40%.

Exactly. And it doesn’t seem like it worked in any particularly meaningful way, which would further strengthen the hypothesis that the blinding was broken by the intervention and not the effect of the intervention.
> In other words, I'm not sure if we should expect a random or equal distribution of guesses in the sham group.

If you expect the intervention to work and take the lack of an effect to mean that you’re in the sham group, sure.
> Bang’s blinding index scores were <0.3 for patients, joint assessors and co-investigators, which indicated satisfactory blinding at the time of primary end point assessment (Supplementary Table 4).

For all three measurements, there were instances of the BI being ≥0.3 at 1 month.
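Since the thread leans on Bang’s blinding index without anyone spelling it out: per Bang et al. (2004), the per-arm index is BI = (2·P(correct | guessed) − 1)·P(guessed), ranging from −1 (everyone guesses wrong) to +1 (everyone guesses right), with 0 consistent with random guessing. A minimal sketch of the calculation — the function name and the counts are made up for illustration and are not data from this trial:

```python
def bang_blinding_index(correct: int, incorrect: int, dont_know: int) -> float:
    """Per-arm Bang blinding index: (2 * P(correct | guessed) - 1) * P(guessed).

    Ranges from -1 (all wrong guesses) to +1 (all correct guesses);
    0 is consistent with random guessing, and values at or above the
    0.2-0.3 range are commonly read as evidence of unblinding.
    """
    n = correct + incorrect + dont_know
    guessed = correct + incorrect
    if n == 0 or guessed == 0:
        # No responses, or everyone answered "don't know": treat as no
        # evidence of unblinding.
        return 0.0
    return (2 * correct / guessed - 1) * (guessed / n)

# Made-up example: 60 correct guesses, 30 incorrect, 10 "don't know"
bi = bang_blinding_index(60, 30, 10)
print(round(bi, 3))  # 0.3
```

With these made-up counts the index lands exactly at the 0.3 threshold discussed above; a value near 0 would have supported intact blinding.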


