Trying to understand the drop-outs in this paper because, as Trish has pointed out, that seems to be the main issue in how the results are represented.
They say they had data from 995 patients and a drop-out rate of 31%, so you would think the main outcomes in this paper had around 687 patients who underwent CBT and filled in questionnaires. But if we look at the tables, the data for the main outcomes show that far fewer patients filled in the questionnaires.
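As a sanity check, that expectation can be worked through from the figures quoted above (the numbers are the paper's; the calculation is just illustrative):

```python
# Figures quoted from the paper
n_total = 995        # patients with data
dropout_rate = 0.31  # reported drop-out rate

# Number of patients you would expect to have completed
# the follow-up questionnaires if "drop-out" meant
# "provided no follow-up data at all"
expected_completers = round(n_total * (1 - dropout_rate))
print(expected_completers)  # -> 687
```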
For example, the self-reported improvement scores had data from only 365 patients. It would be good if the paper explained why the other 630 patients didn't fill in this questionnaire at follow-up. Was it, for example, because they only started handing out the questionnaire in a particular year and not before that?
A similar problem exists for the symptom questionnaires. For fatigue, for example, only 503 patients filled in the questionnaire at discharge while 977 filled it in at the start of treatment. So why did almost half of them (474 of 977) not fill in the questionnaire at discharge? At first, I thought it was because these patients might still be in treatment, but the Method section says that patients were excluded "if they were still in active treatment".
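The gap for the fatigue questionnaire can be made explicit with the two counts given in the tables:

```python
# Counts from the paper's tables for the fatigue questionnaire
n_baseline = 977   # filled it in at the start of treatment
n_discharge = 503  # filled it in at discharge

missing = n_baseline - n_discharge
missing_share = missing / n_baseline
print(missing, round(missing_share * 100, 1))  # -> 474 48.5
```

So nearly half of the patients who provided baseline fatigue data provided none at discharge.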
So I think the most likely explanation is that the authors considered someone a drop-out only if they failed to fill in any of the questionnaires. Some of those who didn't fill in the fatigue questionnaire at discharge might have filled in another questionnaire and so aren't counted as having dropped out.
That might be reasonable, but it's still strange that so many people filled in one of the questionnaires and not the others. I would think the main difficulty in ensuring follow-up is getting in contact with the patient and having them fill in the information. If you manage to contact a patient and have them fill in one short questionnaire, it wouldn't be a big deal to have them fill in another short one, like the Chalder Fatigue Scale that they filled in at the start of treatment. So it would be good if the paper could explain why this is the case with their data.
It also makes the data hard to interpret. When they say that "90% were satisfied with their treatment", we don't really know how many patients that represents. 90% of what? How many patients rated treatment satisfaction? The abstract also says that "85% of patients self-reported that they felt an improvement in their fatigue at follow-up" without clarifying that only about a third of the patients filled in this questionnaire at follow-up, and that those who didn't respond might have had a different opinion.
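A hypothetical illustration of how much the denominator matters here, assuming the improvement question had the same 365 respondents as the self-reported improvement scores:

```python
n_cohort = 995       # all patients the abstract says data was available for
n_respondents = 365  # assumed respondents, as for the improvement scores
pct_improved = 0.85  # "85% felt an improvement" among respondents

n_improved = round(pct_improved * n_respondents)
share_of_cohort = n_improved / n_cohort
print(n_improved, round(share_of_cohort * 100))  # -> 310 31
```

Under that assumption, "85% improved" would describe roughly 310 patients, or only about 31% of the full cohort of 995.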
It's rather misleading to state in the abstract that "data was available for 995 patients" and then highlight figures that only apply to a minority of those 995 patients.