Nick Brown looks at study on sexual harassment and 'CFS'

Since this is a public forum I will comment only in general terms.

This is indeed a minor paper, so it's unlikely that the media would be very interested in it. (That's not in itself any sort of comment on how the media covers ME.)

One option is to write to the journal. That can either be to put pressure on the authors to share their data, or to point out issues that might require the article to be corrected. However, to have a good enough case for the latter you generally need the former, and when the journal has what one might call a "Sergeant Wilson" data sharing policy ("I say, chaps, would you mind awfully sharing your data"), writing to the editor is unlikely to make much difference. That said, in many cases even a mandatory data sharing policy doesn't mean you're going to get the data, as I'm sure people here don't need to be told.

Another option can be to write to the scientific integrity people at the authors' institution. This is a pretty big step and requires a lot of care. And as my colleagues and I found in the Wansink case (Google is your friend), the scientific integrity department can turn out to be a paper tiger, or indeed a paper shrew.

A note on PubPeer is almost always a good idea, though. I will go and do that. If people have the PubPeer alert plug-in installed in their browser, they will get a notice whenever an article that has a PubPeer entry appears in a web page.
 
Incidentally, it seems that the authors of de Venter et al. may have mentioned these data five years previously, in a letter to the British Journal of Psychiatry (https://www.ncbi.nlm.nih.gov/pubmed/22297595), responding to a published article that had shown an association between parental physical (but not sexual) abuse and CFS.

"Recently, our research group examined the impact of childhood trauma in a well-described tertiary sample of patients with CFS. In accordance with the previously mentioned population-based studies, childhood sexual harassment was the best predictor of psychological symptoms in CFS (unpublished data)."

There are many reasons why they may have waited until 2017 to publish the data (assuming they are indeed the same), so I won't attempt to read anything into that.
 
I also noticed that they did not include gender in their regression models. One would expect "sexual harassment" to be more commonly reported by women than by men (the authors themselves acknowledge this). If the women in the sample are, on average, more severely impaired than the men, then what you could be measuring here is just the relation between gender and symptom severity.
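
To make the confounding point concrete, here is a minimal simulation in Python (all numbers invented for illustration; this has nothing to do with the study's actual data): if sex drives both the probability of reporting harassment and symptom severity, a regression that omits sex will "find" an effect of harassment even when none exists.

```python
# Hypothetical simulation of confounding by sex. By construction,
# harassment has NO effect on severity; sex affects both variables.
import numpy as np

rng = np.random.default_rng(42)
n = 200

female = rng.binomial(1, 0.75, n)                    # mostly-female sample (assumed)
harassment = rng.binomial(1, 0.05 + 0.40 * female)   # reported far more often by women (assumed rates)
severity = 50 + 12 * female + rng.normal(0, 6, n)    # severity depends on sex only

# Naive model: severity ~ harassment (sex omitted)
X1 = np.column_stack([np.ones(n), harassment])
b1 = np.linalg.lstsq(X1, severity, rcond=None)[0]

# Adjusted model: severity ~ harassment + sex
X2 = np.column_stack([np.ones(n), harassment, female])
b2 = np.linalg.lstsq(X2, severity, rcond=None)[0]

print(f"harassment coefficient, sex omitted:  {b1[1]:+.2f}")  # spuriously positive
print(f"harassment coefficient, sex included: {b2[1]:+.2f}")  # close to zero
```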

Edit to add: in my reading, I've come across article after article that adopts this "kitchen sink" approach to analysing the role of psychosocial variables in health. You start with a really vague idea, like how childhood adversity of some kind might affect present health. Then you cobble together as many variables as you can that might index some kind of adversity. Then you measure them all.

It's a good strategy for getting stuff published, because it almost never fails: with so many opportunities to find at least one significant effect, something will probably come out significant by chance alone.

Honestly, people have no business even attempting a study like this until they've bothered to think it through. You need to start with some sort of theory as to how various kinds of negative experiences would affect adult physical health, and the mechanisms by which they operate. Then, based on that theory, you would pick just the one or two variables you consider most likely to have an impact and study those.

Even if they can't think this through, surely any researcher would realise that multiple analyses increase the chances of getting a spurious positive result, and that p values should be corrected accordingly?
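
The arithmetic is worth spelling out: with k independent tests at alpha = .05, the chance of at least one "significant" result from pure noise is 1 - 0.95^k, which for 25 tests is about 72%. A quick sketch (entirely simulated data, no connection to the paper):

```python
# Simulate the multiple-testing problem: correlate 25 pure-noise
# "adversity" variables with a pure-noise outcome, many times over,
# and count how often at least one test comes out "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, runs = 100, 25, 2000
hits_raw = hits_bonf = 0

for _ in range(runs):
    X = rng.normal(size=(n, k))   # 25 candidate predictors, all noise
    y = rng.normal(size=n)        # outcome unrelated to all of them
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(k)]
    hits_raw += min(pvals) < 0.05
    hits_bonf += min(pvals) < 0.05 / k   # Bonferroni: divide alpha by k

print(f"at least one 'hit', uncorrected: {hits_raw / runs:.2f}")   # ~0.72
print(f"at least one 'hit', Bonferroni:  {hits_bonf / runs:.2f}")  # ~0.05
```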
 
I had thought there were serious concerns about the accuracy of retrospective surveys on things like abuse. I think I've read that the data are just really unreliable (I suspect @Woolie could say more). I think it was something about a tendency for people who are struggling with illness or emotional issues to search their past for possible emotional trauma more than others do.
That too, @Adrian!

We know there are a lot of very general factors that influence people's reports of their past. These include:

Demographic factors: Your gender, your social class and the era you grew up in will all affect the likelihood of your experiencing different sorts of adversity. And we know that all these factors also affect the incidence of certain types of illnesses in adulthood.

Recall biases: Our present mental and physical health can have a huge effect on how we frame our past. Even those with a current physical illness of known origin tend to report more adversity than those who are currently healthy. Add to that the fact that people with contested illnesses have probably been actively searching their past for potential explanations for their ill-health, and you have another major source of bias.
 
This is aside from the statistical problems with the paper, but it may or may not be worth considering. They claim an association between childhood adversity and CFS, but how is this supposed to fit with the model? Is it supposed to belong to the predisposing, the precipitating or the perpetuating factors?

If it is the perpetuating factor, the precipitating cause having vanished, then the usual CBT and GET would presumably be irrelevant.

It can hardly be the precipitating cause, as it presumably starts long after the event.

Is it supposed to be the predisposing factor, in which case why does the illness resemble that of those who have no such factor?

Or is the whole model just a load of ........?
 
The model is a whole load of ................ but they are trying to answer the question of WHY some people get an infection but let themselves get deconditioned instead of getting back to normal like proper people. The theory is that there is some sort of inherent weakness which makes the "sick role" an answer to their prayers.
 
The model is a whole load of ................ but they are trying to answer the question of WHY some people get an infection but let themselves get deconditioned instead of getting back to normal like proper people. The theory is that there is some sort of inherent weakness which makes the "sick role" an answer to their prayers.

But surely, if they genuinely believed that, the treatment would be specific to the problem, and there would be a distinct form of CBT aimed at addressing the "fact" of the abuse or whatever, rather than the fact that we are really just a set of idle bastards who need to be taught how to exercise.
 
There would be if there were any logic to them :) It is just another link to Freudian psychology. Everything is a mess, as they just pick and choose whatever is most convenient at the time.

It may be that people further away from the inner circle believe what is being said and think they are putting "science" into the theories. Though the CDC under Reeves pushed the childhood trauma thing, so this could simply be different from the UK "experts'" view.
 
in my reading, I've come across article after article that adopts this "kitchen sink" approach to analysing the role of psychosocial variables in health. You start with a really vague idea, like how childhood adversity of some kind might affect present health. Then you cobble together as many variables as you can that might index some kind of adversity. Then you measure them all.
This is indeed a common problem. In this particular case, however, I don't think that "kitchen sink" is quite fair. They only have one source of independent variables (the TEC) and they state, up-front, that they are going to explore the relations between the various factors and the DVs. That has a lot of problems of its own (all of the TEC variables will likely be intercorrelated, some a lot), and indeed they perhaps should have thrown sex in there as a control (age and sex are almost always good covariates because they can be measured very reliably, so they don't tend to distort the regressions).
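
As a small illustration of why intercorrelated independent variables are a problem (simulated, hypothetical numbers): when two subscales correlate at around .9 because they index the same underlying factor, their individual regression coefficients become unstable and trade off against each other from sample to sample, even though the model's overall fit barely changes.

```python
# Two "subscales" that share a latent adversity factor (r ~ .9).
# Refitting the regression on bootstrap resamples shows the two
# coefficients trading off against each other.
import numpy as np

rng = np.random.default_rng(1)
n = 150
latent = rng.normal(size=n)               # shared underlying factor
x1 = latent + 0.3 * rng.normal(size=n)    # subscale 1
x2 = latent + 0.3 * rng.normal(size=n)    # subscale 2
y = latent + rng.normal(size=n)           # outcome driven by the shared factor

for _ in range(5):
    idx = rng.integers(0, n, n)           # bootstrap resample
    X = np.column_stack([np.ones(n), x1[idx], x2[idx]])
    b = np.linalg.lstsq(X, y[idx], rcond=None)[0]
    print(f"b1 = {b[1]:+.2f}, b2 = {b[2]:+.2f}")  # unstable, but roughly summing to ~1
```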

Even if they can't think this through, surely any researcher would realise that multiple analyses increase the chances of getting a spurious positive result, and that p values should be corrected accordingly?
You would think so, wouldn't you? And as a result, you might start to think that the authors must have been ill-intentioned to make such an elementary mistake. But then, the reviewers didn't see anything wrong either, and they were presumably neutral on the subject matter of the article (at least, I have no reason to think that they weren't). Lack of statistical understanding is very often a parsimonious explanation for bad articles, because *whispers* many (most?) psychologists are terrible at statistics, even though psychology is utterly dependent on statistics.

I should add, for completeness, that I am also terrible at statistics, but at least I know that. I ran the main arguments of my blog post past three other people, two of whom have written psychology or statistics textbooks, before I posted it. To me, doing my own statistical analyses on my own data (or someone else's data, if they haven't analysed it yet or at least told me what results they got) is a bit like the first time you get in a car after you pass your driving test, and there is nobody to tell you if you're doing it right. My colleague James Heathers and I (see JohnTheJack's link to the article about us in Science) discovered that a very substantial number of articles have statistical errors that can be detected with the naked eye, if you look carefully. Those errors are not always catastrophic for the conclusions of the study in question, but you would hope that science would be done with more attention to detail.
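
For anyone curious what a "naked eye" check can look like, here is a minimal sketch in the spirit of the GRIM test (presumably the kind of check meant here; the function itself is my own hypothetical illustration): a mean of n integer-valued responses can only be a multiple of 1/n, so some reported (mean, N) combinations are mathematically impossible.

```python
# GRIM-style consistency check: is a mean, reported to `decimals`
# places, attainable from n integer-valued responses? (For n < 100
# with two decimals, checking the nearest integer sum is sufficient.)
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    total = round(reported_mean * n)           # nearest attainable integer sum
    attainable = round(total / n, decimals)    # the mean that sum would produce
    return attainable == round(reported_mean, decimals)

# A mean of 5.19 from 28 integer responses is impossible: no integer
# divided by 28 rounds to 5.19 (145/28 = 5.18, 146/28 = 5.21).
print(grim_consistent(5.19, 28))  # False -> inconsistent with integer data
print(grim_consistent(5.18, 28))  # True  (145/28 = 5.1786 -> 5.18)
```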
 
This is indeed a common problem. In this particular case, however, I don't think that "kitchen sink" is quite fair. They only have one source of independent variables (the TEC) and they state, up-front, that they are going to explore the relations between the various factors and the DVs. That has a lot of problems of its own (all of the TEC variables will likely be intercorrelated, some a lot), and indeed they perhaps should have thrown sex in there as a control (age and sex are almost always good covariates because they can be measured very reliably, so they don't tend to distort the regressions).
I think you're too kind, @sTeamTraen! It is implicit even in an "exploratory" study that some thought went into choosing the independent variables. Otherwise why not just see if hair colour or the first letter of one's surname is important? The answer is obvious: we have no theoretical (or empirical) basis for believing they might be. So implicit in the study is an idea about childhood adversity and how it might negatively affect adult health. It's just not properly thought through.

Also, passing off fishing expeditions as "exploratory" research is a bit of a get-out-of-jail-free card. Poor choice of research question is an enormous contributor to the problems in psychology, and I think those problems would be greatly lessened if we only tested hypotheses that had been properly thought through. Then there'd be no more wobbly chair studies, or studies claiming sad people attend more to the colour blue, or any of that other tosh we've seen in the last few years. So to me, getting the statistics wrong is just part of the problem. The other part is getting the question wrong.

Actually I think it's fine to label research as exploratory, which the authors more or less did here by saying that they set out to see what would turn up in their regressions. (Many other researchers would have written the introduction around their "specific hypothesis" that sexual harassment, rather than sexual abuse, would drive the effect; this is a process called "HARKing", standing for Hypothesising After the Results are Known, and it's very common in fields like psychology.) But the corollary of being up-front about the exploratory nature of your research is that you don't get to use p values to claim that you have found something (even if what you have found isn't almost certainly a statistical artefact). Sadly, a large number of researchers --- perhaps the majority --- don't know that. A lack of understanding about the meaning of the most basic principles of statistical inference is widespread in social-science and biomedical research, and explains a lot of the failures of these fields to make progress.

There are lots of terrible studies out there that illustrate many of the points that have been made above (kitchen sink regressions, pretending you hypothesised something when in fact you tested 100 possible combinations, etc). I think this is actually a slightly different case, but --- pending confirmation with the data --- it still looks terrible. In fact it turns out that there is an inexhaustible supply of ways for studies to be terrible, which is why science in general (and psychology in particular) is hard to do well.
 
The authors have issued a Corrigendum. It reads:
We admit that the reported effect sizes are small considering the means and standard deviations of the dependent variables, and could therefore be mistaken for standardized coefficients, which they are not.

we admit that, in retrospect, it would have been preferable to explicitly mention that the β coefficients reported are in fact unstandardized coefficients

Regrettably, we used some words in the article that imply the use of a predictive model.

I don't know if this is sufficient, as the authors do not acknowledge that their message regarding childhood sexual harassment and ME/CFS is misleading. And, frustratingly, they still claim strong evidence for clinical effects of childhood trauma in CFS...
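
For readers unsure what is at stake in that first admission: an unstandardized coefficient is expressed in the raw units of the variables, whereas a standardized coefficient (conventionally written β) is expressed in standard deviations, so printing unstandardized values under the symbol β invites readers to mistake a small effect for a substantial one. A quick sketch with invented numbers:

```python
# Unstandardized vs standardized regression coefficients (made-up data).
import numpy as np

rng = np.random.default_rng(7)
n = 120
x = rng.binomial(1, 0.3, n).astype(float)    # binary predictor, e.g. a reported trauma
y = 60 + 4.0 * x + rng.normal(0, 15, n)      # outcome on a 0-100 scale

X = np.column_stack([np.ones(n), x])
b_unstd = np.linalg.lstsq(X, y, rcond=None)[0][1]

xz = (x - x.mean()) / x.std()                # standardize both variables
yz = (y - y.mean()) / y.std()
Xz = np.column_stack([np.ones(n), xz])
b_std = np.linalg.lstsq(Xz, yz, rcond=None)[0][1]

print(f"unstandardized b:  {b_unstd:.2f} scale points per unit of x")
print(f"standardized beta: {b_std:.2f} SDs of y per SD of x")  # a much smaller number
```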
 
To set things straight, when we use the words predictive, prediction, predict, predictor in our paper, we actually mean that there is an influence or an effect of the independent variable (in this case sexual harassment) on the outcome measures (fatigue and physical functioning).
I don't think this sentence is very helpful. The issue is their suggestion of causality, not the use of the word "predict" in itself. IMO the new language ("influence", "effect") also implies causality. Better would be to state that there is an association between the variables, which --- at least formally --- does not imply a direction of causation (or indeed any causation at all).
 
Better would be to state that there is an association between the variables, which --- at least formally --- does not imply a direction of causation (or indeed any causation at all).
Yes, I would prefer that they admit their paper was misleading in suggesting a causative role for sexual harassment in ME/CFS, and that their data do not support such an interpretation.

Now they seem to suggest that they've simply overstated things by using the wrong words…
 