Comparing ME/CFS following mononucleosis with Long COVID, 2026, Jason et al

Gosh, I have started reading the paper and looking at the data.

This line of data stands out as a demonstration of the problems with the definition of moderate and severe ME/CFS:

SF-36 Physical functioning, mean (standard deviation)

Severe ME/CFS 77.7 (18.9)
Moderate ME/CFS 90.4 (11.1)
Long Covid 73.2 (21.5)
Mono controls 98.6 (3.9)
LC controls 93.6 (11.6)

Compare those figures with the PACE trial, where means were around 40 to 60 across the trial, and participants were all able to attend clinics, so would be classed on most severity scales as mild to moderate. To enter the PACE trial you had to score 65 or less.

What on earth is going on here?
Oh it's great to have those numbers, thanks for sharing them @Trish. I felt hamstrung reading the other papers with SF36 PF scores.

Those SF36 PF scores are a lot more like those I would expect 6 months post-infection: the ME/CFS group meeting more than one set of criteria and the long COVID group would mostly have mild ME/CFS, with those one SD below the mean dipping into moderate.

The SF36 PF of the group fulfilling only one set of criteria is so close to that of the LC controls that @Simon M's concern that that group contained people who were not ill looks very well-founded indeed.

Here's what van Campen et al. 2020 found were the averages by severity in a cohort sick for an average 12 years (see last row, PAS=physical functioning scale):
Using the clinician-assigned ICC severity category, 121 (42%) were scored as having mild disease, 98 patients (34%) were scored as having moderate disease and 70 patients (24%) were scored as having severe disease.
[Attached table: SF-36 physical functioning means by ICC severity category]
 
Thank you for taking the time to respond. I appreciate the level of detail and the points you’re raising.

I also want to acknowledge that my earlier reply came across as defensive. That wasn’t my intention, and I can see how some of my wording didn’t engage directly with the concerns being raised here. I realize I'm suddenly a proxy for your anger and frustration with researchers in general and our team in particular. And I knew that when I stepped into the conversation, but it still stings a bit. I'm still very challenged with ME/CFS, MCAS, MCS, etc. and don't have a lot of energy to engage in sustained discussions, but I am glad to represent both sides as a mediator of sorts. I think I have a unique opportunity as both a patient and epidemiologist.

On the question of severity definitions, you’re right that the way we evaluate “severe” using multiple case definitions doesn’t map cleanly onto how patients typically describe severity (e.g., housebound/bedbound), and I understand why that’s a sticking point. That’s an area where there’s still a lot of debate, and it’s helpful to hear how it’s being interpreted from your perspective.

More broadly, I hear the concerns about questionnaires as well—particularly around PEM, fluctuation, and the limitations of trying to capture complex lived experience in a structured tool. I don’t think anyone on our team would claim these measures are perfect, and the kinds of issues you’re raising are exactly the ones that need to be grappled with.

If you’re open to it, I would genuinely value hearing more of your specific concerns—especially around:
1) how severity would be better defined in research
2) what aspects of PEM are most often missed or mischaracterized
3) where you think current tools or approaches are leading to misleading conclusions

I’m part of the team, but I’m also here to listen and learn from discussions like this. These kinds of detailed critiques are important, and I do take them back into our internal conversations. Since I am also serving as a patient advisor, I want to give input on how the publications are being received by the ME/CFS community and how we can better serve that group. To clarify, this isn’t about defending the work, but about understanding where it’s not aligning with patient experience or expectations.

If you would state your comments and questions in a respectful way I will pass them on. If you would like to be contacted to perhaps participate in a focus group, please send me your information and brief bio (innova.advocacy@gmail.com).

Thanks again for engaging so thoughtfully.
 
On the question of severity definitions, you’re right that the way we evaluate “severe” using multiple case definitions doesn’t map cleanly onto how patients typically describe severity (e.g., housebound/bedbound), and I understand why that’s a sticking point.
I didn't even know this was a thing. Is there a brief summary of how this works?
 
Wow, well you lost me there. 'Don't criticise us because you don't know everything we do!' Whose fault is it if we mere patients don't have all the facts, especially when publications from your group are behind a paywall?

Your group is responsible for establishing a distorted concept of PEM in the literature which in many ways is unrecognisable to patients; perhaps your group should take some responsibility for creating the frustration that many of us feel at that.
Hi Andy,

Thank you because this gives me a chance to explain what I mean by not having all the facts (I was not trying to be condescending). Because journals don't have advertising to cover costs, many of the high-impact journals charge a fee to make the article free to the public.

For SAGE journals (the publisher of this study), making an individual article open access usually involves an article processing charge that can range from roughly $1,500 up to around $5,000 depending on the journal. That cost has to be covered by the study funding or institutional support, which isn’t always available, especially for smaller projects. Much of the work the team does is not funded and the decisions where to seek publication are complicated, but our preference would always be to have open access.
 
I didn't even know this was a thing. Is there a brief summary of how this works?
In some of these studies, “severity” isn’t defined by functional status (like housebound or bedbound), but by how many case definitions a person meets at the same time. The idea is that people who meet multiple definitions tend to report a broader range and higher frequency/severity of symptoms, so they’re grouped as “more severe” within that framework.

But as you pointed out, that doesn’t necessarily line up with how patients experience severity in real life, where functional ability is usually the key distinction. So it’s a bit of a proxy measure rather than a direct one, and that mismatch is part of why it can be confusing or controversial.
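As a rough sketch, the grouping logic described above amounts to counting how many case definitions a participant satisfies. The definition names, the cut-off, and the function below are purely illustrative and are not the study's actual coding, which operationalises each definition via questionnaire items:

```python
# Hypothetical sketch of "severity by number of case definitions met".
# The criteria names and the threshold of 2 are invented for illustration.

def severity_group(meets: dict) -> str:
    """meets maps case-definition name -> True/False for one participant."""
    n = sum(meets.values())  # count of definitions this participant satisfies
    if n >= 2:
        # Meeting multiple definitions is taken as a proxy for a broader,
        # more frequent/severe symptom profile.
        return "severe"
    elif n == 1:
        return "moderate"
    return "not classified"

# Example participant who satisfies two of three (hypothetical) definitions:
participant = {"Fukuda": True, "CCC": True, "ICC": False}
print(severity_group(participant))  # -> severe, under this proxy scheme
```

Note that nothing in this scheme looks at functional status, which is exactly the mismatch with the housebound/bedbound understanding of severity.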

We do need to find a better way to classify the categories, but without a consensus among researchers on case definitions, it is very difficult. And in the research circles there are so many strong opinions I'm not sure a global consensus is ever going to happen. This is why patient engagement is so important. We are the experts on our condition and on how we experience it in real life, not on questionnaires.
 
We do need to find a better way to classify the categories, but without a consensus among researchers on case definitions, it is very difficult.
The first step would be to stop using the very flawed one.
And in the research circles there are so many strong opinions I'm not sure a global consensus is ever going to happen.
Again, consensus isn’t the issue. There’s consensus around lots of terrible ideas, and not around lots of good ones.

Pick one that has a reasonable rationale - like FUNCAP - and use that until there are better alternatives. These things are never going to be exact or perfect anyways.
This is why patient engagement is so important. We are the experts on our condition and on how we experience it in real life, not on questionnaires.
Yet your team keep basing their research on flawed questionnaires.
 
In some of these studies, “severity” isn’t defined by functional status (like housebound or bedbound), but by how many case definitions a person meets at the same time. The idea is that people who meet multiple definitions tend to report a broader range and higher frequency/severity of symptoms, so they’re grouped as “more severe” within that framework.
Ok. That seems like a big assumption to me.
 
Ok. That seems like a big assumption to me.
It's definitely an imperfect system. I personally think much of the politics involved in developing a consensus criteria has to do with governments not wanting to acknowledge the severity many people experience because it will result in more disability payments and more healthcare expenditures. I saw this when working with a NICE committee in the UK for a rare disease. There are some very vocal, influential "advisors" for those committees who tell the lawmakers what they want to hear. I don't know that they get paid under the table for prostituting themselves that way, but they are definitely not representing the patients, only their personal interests.
 
The first step would be to stop using the very flawed one.

Again, consensus isn’t the issue. There’s consensus around lots of terrible ideas, and not around lots of good ones.

Pick one that has a reasonable rationale - like FUNCAP - and use that until there are better alternatives. These things are never going to be exact or perfect anyways.

Yet your team keep basing their research on flawed questionnaires.
I think some of the current studies using wearables for objective measurements are going to change some of the long-standing myths about PEM and fatigue in ME/CFS. The medical world wants biomarkers (which my team is also working on) and this is a step in that direction until we have the validated biological markers.

Questionnaires are definitely an imperfect way of trying to capture something as complex and variable as PEM, and I understand the worry about false positives or people being grouped together who may not have the same underlying condition.

At the same time, in large-scale studies they’re often one of the few practical ways to collect standardized data across groups—but that doesn’t resolve the validity issues you’re raising, and it does mean the results need to be interpreted with a lot of caution.

The point about downstream use is also really important. Even if a tool is used carefully in a research context, that nuance can get lost when findings are applied more broadly. I am more of a qualitative research analyst and prefer smaller groups with interviews as the basis, but both are necessary to make a dent in the misguided thinking about ME/CFS.

If you were designing this kind of study, what would you see as a better way to identify or characterize PEM in a research setting?
 
It's striking that "criticism from the people we are trying to help" is paired with the word "sadly".

This is the most valuable criticism any scientist could hope to receive. It worries me when it's not recognised as such.
Hi Kitty,

My bad. I sometimes have a hard time separating the human emotional response from the detached scientist. I know my team values the criticism because I have heard them discuss it many times and not in a defensive way, but in a "how can we do better" way. The sadness is because I know these people's unwavering commitment and passion to advocate for people with ME/CFS, and hostility toward the individuals and the team (not the constructive criticism) is counterproductive. They are always glad to explain their positions and approaches in civil discourse.
 
I'm new to the forum and don't know much about ME/CFS, but that feels like an extension of the 'pwME are troublesome, hysterical people' trope.
I'm sorry that's how you interpreted the summary. If you get a chance, you might want to do a search on YouTube for Leonard Jason and see some of the interviews and webinars he has done with different advocacy groups. He has spent much of his career trying to prove to colleagues that ME/CFS is not a psychological disorder. That's why I chose to be a part of his team...after all, I'm one of you!
 
I appreciate you coming back and what you’ve written.

I myself am not a scientist, but I can name off the top of my head two other researchers in the past two years who complained in one way or another about the attitude from patients and implied they won’t keep helping, or that we are “known” for this behaviour, and it does touch a nerve.

My non-scientist comment is just that there is already a definition of severe ME. There are numerous versions of the ME severity scale, used in the UK NHS but also throughout the world; an NIH paper also uses it.

Therefore if you specify a category I’d suggest giving it a different name than “severe” because that already has a meaning, and you are not following the conventional understanding. Perhaps call it “broad” if it’s a lot of symptoms, because most people already understand that severe=bedbound.
 
Therefore if you specify a category I’d suggest giving it a different name than “severe” because that already has a meaning, and you are not following the conventional understanding. Perhaps call it “broad” if it’s a lot of symptoms, because most people already understand that severe=bedbound.
Thank you. I am going to compile the comments for Dr. Jason and the team to review and see if there's anything we can influence to make the descriptions and names more accurate and representative. It takes months to set up a formal study with institutional review board approval but just getting the feedback helps guide future projects. There have been some studies that helped inform the revisions to the DSQ questionnaires and while you can never please everyone, we can definitely use the input to continue to improve our representation.
 
I think some of the current studies using wearables for objective measurements are going to change some of the long-standing myths about PEM and fatigue in ME/CFS. The medical world wants biomarkers (which my team is also working on) and this is a step in that direction until we have the validated biological markers.
The medical world doesn’t need biomarkers, whatever that means. They need relevant knowledge about pathology that can guide treatments, research and management.

But none of that answers why your team keeps using the bad questionnaires.
Questionnaires are definitely an imperfect way of trying to capture something as complex and variable as PEM, and I understand the worry about false positives or people being grouped together who may not have the same underlying condition.

At the same time, in large-scale studies they’re often one of the few practical ways to collect standardized data across groups—but that doesn’t resolve the validity issues you’re raising, and it does mean the results need to be interpreted with a lot of caution.
I’m glad you agree on the false positives issues and validity issues, but I get the impression that you believe that bad data is better than no data. My opinion is very much the opposite: bad data is far worse than no data.
The point about downstream use is also really important. Even if a tool is used carefully in a research context, that nuance can get lost when findings are applied more broadly. I am more of a qualitative research analyst and prefer smaller groups with interviews as the basis, but both are necessary to make a dent in the misguided thinking about ME/CFS.

If you were designing this kind of study, what would you see as a better way to identify or characterize PEM in a research setting?
Do proper screening interviews with someone that has sufficient experience assessing PEM in a clinical setting, and use FUNCAP to assess their approximate level of functioning.
The sadness is because I know these people's unwavering commitment and passion to advocate for people with ME/CFS, and hostility toward the individuals and the team (not the constructive criticism) is counterproductive.
Where’s the hostility?

The team responded quite poorly in a thread about the new DSQ PEM questionnaire, and then moved to individual messages with some members, which made everything a lot less transparent and duplicated the effort required by very sick people.
They are always glad to explain their positions and approaches in civil discourse.
I suggest avoiding implying that this conversation is uncivil without clear references. If you think anyone is going too far, there’s a contact moderators button at the bottom of each post.
 
I'm sorry that's how you interpreted the summary. If you get a chance, you might want to do a search on YouTube for Leonard Jason and see some of the interviews and webinars he has done with different advocacy groups. He has spent much of his career trying to prove to colleagues that ME/CFS is not a psychological disorder. That's why I chose to be a part of his team...after all, I'm one of you!
Thanks for the reply. My response was not to the summary, but rather to the comment: "Sadly, we also get a lot of criticism from the people we are trying to help..." I'm glad people are working to understand ME/CFS more deeply. Hopefully there will be a convergence in definitions and research approaches as time goes on.
 
The medical world doesn’t need biomarkers, whatever that means. They need relevant knowledge about pathology that can guide treatments, research and management.

But none of that answers why your team keeps using the bad questionnaires.

I’m glad you agree on the false positives issues and validity issues, but I get the impression that you believe that bad data is better than no data. My opinion is very much the opposite: bad data is far worse than no data.

Do proper screening interviews with someone that has sufficient experience assessing PEM in a clinical setting, and use FUNCAP to assess their approximate level of functioning.

Where’s the hostility?

The team responded quite poorly in a thread about the new DSQ PEM questionnaire, and then moved to individual messages with some members, which made everything a lot less transparent and duplicated the effort required by very sick people.

I suggest avoiding implying that this conversation is uncivil without clear references. If you think anyone is going too far, there’s a contact moderators button at the bottom of each post.
I am speaking in generalities, and I'm sorry you are interpreting that as directed at you personally. It's not. Probably the reason I'm one of the few on the team following social media is that the environment can be very hostile, and we don't need to expend our limited energy fielding personal attacks. I'm in my 70s and so is Dr. Jason. We're old school: manners and etiquette :-)
 
I am speaking in generalities, and I'm sorry you are interpreting that as directed at you personally. It's not. Probably the reason I'm one of the few on the team following social media is that the environment can be very hostile, and we don't need to expend our limited energy fielding personal attacks. I'm in my 70s and so is Dr. Jason. We're old school: manners and etiquette :-)
I didn’t perceive anything as directed at me personally. I would still suggest avoiding general statements without explicitly saying they are about social media at large, and not the specific forum or thread you’re currently on. I share your frustration with social media, but it probably isn’t very relevant for the topics here either.
 