It's similar to CLES, I believe. While CLES is the number of wins (group A > group B) divided by the total number of possible comparisons, Cliff's d seems to be the wins minus losses, divided by the total number of possible comparisons. I get the same values as you using this formula.
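As a quick sanity check of that relationship, here's a minimal sketch (the function name and example values are my own, not from the actual dataset):

```python
import numpy as np

def cles_and_cliffs_d(group_a, group_b):
    # Pairwise differences between every member of group_a and every member of group_b
    diff = np.asarray(group_a, dtype=float)[:, None] - np.asarray(group_b, dtype=float)[None, :]
    total = diff.size
    wins = np.sum(diff > 0)
    losses = np.sum(diff < 0)
    cles = wins / total                  # P(A > B); ties count as neither win nor loss
    cliffs_d = (wins - losses) / total   # CLES minus P(A < B)
    return cles, cliffs_d

cles, d = cles_and_cliffs_d([3, 4, 5], [1, 2, 3])
```

With no ties, cliffs_d reduces to 2*CLES - 1, which is why the two measures track each other so closely.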
I think that...
I found one that I find quite intuitive: the Common Language Effect Size (CLES). If you were to randomly take a participant from the ME/CFS group and a random participant from the HC group, how often would the ME/CFS patient have the lower value?
If this were random noise and there were no equal values...
Small differences like:
Cohen_d_difference: I got: -0.129, you got: 0.12646
P_Welch_Difference: I got: 0.424, you got: 0.432
etc.
Did you exclude those 10 from AT? Is that necessary? I'm not sure that not hitting peak affects their AT values.
No sorry, typo, the second overview I posted was for...
Apologies for the wrong p-values, not sure what went wrong there.
Here's what I got for values at AT and with PI-026 excluded. The first row looks very similar to your results for Work at AT (although for some reason, some figures are a bit different).
One thing that strikes me is that...
Yes I think you make a good point. When expressed as a percentage, there are 4 ME/CFS patients (PI-029, PI-087, PI-114, and PI-166) that have extreme values:
These make the distribution of percentage changes quite skewed:
With these 4 included, I found a cohen_d of 0.008 and p_value of 0.93...
I think we largely agree. The data does not suggest that there is no difference at all. It's not 100% random noise.
I would summarise it as: the data suggests that there might be an effect. But the effect is quite small with poor separation between groups and no correlation with the Bell...
True, my graph looked weird because I checked precision below the threshold when it was negative but above the threshold when it was positive. If I don't do this, I get the convergence to the ratio of ME/CFS patients in the total sample that you mentioned:
I tried to look at the precision level at different thresholds for the VO2 percentage change. I had it switch around the 0 point: for negative thresholds I counted values below the threshold, and for positive thresholds values above it.
Here's the code I used
df = df_max
values = np.arange(-40...
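Since the snippet above is truncated, here's a hedged sketch of what such a threshold sweep could look like; the column names and the stand-in data are hypothetical, not the actual dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in data; the real analysis uses df_max and its own columns
df = pd.DataFrame({
    'VO2_change': [-15.0, -12.0, -8.0, -3.0, 0.0, 2.0, -10.0, -1.0, 4.0, 6.0],
    'group': ['ME/CFS'] * 6 + ['HC'] * 4,
})

values = np.arange(-40, 40, 1.0)
precision = []
for v in values:
    # Switch around 0: count values below the threshold when it is negative,
    # above it when it is positive (or zero)
    mask = df['VO2_change'] <= v if v < 0 else df['VO2_change'] >= v
    selected = df[mask]
    precision.append((selected['group'] == 'ME/CFS').mean() if len(selected) else np.nan)
```

Plotting precision against values should then reproduce the switching behaviour described above.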
Thanks @forestglip. Your coding skills are impressive and very useful.
I think your analysis confirms that there might be a small effect. On the other hand, your approach is also iteratively looking for the best threshold, trying different things and looking for a value with the clearest...
Yes I think we mixed up the terms.
You were right about the sensitivity and specificity values, but your description (how many of those with values lower than -9.7% would be ME/CFS patients) refers to precision rather than specificity.
Here's what I got for a threshold of -9.7, for example:
Total...
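To make the distinction concrete, here's a small sketch with entirely hypothetical counts (tp = ME/CFS patients below the threshold, fp = healthy controls below it):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        'sensitivity': tp / (tp + fn),  # ME/CFS patients correctly flagged below the threshold
        'specificity': tn / (tn + fp),  # healthy controls correctly left above the threshold
        'precision': tp / (tp + fp),    # share of flagged participants who are ME/CFS
    }

# All counts are made up, purely to show the three definitions
m = diagnostic_metrics(tp=25, fp=7, fn=10, tn=20)
```

So precision only looks at who ends up below the threshold, while specificity only looks at the healthy controls.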
Thanks again for the impressive analysis @forestglip.
I do not get the same results though. For example if I use the threshold of -9.7% for max VO2 values, I get 25 ME/CFS and 7 healthy controls so that ME/CFS patients make up 78% of the sample rather than 90%.
Because of the large overlap...
Anyway, this is a bit beside the point.
I plan to write a blog post about this, because what the data show is quite different from what the paper reports and focuses on.
- I think the data show that there is no significant effect for any of the outcomes, whether you look at AT or max...
I've now recalculated with the correct comparison of ME/CFS patients but it is still the same large difference:
This calculation first takes the means, then expresses the change in means as a percentage
(day2_MECFS.mean() - day1_MECFS.mean()) / day1_MECFS.mean() * 100
Result: 9.4%
This one...
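As a toy illustration (made-up numbers) of why the two averaging orders can disagree:

```python
import numpy as np

day1 = np.array([10.0, 20.0, 30.0])  # made-up day-1 values
day2 = np.array([12.0, 18.0, 33.0])  # made-up day-2 values

# Option 1: take the means first, then express the change as a percentage
pct_of_means = (day2.mean() - day1.mean()) / day1.mean() * 100

# Option 2: compute each participant's percentage change, then average those
mean_of_pcts = ((day2 - day1) / day1 * 100).mean()
```

Option 2 weights every participant equally regardless of their baseline, so participants with small day-1 values (and hence extreme percentage changes) can pull the average around far more than in option 1.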
I think that VO2 is VO2 divided by the weight of the participant (so ml kg−1 min−1) while VO2_t is just the VO2 (ml/min); probably an error in the codebook.
There's still something about these two values that doesn't add up, because they should result in exactly the same effect sizes, but they often...
Thanks, got the same values as you.
EDIT: this is an error see: https://www.s4me.info/threads/cardiopulmonary-and-metabolic-responses-during-a-2-day-cpet-in-me-cfs-translating-reduced-oxygen-consumption-keller-et-al-2024.39219/page-4#post-552976
I noticed that these average percentage changes...
Yes that was the difference. I forgot to include him because he had no valid data for HR.
Yes I got the same result but made an error in writing it down in my table/overview.
The different p-values might be due to me using:
t_value, p_value = stats.ttest_ind(difference_MECFS, difference_HC...
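If that's the cause, it may come down to SciPy's `equal_var` default: `stats.ttest_ind` performs Student's t-test (equal variances assumed) unless `equal_var=False` is passed, which gives Welch's t-test. A minimal sketch with made-up data:

```python
from scipy import stats

# Made-up group differences, purely illustrative
difference_MECFS = [1.0, 2.0, 3.0, 4.0, 10.0]
difference_HC = [2.0, 3.0, 4.0]

# SciPy's default is Student's t-test, which assumes equal variances
t_student, p_student = stats.ttest_ind(difference_MECFS, difference_HC)
# Welch's t-test drops that assumption
t_welch, p_welch = stats.ttest_ind(difference_MECFS, difference_HC, equal_var=False)
```

With unequal group sizes and variances (as here), the two tests give noticeably different p-values, which could explain the small discrepancies.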
Had a look, but surprisingly it seems that only 10 participants did not meet the maximum effort criteria, which are described as follows:
Here's how I implemented this in my code (using Python) - hopefully somebody can check and try to replicate.
df_original['HR_predicted'] = df_original['HR'] /...
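For anyone wanting to replicate, here's a hedged sketch of how criteria like these could be checked in pandas; the column names, the conventional 220 - age predicted-max formula, and the cut-offs are illustrative assumptions rather than the exact published criteria:

```python
import pandas as pd

# Hypothetical stand-in data; column names are assumptions
df = pd.DataFrame({
    'age': [30, 50, 45],
    'HR': [185, 150, 170],
    'RER': [1.15, 1.05, 1.20],
})

# Age-predicted maximum HR using the common 220 - age formula (an assumption)
df['HR_predicted'] = 220 - df['age']
df['pct_HR_predicted'] = df['HR'] / df['HR_predicted'] * 100

# Example criteria: reached >= 85% of predicted max HR and RER >= 1.1
df['met_effort'] = (df['pct_HR_predicted'] >= 85) & (df['RER'] >= 1.1)
```

Participants failing either criterion could then be excluded with `df[df['met_effort']]` before rerunning the group comparisons.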
Yes good point. I still have to look at these criteria so the values I reported above used all the data and will probably be quite different once I restrict the analysis to those who met the required thresholds.