They have already started the study, so the protocol is very delayed.
To be clear - they looked at more than ME/CFS. But they did not include the Larun review!
We know from NICE that all of the studies up until that point were of very low or low quality. This study finds that the reviews of those studies...
They will be comparing exercise against exercise.
If exercise is effective for alleviating hyperactive mTORC1, and exercise makes people with PEM worse, wouldn’t that indicate that other...
It seems like they instructed healthy people to act sick while answering questions.
I have no way to assess the quality of this study, but this kind of work gets me more excited about DecodeME and SequenceME. And while it’s...
Can you elaborate on this? I have not heard of this assumption before.
Asthenia = general fatigue and weakness. Given that fatigue is one of the most common LC symptoms, it’s unexpected to see it reported in only ~20...
* p < 0.05; ** p < 0.002; *** p < 0.001 vs. cases (the p-values refer to the chi-squared test or Fisher’s exact test).
So no healthy controls?
The results are very unimpressive. Edit: they should probably have mentioned the poor performance in the abstract. They only mention it...
Balanced Accuracy is (Specificity + Sensitivity)/2
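A minimal sketch of that definition, using made-up confusion-matrix counts for illustration (not numbers from the study), showing why balanced accuracy punishes a classifier that just exploits class imbalance:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Balanced accuracy: the mean of sensitivity and specificity."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return (sensitivity + specificity) / 2

# Hypothetical imbalanced set: 90 positives, 10 negatives.
# A classifier that labels everything positive catches all positives
# (sensitivity 1.0) but no negatives (specificity 0.0).
print(balanced_accuracy(tp=90, fn=0, tn=0, fp=10))  # 0.5, despite 90% raw accuracy
```

So a balanced accuracy near 0.5 is chance-level regardless of how skewed the classes are.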
Being able to express reasoning doesn’t mean that it’s actually able to reason. Or that the reasoning it expresses was the reasoning it used....
Ah, I see we have a thread on it:...
So this is a review of the BPS reviews, and it found that they were all quite terrible?
Does it mean that the test said that 49% and 66% of the patients were simulators? If so, that’s a terrible false positive rate. It would still...
By ‘go and get data it doesn’t know about’, do you mean that it can have an agent that is programmed to e.g. search the web for info about topics...
Do you know what kind of data his model had access to? If it’s based on more or less the same data that you used, would it not be expected to...
If the AI is sub-symbolic, how can we verify its reasoning? How do we know that we don’t just train a model that’s good at saying what we want it...