
Dr Ron Davis Gives Updates on ME/CFS research - September 2019 onwards

Discussion in 'BioMedical ME/CFS News' started by John Mac, Sep 26, 2019.

  1. Londinium

    Londinium Senior Member (Voting Rights)

    Messages:
    254
    Likes Received:
    2,515
    Is that 70 patients after having selected IDO2 as the gene of interest though? If one is testing other genes as well in that population of 70 then one needs to do a correction for multiple comparisons.
     
  2. ukxmrv

    ukxmrv Senior Member (Voting Rights)

    Messages:
    536
    Likes Received:
    3,674
    I came across this report (not ME related) of a family where the children inherited a combination of genes which researchers argued was responsible for their symptoms.

    "It seems that the genetic mutation the children inherited from their mother acts as a modifier for the father’s mutations, says Srivastava. The NKX2-5 variant appears to have exacerbated the abnormal development caused by the father’s mutations, leading to a phenotype that is more severe in the children than in either of the parents.

    The phenomenon of a handful of genes together determining a phenotype is known as oligogenic inheritance, and it’s not a new theory for disease mechanisms. Traditionally, “we’ve be[en] able to understand human disease through [single gene] disorders that are relatively rare but easier to detect,” says Srivastava. Yet the fact that many genetic variants are not deterministic of disease “would suggest most disease is a combination of genes,” he tells The Scientist."

    https://www.the-scientist.com/noteb...-their-own-cause-disease-when-combined-66328?
     
    alktipping, merylg, Lisa108 and 6 others like this.
  3. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    6,047
    Likes Received:
    39,626
    One of the great things in the video is Ron Davis making the point that scientists strive to prove themselves wrong, and the more they strive to do this and fail, the more confidence they gain that they are probably right.

    Is this essentially the same as what the null hypothesis is about? Running a trial with the objective of proving the null hypothesis?

    He also observed that this has been the problem with some of the other research - scientists striving to prove themselves right rather than striving to prove themselves wrong. MS needs to take a look, perhaps more especially those who think he's so wonderful.
     
    Annamaria, ukxmrv, Graham and 6 others like this.
  4. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    4,693
    Likes Received:
    36,631
    Location:
    Canada
    Yup. I'm bummed out that the process has not yet yielded results. But I am very much encouraged that the process is of the highest level of quality.

    What a change from decades of mediocrity. Not all of it, of course - there have been very good researchers toiling at it in the past, and others presently - but the combination of high quality within a sustainable research effort is brand spanking new. High quality research in ME used to be done-and-gone because funding dried up. Here there's an actual strategy at work.

    The concepts of falsifiability and the null hypothesis are connected in that the point of both is to assume you are wrong and to work out what that would imply: what you would have to find if the hypothesis doesn't hold up. But a null hypothesis can be made without the assumption of being wrong, as in comparative trials. Instead, every psychosocial trial not only assumes their treatment works but claims during the trial that it is safe and effective, biasing everyone in the process, while making explicitly unfalsifiable claims and holding them as true and validated.

    So the missing ingredient of the psychosocial research is the assumption of being wrong: testing the null hypothesis that there should be no difference between the treatment arms. Instead they assume the treatment works, which is exactly how not to science. In PACE, they explicitly hypothesized that CBT and GET are superior, looking to find positive evidence rather than trying to prove themselves wrong.

    They also ignore null results, again exactly how not to science. And they are extremely biased - one could go on for a while trying to list all the things they knowingly screw up because they don't think it matters.
     
  5. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    6,047
    Likes Received:
    39,626
    Whilst reading your post @rvallee the thought occurred to me of what the PACE trial would have been like had it been conceived and run by real scientists. Hard on the heels of that thought was my next one, which is that PACE would never have got off the starting blocks, at least not in the form we know it.
     
  6. Guest 2176

    Guest 2176 Guest

    There is no hope. I'm applying for Dignitas
     
  7. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    2,122
    Likes Received:
    13,677
    Location:
    Australia
    That isn't the only one they initially looked at though. So you have to correct for multiple comparisons and the odds go way down.

    When 90% (or whatever) of the general population has the SNP, at best it is clear the SNP is not the most important part.

    I don't know why the gene testing is being emphasised when it is not at all impressive.
     
    Annamaria, Sarah94, Mij and 1 other person like this.
  8. Trish

    Trish Moderator Staff Member

    Messages:
    25,870
    Likes Received:
    126,837
    Location:
    UK
    I am very sorry to hear you are feeling so despairing @debored13. This illness can be so hard to cope with at times. I hope you can find some help and support to keep you going. None of us knows when our illness may take an upturn, or when we may find a way to regain some quality in our lives. I hope that can happen for you.

    I think there is hope that there will be significant progress in ME research over the next few years. Even if the definitive cause is not found, there are treatments being tried that may help at least some of us to regain a better quality of life.
     
  9. Sarah94

    Sarah94 Senior Member (Voting Rights)

    Messages:
    2,515
    Likes Received:
    7,944
    Location:
    UK
    What does "correct for multiple comparisons" mean?
     
    alktipping likes this.
  10. Londinium

    Londinium Senior Member (Voting Rights)

    Messages:
    254
    Likes Received:
    2,515
    In essence, the more things you test for, the higher the likelihood you'll find something that is just the result of random chance. Thus, when calculating a p-value (defined as: if there were no relationship, what is the chance that this result would occur?) we must adjust it for the number of separate factors we were measuring.

    For example, say the population has only either brown or blond hair, and there is a 50/50 split of each. I report that I observed a population of 8 ME/CFS patients and all had brown hair. I hypothesise that it is necessary, but not sufficient, for a person to have brown hair to get ME/CFS.

    If I had *only* looked at hair colour, then if there is actually no relationship between hair colour and ME/CFS, the chance of getting 8 same-colour-haired patients in a row is 0.5^7 = 0.78%. Thus it is highly unlikely for this to have occurred by chance. I therefore publish a paper claiming that hair colour is relevant to ME/CFS.

    Now, what if I hadn't just measured hair colour when collecting the data? What if I'd also checked sex, above/below average height, above/below average weight, etc., such that I'd collected 30 separate variables (each with a 50/50 split in the wider population)? Well, now the odds of my finding a relationship in any of these variables just by chance assuming there is no actual relationship is as follows:

    P(chance finding) = 1 - P(no chance finding)^(number of tests)
    = 1 - (1- 0.5^7)^30
    = 1 - (1-0.78%)^30
    = 1 - 0.992^30
    = 1 - 0.790
    = 21%

    So if I measure 30 variables instead of one, and one of those variables is consistent in all 8 patients, then the probability of that happening by chance is 21%, not 0.8%. Thus we always need to adjust a p-value to account for the 'multiple comparisons' we are making. (The maths of the adjustment is not the same as that above, which I've simplified to demonstrate the concept.) This is particularly important in genome studies, given the huge number of genes involved, and where a given allele may not have a 50/50 distribution in the wider population; e.g. you might find a gene of interest that is present in 90% of the population but that you think is over-represented in the patient group.


    (And this is a very real-world problem. There was a case of a Dutch (?) nurse prosecuted because the chance of the death rate in her patients being down to chance was something like "1 in 10,000". That sounds impressive, but if you have 30,000 nurses in your healthcare system and you don't allow for multiple comparisons - i.e. "what is the chance of this death rate afflicting any nurse, rather than this nurse?" - you could draw a dangerously incorrect inference.)
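    The back-of-envelope numbers above can be checked with a short simulation. A minimal sketch (the 8 patients, 50/50 traits, and 30 variables are taken from the example; the Bonferroni threshold is my addition, as the simplest standard correction):

```python
import random

n_patients = 8
n_variables = 30

# Chance that one 50/50 trait comes out the same in all 8 patients
# (the first patient's value is free, the other 7 must match it):
p_single = 0.5 ** (n_patients - 1)          # 0.0078125, i.e. ~0.78%

# Chance that at least one of the 30 independent traits does so:
p_any = 1 - (1 - p_single) ** n_variables   # ~0.21, i.e. ~21%

# The simplest standard correction (Bonferroni): divide the significance
# threshold by the number of tests performed.
bonferroni_threshold = 0.05 / n_variables

# Monte Carlo check of p_any: simulate many "studies", each testing
# 30 coin-flip traits in 8 patients, and count how often some trait
# happens to match across all 8.
random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    for _ in range(n_variables):
        flips = [random.random() < 0.5 for _ in range(n_patients)]
        if all(flips) or not any(flips):    # all 8 share the trait
            hits += 1
            break

print(round(p_single, 4), round(p_any, 2), round(hits / trials, 2))
```

    The Monte Carlo estimate should land close to the ~21% computed analytically above.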
     
    alktipping, Annamaria, feeb and 10 others like this.
  11. JES

    JES Senior Member (Voting Rights)

    Messages:
    171
    Likes Received:
    1,005
    The 90% number is not impressive alone, but it's quite impressive in the reverse sense: if it really turns out that literally every ME/CFS patient has this mutation (we are now up to 70-something patients), then this mutation would be a sort of prerequisite for developing ME/CFS.

    Somebody in another post used the car engine analogy. To expand on it further: let's say we have a small percentage of car engines seemingly randomly failing, and nobody has figured out why. But one day somebody discovers that only engines with a part from subcontractor A are among the failed ones. Subcontractor A happens to deliver parts to 90% of all engines, so most of them don't fail; but more importantly, all failed engines have subcontractor A's part, and none have subcontractor B's, which goes into the other 10% of engines. With this discovered, the manufacturer can begin to narrow down what in subcontractor A's parts, along with perhaps environmental factors like humidity and temperature, causes the engines to fail. In that scenario, the discovery could be massively important.

    In the engine example, it's easier to see how the part itself could still be the core issue, even if most of the 90% of engines that come equipped with that part will never fail.
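    The engine analogy can be put into toy numbers to show the "necessary but not sufficient" point. A minimal sketch (all counts are invented for illustration):

```python
# Assumed toy counts: 90% of 10,000 engines carry subcontractor A's part,
# a small fraction of those fail, and no part-B engine ever fails.
engines = 10_000
part_a_engines = int(engines * 0.90)   # 9000 engines with A's part
failed_with_a = 45                     # assumed: 0.5% of part-A engines fail
failed_with_b = 0                      # assumed: no part-B engine fails

# Looking backwards from the failures, the part looks decisive:
share_of_failures_with_a = failed_with_a / (failed_with_a + failed_with_b)
print(share_of_failures_with_a)        # 1.0 - every failed engine has part A

# Looking forwards from the part, it clearly isn't sufficient:
failure_rate_given_a = failed_with_a / part_a_engines
print(failure_rate_given_a)            # 0.005 - 99.5% of part-A engines are fine
```

    The same asymmetry applies to a mutation carried by 90% of the population: it can be present in every patient and still explain almost nothing on its own.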
     
    Last edited: Oct 6, 2019
    alktipping, Annamaria and Sisyphus like this.
  12. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    2,122
    Likes Received:
    13,677
    Location:
    Australia
    I bet it won't turn out that every patient has the mutation.
     
  13. Milo

    Milo Senior Member (Voting Rights)

    Messages:
    1,052
    Likes Received:
    7,844
    The question remains: does it matter and is it relevant?
    Same question with red blood cell deformabilities: does it matter and is it relevant?
    The same question can be asked for any finding so far.

    5 years from now, will it still be relevant?
    5 years ago people were really concerned about the De Merleir nagalase test, which was supposed to determine whether you would respond to GcMAF. Nowadays no one talks about GcMAF.
     
    alktipping likes this.
  14. JES

    JES Senior Member (Voting Rights)

    Messages:
    171
    Likes Received:
    1,005
    As far as I know, the initial observation was based on their 20 severely ill patients (link to presentation), so it seems that almost certainly we have at least around 50 patients that were tested after the hypothesis.
     
    Annamaria and Sarah94 like this.
  15. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    2,122
    Likes Received:
    13,677
    Location:
    Australia
    alktipping, Annamaria and Sisyphus like this.
  16. Londinium

    Londinium Senior Member (Voting Rights)

    Messages:
    254
    Likes Received:
    2,515
    Thanks, that would seem to give more statistical significance. We’ll have to await the data though.
     
    alktipping and Annamaria like this.
  17. Sisyphus

    Sisyphus Senior Member (Voting Rights)

    Messages:
    343
    Likes Received:
    1,223
    This is a dumb question, but what exactly is a null hypothesis? I understand it's an essential part of the scientific method, but being familiar with a term does not equal understanding it.
     
    alktipping likes this.
  18. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    4,693
    Likes Received:
    36,631
    Location:
    Canada
    Definitely not a dumb question. It's a simple concept but so profound it's hard to grasp. Not sure I understand it well enough to explain, but I'll try.

    It's assuming that the presence/addition of something will not change the outcome, and measuring deviations from that expectation. It gives a measure of confidence for how unlikely it is that the deviation is a fluke of random chance. It becomes less reliable in fields where you cannot completely eliminate the influence of all external factors, but it is still useful.

    So in a weight loss trial about the efficacy of eating with your non-dominant hand, the analysis will assume that there will be no difference between the group that ate with their dominant hand and the one that ate with the non-dominant one. If, statistically, you cannot tell the groups apart simply by looking at the results, the null hypothesis stands (strictly speaking, it is not rejected), and which hand you eat with is unlikely to affect your weight.

    It's a big change from the old pre-science ways of people trying to prove themselves right, which is easy to do for biased researchers. Also the current ways in psychomagic research, unfortunately.
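    The hand-switching trial above can be sketched as a simulation in which the null hypothesis is true by construction, analysed with a permutation test (the group sizes and weight-change distribution are assumptions for illustration):

```python
import random

random.seed(1)

# Simulated weight change (kg) for two groups drawn from the SAME
# distribution - i.e. the null hypothesis is true by construction.
dominant     = [random.gauss(0.0, 2.0) for _ in range(50)]
non_dominant = [random.gauss(0.0, 2.0) for _ in range(50)]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(dominant) - mean(non_dominant)

# Permutation test: how often does randomly relabelling the participants
# produce a difference at least as large as the one we observed?
pooled = dominant + non_dominant
count = 0
n_perm = 5000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = mean(pooled[:50]) - mean(pooled[50:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"observed difference: {observed:.2f} kg, p = {p_value:.2f}")
```

    Since the two groups really come from the same distribution, the p-value will usually be unremarkable, and we fail to reject the null hypothesis.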
     
    alktipping likes this.
  19. Londinium

    Londinium Senior Member (Voting Rights)

    Messages:
    254
    Likes Received:
    2,515
    Basically, it's the hypothesis that there is no link between the variables being measured. It's quite important in understanding p-values, which (IIRC) plenty of studies have found even researchers misunderstand.

    Say I find that the mean measurement of [Chemical A] is higher in the bloodstream of patients with [Condition X] compared to healthy controls. Comparing the distribution of [Chemical A] measurements in patients with the distribution in controls, I find the mean is higher, with a p-value of 0.01. What does that p-value tell us? People often get this wrong and assume it means there's a 1% chance that [Chemical A] isn't linked with [Condition X].

    The p-value actually tells us that if the null hypothesis were correct - i.e. if there is no linkage between [Chemical A] and [Condition X] - what the probability is that I would have got a difference between patients and controls at least this large.

    (Admittedly it's been quiiiiiite a long while since I did stats...)
     
    rvallee, alktipping and Barry like this.
  20. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    6,047
    Likes Received:
    39,626
    So would this rephrasing from your example also be correct?

    If there truly is no correlation between [Chemical A] and [Condition X], there would be a 1% probability that due to random chance alone, we would see a big enough difference between patients and controls to erroneously suggest there is a correlation. i.e. 1% chance of a false positive.
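    That reading can be sketched numerically: simulate many experiments in which the null hypothesis is true by construction (patients and controls drawn from the same [Chemical A] distribution) and count how often a "p < 0.01" difference appears anyway. The group size, spread, and number of trials are assumptions for illustration:

```python
import math
import random

random.seed(2)
n, sigma, trials = 40, 1.0, 2000
threshold = 2.576          # two-sided z cutoff corresponding to p = 0.01
false_positives = 0

for _ in range(trials):
    # Both groups come from the SAME distribution: the null is true here.
    patients = [random.gauss(0.0, sigma) for _ in range(n)]
    controls = [random.gauss(0.0, sigma) for _ in range(n)]
    diff = sum(patients) / n - sum(controls) / n
    # Standard error of a difference of two means with known sigma.
    z = diff / (sigma * math.sqrt(2.0 / n))
    if abs(z) >= threshold:  # would be reported as "p < 0.01"
        false_positives += 1

print(f"fraction of null experiments with p < 0.01: {false_positives / trials:.3f}")
```

    The fraction should come out near 0.01, matching the "1% chance of a false positive" reading.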
     
    alktipping likes this.
