Low-Dose Naltrexone restored TRPM3 ion channel function in Natural Killer cells from long COVID patients, 2025, Martini et al

Discussion in 'Long Covid research' started by forestglip, Apr 11, 2025.

  1. Kronos

    Kronos Established Member

    Messages:
    7
    Agree.

    To be honest, I don't know (not judging your statement in any way) whether that conclusion can be drawn directly from this test.
    The probability of the null hypothesis being true usually isn't discussed in frequentist statistics, since that's a Bayesian idea and things get "funky".

    This covers the same topic:
    https://stats.stackexchange.com/que...e-for-or-interpretation-of-very-high-p-values
     
    Peter Trewhitt likes this.
  2. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    2,068
    I don't know much about Bayesian statistics, and I don't see any new insights on that page (though I don't fully understand all the probability terminology). But one comment takes a similar view:
     
    Peter Trewhitt likes this.
  3. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    530
    Location:
    USA
    For what it's worth (as non-simulated further validation of your earlier analysis), when I do differential gene expression analyses where I'm running ~10K tests, a portion of those will always come up as p>0.99 (and that is with a test that does not assume normality). Spot checking my most recent analysis, it was around 180 out of 13000 comparisons, so roughly 1%.

    Which just speaks to your earlier point of equal likelihood of any p-value under the null hypothesis. But my understanding was that for [edit: any one specific] test, it will never tell you anything other than whether you can reject the null hypothesis. The logic of the test is not reciprocal in that way.

    I've also gotten a 0.999 p-value when I was just doing a single comparison and it seemed unlikely that any of the assumptions were violated. I think it is sometimes just a luck of the draw thing, [Edit: though >0.9999 being reported twice in the results seems to indicate it's not just luck of the draw unless we're all witnessing a once-in-a-lifetime event. I agree it's most likely an assumptions thing]
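    The ~1% figure above is easy to check by simulation: if you run many tests where the null hypothesis is true by construction, the p-values should be roughly uniform, so about 1% land above 0.99. A minimal sketch (using a large-sample two-sided z-test written from scratch, not anyone's actual pipeline):

    ```python
    import math
    import random

    random.seed(0)

    def two_sample_p(x, y):
        """Two-sided p-value for equal means (large-sample z approximation)."""
        nx, ny = len(x), len(y)
        mx, my = sum(x) / nx, sum(y) / ny
        vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
        vy = sum((v - my) ** 2 for v in y) / (ny - 1)
        z = (mx - my) / math.sqrt(vx / nx + vy / ny)
        # P(|Z| > |z|) for a standard normal Z, via the complementary error function
        return math.erfc(abs(z) / math.sqrt(2))

    # Both samples come from the same distribution, so the null is true in every test
    n_tests = 10_000
    high = sum(
        two_sample_p([random.gauss(0, 1) for _ in range(50)],
                     [random.gauss(0, 1) for _ in range(50)]) > 0.99
        for _ in range(n_tests)
    )
    print(f"{high} of {n_tests} null tests gave p > 0.99 (~{100 * high / n_tests:.1f}%)")
    ```

    Because p-values are uniform under the null, the same logic gives ~1% above 0.99, ~1% below 0.01, and so on for any interval of width 0.01.
    
    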
     
    Last edited: Apr 16, 2025
    Peter Trewhitt likes this.
  4. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    2,068
    That's my understanding as well.

    Yeah, definitely not impossible that they're the lucky 1 in 10,000. (Though technically even less of a chance than that since it's ">.9999" which could be any number between that and 1.)

    Out of curiosity I searched "p>.9999" and there are plenty of papers, though I suppose with millions of papers that have been written, that's to be expected.
     
    Peter Trewhitt likes this.
  5. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    530
    Location:
    USA
    Sorry, I think I added my edit right after you quoted me. I had just realized that they reported >0.9999 twice, which makes it extremely unlikely to be a luck-of-the-draw thing.
     
    Last edited: Apr 16, 2025
    Peter Trewhitt likes this.
  6. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    2,068
    I was thinking about that. I agree that makes it even less likely, but there is still the possibility that the results of the two tests are extremely correlated with each other, in which case the p-values would also be similar. I don't know anything about these tests, though. But I'm guessing the correlation would have to be very, very high for this to work out, and it would probably make more sense to look for other explanations.
     
    Peter Trewhitt and jnmaciuch like this.
  7. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,445
    Location:
    Norway
    1/10,000 * 1/10,000 = 1/100,000,000

    They ran more experiments than just two, so the probability of seeing two p-values above 0.9999 somewhere among them is higher than 1/100,000,000.
     
    Peter Trewhitt likes this.
  8. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    2,068
    I've been going down a p value rabbit hole the past couple days because it annoys me when something that seems like it should be intuitive isn't. This page explaining p values is excellent if you're interested.

    But anyway, specifically regarding your quote, which earlier I agreed with, here's a relevant quote from that page:
     
    Peter Trewhitt likes this.
  9. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    530
    Location:
    USA
    Thanks for the link! That’s interesting, I suppose that makes sense now that I think about it. My intuition still makes me cautious about whether it’s valid to make inferences about anything other than rejecting the null hypothesis. I’ll have to sit with that a bit more
     
    Peter Trewhitt likes this.
  10. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    2,068
    Oh yeah, I don't take it as much more than an interesting fact that if it is p=.99 the null hypothesis is at least slightly more likely to be correct than if p=.50. I think you'd probably have to do much more math to quantify if that's to a degree that's useful for any given test.

    Edit: Though I'm not totally sure what I said is true. I didn't dig much deeper into high p values, just thought the quoted part might be interesting.
     
    Last edited: Apr 18, 2025
    Peter Trewhitt and jnmaciuch like this.
  11. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    530
    Location:
    USA
    Side note @forestglip since you mentioned you were unfamiliar with Bayesian statistics:

    our thought process about the two >0.9999 p values is exactly the intuition behind Bayes’ theorem.

    Given that we’re seeing two >0.9999 p-values in a research paper (data), and knowing how likely it is to get p > 0.9999 to begin with (prior probability), is it more likely that what we’re seeing is a result of 1) a random happenstance or 2) an error in the statistical analysis (posterior probability)?

    Apologies if I’m explaining something you already know, I just thought it was a neat, very intuitive example so it would be worthwhile pointing out
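    That comparison can be written out numerically. All the numbers below are made up purely to illustrate the structure of Bayes' theorem (the prior on an analysis error and the likelihood of extreme p-values given an error are assumptions, not estimates from any real data):

    ```python
    # H1 = "the two p > 0.9999 values are random happenstance (null tests)"
    # H2 = "there is an error somewhere in the statistical analysis"

    prior_error = 0.01             # assumed prior: 1% of analyses contain such an error
    prior_chance = 1 - prior_error

    # Likelihood of observing two p > 0.9999 under each hypothesis
    lik_given_chance = 1e-4 ** 2   # two independent null tests: 1/100,000,000
    lik_given_error = 0.5          # assumed: errors often push p-values to extremes

    # Bayes' theorem: P(H2 | data) = P(data | H2) P(H2) / P(data)
    posterior_error = (lik_given_error * prior_error) / (
        lik_given_error * prior_error + lik_given_chance * prior_chance
    )
    print(f"P(error | data) ~ {posterior_error:.6f}")
    ```

    Even with a modest 1% prior on an error, the huge gap between the two likelihoods pushes the posterior for "error" to near certainty, which matches the intuition in the thread.
    
    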
     
    Peter Trewhitt likes this.
  12. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    2,068
    Thanks, basically all I know is that Bayes' theorem incorporates prior knowledge about how likely something is to occur, even before running the experiment. I hear the word often and it seems interesting. So many things I want to learn about and too little energy and time, but it's in the queue!
     
    Peter Trewhitt and jnmaciuch like this.
  13. Kronos

    Kronos Established Member

    Messages:
    7
    "having a high p-value is indeed evidence in favor of the null hypothesis, because a high p-value is more likely to occur if the null hypothesis is true than if it is false."

    I have a problem with that statement (if the "evidence" is supposed to be meaningful evidence).

    This is from a well known consensus paper:

    https://www.tandfonline.com/doi/epdf/10.1080/00031305.2016.1154108?needAccess=true
     
    Turtle, Trish and Peter Trewhitt like this.
  14. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    2,068
    That does make sense, thanks for that paper.
     
    Peter Trewhitt likes this.
  15. Kronos

    Kronos Established Member

    Messages:
    7
    It really is a rabbit hole.

    In my opinion, the best way to get a deep understanding is literally to "do the math" from the beginning and forget the intuition:
    write it down mathematically and deduce what you want via known theorems (briefly put).
    Unfortunately I can't do that kind of deep thinking anymore due to symptoms (hello 24/7 severe headache).
    And testing the way medicine needs it is not common in my domain (physics).
     
    Last edited: Apr 23, 2025 at 2:16 PM
    Peter Trewhitt, Utsikt and forestglip like this.