Regarding Long COVID (LC) and ME/CFS research, I often see researchers claim that current methods are not sensitive enough to detect certain problems, yet that their new study, using methods very similar to those hundreds of others have used before, will now show differences because its methods are more sensitive.
This applies to researchers arguing that inflammatory markers must be measured with SIMOA rather than ELISA assays, or that viral antigens must be detected with SIMOA rather than ELISA, as well as to a whole range of other technologies and methodologies. In the viral persistence field especially, this seems to be a major hope.
It seems to me that past medical breakthroughs have mostly come from novel ideas and novel technological inventions rather than from advances that merely shift a measurement by a decimal place. I would think that in many cases such an advance would not be strictly necessary if one could circumvent precision problems simply by increasing the sample size.
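The sample-size intuition can be sketched with a back-of-the-envelope calculation (all numbers here are illustrative assumptions, not real assay specifications): if a less sensitive assay merely adds random measurement noise, the standard error of a group-mean difference still shrinks as 1/sqrt(n), so a larger cohort can recover the same statistical precision. The important caveat is that this only holds for random noise above the detection floor; it cannot recover analytes that sit entirely below an assay's limit of detection.

```python
# Sketch (hypothetical numbers): how much larger must the sample be for a
# noisier assay to match the precision of a more sensitive one?
import math

def se_of_mean_diff(biological_sd, assay_sd, n):
    """Standard error of the difference between two group means,
    assuming per-sample variance = biological variance + assay noise variance."""
    total_var = biological_sd**2 + assay_sd**2
    return math.sqrt(2 * total_var / n)

# A precise assay (SIMOA-like, assumed noise SD 0.2) with n = 50 per group:
se_precise = se_of_mean_diff(biological_sd=1.0, assay_sd=0.2, n=50)

# Smallest n at which a noisier assay (ELISA-like, assumed noise SD 1.0)
# matches that standard error: solve 2 * total_var / n <= se_precise**2.
n_needed = math.ceil(2 * (1.0**2 + 1.0**2) / se_precise**2)
se_noisy = se_of_mean_diff(biological_sd=1.0, assay_sd=1.0, n=n_needed)
```

With these made-up variances, roughly doubling the per-group sample size matches the precise assay's standard error, which is why "just recruit more patients" is a fair question to ask of sensitivity-based study pitches.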
I typically cannot judge whether there is much truth to such hopes or whether this is simply the kind of claim that easily attracts grants. Of course, I don't discount that small advances accumulated over decades can lead to large jumps and breakthroughs, nor that the majority of research is incremental rather than revolutionary.
I'm just wondering whether there are sensible historical accounts, or reasonable arguments from pathology, for putting much hope in a study that looks at the same things many earlier studies have examined but does so with a slightly better tool.