Initially, I intended to write this as a response in another post, but I think it deserves its own post now. These thoughts have followed me for quite some time. I'd be interested in hearing your views on this.
I often bring this up because I can't accept this kind of logic, as I described it in the context of vague antibody tests used as controls, or in the context of psychosomatic illness and depression. Diagnose someone with a psychosomatic disorder or depression, and therapy will most likely lead to improvement simply due to the general nature of psychotherapy, just as you could diagnose someone with "pain syndrome" and give them painkillers. If a medical doctor in applied medicine builds their diagnostic experience on verification through therapeutic improvement, they will become more and more convinced that these kinds of diagnoses are accurate and true. This is the same reason homeopaths are so convinced of their globules: every person has some kind of psychosomatic response.
In computer science and maths, this is what we would call proof by induction. You can invent any kind of partial theory, diagnosis, mechanism, or logic, and it will work in an inductive proof given the right assumptions (making it complete). If physiological therapy had never been invented and psychological therapy were the only thing we knew and could imagine, we could define every disease and therapy purely by psychological discriminators and would still see a certain degree of success within this realm. We could live in such a reality, and its medicine would laugh at you for bringing up something like physis, just as modern medicine would have been laughed at in the medieval age (and potentially considered witchcraft).
Since the Enlightenment, we have fortunately developed empirical thinking. Yet this thinking in medicine is similarly imperative as it was during the medieval age, in that diseases are still classified very generally and symptom-first. The medieval age did not really lack sophisticated observation either. Apart from empirical thinking, what we still lack today, in my view, is functional thinking. Therapy is not primarily a consequence of a diagnosis and of symptoms; diagnosis is just a necessary step so that therapy can happen. Symptom discriminators can be part of a diagnosis, but they do not have to be. We all have one disease, and that is aging. Every healthy person will eventually develop a disease, just like an HIV-positive person can be perfectly healthy for many years until the virus becomes active.
For that reason, the logic of symptom -> diagnosis -> therapy is outdated and belongs to history, IMHO. More and more smart home diagnostic technologies are becoming available, so the economic argument for this way of thinking is no longer very relevant. Soon you will be able to monitor most of your essential biomarkers at home every day. Blood no longer even has to be drawn by physicians, thanks to dried blood spot tests. None of these technologies are very precise yet, but precision isn't their point. Their point is to give you indications and likelihoods so that you can respond by adjusting your everyday lifestyle. As another example, the potential in cancer prevention no longer derives from the precision of individual tests but from the interval of testing.
This means that therapy should no longer be considered a consequence. We all age and carry risks for certain diseases, and a variety of diagnostic measures can assess these risks. The broader the diagnostic coverage and the more often it is performed, the more precisely your status can be determined. But you will eventually get sick, and the risk can already be estimated at birth. If the most likely cause of death can already be determined, what is the point of waiting for symptoms? We all perform therapy every day just by the choices we make in lifestyle, diet, supplements, and medication. Some people might associate this with orthomolecular medicine, but it is much more than that.
If disease definitions are only a means to an ongoing set of therapeutic measures (i.e., lifestyle, diet, etc.), then the methodology of defining them has to change. Diseases are then best defined by projection from the complete set of available therapies. If you are choosing a therapy by its pathophysiological mechanism, then disease discrimination will perform best when the participating pathophysiological markers define it. You no longer have "rheumatism" then, because rheumatism neither tells you the mechanism nor narrows down the choice of therapy very well. Today we know mechanisms such as the HLA-B27 genotype and other dysfunctions of the cytokine and chemokine systems, and these dysfunctions are shared among many autoimmune diseases. The type of dysfunction is the best discriminator for the choice of therapy. There is no reason anymore to call something rheumatism, or Hashimoto's, or any of the many other diseases that share so many similarities and overlaps. Symptomatic diagnosis will remain relevant only in the context of symptomatic therapy. You would still have arthritis/"joint pain", or whatever term suits a selectable symptomatic therapy.
If you move to this kind of logic, there is another potential benefit. At the moment, disease classification is mostly black-and-white thinking: either someone has a disease or not. In some parts of medicine we fortunately see progress, for example in that type 2 diabetes now has impaired glucose tolerance recognized as its precursor (with widespread awareness), which leads to interventions such as metformin prescriptions and dietary changes. However, this is still black, gray, and white. The field of genetics is the most advanced in this respect: despite the categorical nature of DNA, assessment is done statistically. A mutation never guarantees a disease, even if an identical twin who tested positive for the same rare variant went on to develop it; a single random mutation can still spare someone. Risks cannot be assessed very well as likelihoods either, because, as just described, there is no perfect comparison group. Your risk can only be determined by comparing you to the rest of the population, to your ethnicity, or to other subgroups such as therapy subgroups.
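To make the subgroup-comparison point concrete, here is a toy sketch of risk as a ratio against reference groups rather than a yes/no verdict. All group names and numbers are invented for illustration, not real epidemiological data.

```python
# Hypothetical incidence data: (cases, group size) per reference group.
# Every value here is made up purely to illustrate the idea.
cases = {
    "population": (120, 100_000),
    "same_ethnicity": (300, 100_000),
    "carriers_of_variant": (900, 10_000),
}

def incidence(group):
    """Observed incidence rate within a reference group."""
    n_cases, n_total = cases[group]
    return n_cases / n_total

# Relative risk of a variant carrier versus the general population:
rr = incidence("carriers_of_variant") / incidence("population")
print(round(rr, 1))  # a ratio, not a diagnosis
```

The output is a continuous number whose meaning depends entirely on which comparison group was chosen, which is exactly the limitation described above.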
If therapy is considered omnivalent, categorical thinking becomes obsolete. A set of ongoing diagnostics would regularly inform you about your risks and the numeric degree of every disease that could eventually become more or less symptomatically apparent. The same set of ongoing diagnostics can inform the persistent therapeutic choices you make every day. This projection is best performed by machine learning algorithms, not by medical doctors' interpretation and personal experience. Medical doctors can be of much more use in research, so that machine learning gets more input and becomes more precise.
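As a minimal sketch of the kind of projection meant here: map a continuous biomarker profile to the therapy whose responder profile it most resembles. The therapy names, marker names, and centroid values below are all invented; a real system would learn them from study data.

```python
import math

# Hypothetical "responder centroids" a model might learn from studies.
# Neither the therapies nor the numbers are clinical facts.
centroids = {
    "anti_tnf_biologic": {"crp": 8.0, "il6": 20.0, "tsh": 2.0},
    "thyroid_hormone":   {"crp": 1.0, "il6": 3.0,  "tsh": 9.0},
}

def recommend(profile):
    """Return the therapy whose responder centroid is nearest (Euclidean)."""
    def dist(centroid):
        return math.sqrt(sum((profile[k] - centroid[k]) ** 2 for k in centroid))
    return min(centroids, key=lambda therapy: dist(centroids[therapy]))

print(recommend({"crp": 7.0, "il6": 18.0, "tsh": 2.5}))
```

Nearest-centroid matching stands in here for whatever learned model would actually do the projection; the point is only that the input is a marker vector, not a named disease.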
This will all take some time until the technologies are mature enough for such a lifestyle to become a reality, but you have to start with the paradigm shift. Without it, this won't ever become possible, because it requires starting from zero: regaining all the experience and redoing all the studies within the new paradigm. Currently, the success of a therapeutic choice is measured within the imperative paradigm, usually by double-blinded controls (i.e., categorical differentiation). This has to be repeated in numerical terms to have any meaning within the new paradigm. An intermediary generation of studies could draw inferences in both paradigms, and the first set of such studies could compare the success rates of the two. Statistically, there is only one possible winner, though. Early stages of the common pathological diseases could be detected far earlier and eliminated. Certainly, this will happen naturally once smart diagnostics and machine learning are omnipresent, but it could happen far earlier if scientific motivation is put behind it.