Debugging the Doctor Brain: Who's teaching doctors how to think?

SNT Gatchaman

https://bessstillman.substack.com/p/debugging-the-doctor-brain

But how does a doctor know if their models are accurate or adequate?

Answering that question really means asking whether the education student doctors receive both adequately teaches the fundamentals and teaches resident physicians how to think about and evaluate their own thought processes. And Dan Luu’s “Why don’t schools teach debugging” got me thinking about the way science and medical education universally teaches the fundamentals: badly.

In medicine, we often mistake the speed of initial understanding for a student’s capacity for mastery. This expectation starts in pre-med courses. Organic chemistry (“O-chem”) is the big pre-med “weed out” course because it both requires high-volume memorization and is one of the first times students have to learn a new way of thinking.

A doctor’s foundational clinical mental models are built during residency, but the apprenticeship model of residency has flaws. An attending physician (an attending is a physician who has completed residency) may be a skilled clinician but a poor educator, or not have the time, patience, or inclination to educate residents.
 
There are a lot of misaligned incentives in resident education: attendings are judged by metrics of speed and patient satisfaction, residents want to learn and be seen as “good” so they can graduate and be recommended for a job, and hospitals want more patients to be seen, faster.

How can we tell when the resident is quick and right via luck or guessing—and when they’re quick and right because they understand? We can’t, really, not until the right situation presents itself. And the truth is, doctors can get away with a lot of algorithmic thinking before a patient presents who is both complex in unexpected ways and in ways that might kill them if you get it wrong.

That’s what separates the physicians from many other members of the medical team: the training to get away from the algorithm and use a deep understanding to come up with novel solutions. That’s also why algorithmic thinking can be so dangerous. So many patients never bother to read the flowcharts before they arrive, or to present only with the allowed symptoms.

Skill can be confused with speed of mastery, and competence can be confused with confidence, because we want them to be. What I keep coming back to is that so much of science and medical training comes down to perception of skill, as opposed to actual skill.

In The Name of the Rose, William of Baskerville is a monk but also a proto-detective in the mold of Sherlock Holmes, and when he’s trying to solve a series of increasingly bizarre murders, he tells his sidekick, Adso, that “we mustn’t dismiss any hypothesis, no matter how farfetched.” And so it often is in medicine. Being okay with uncertainty will make both residents’ and attendings’ lives better, and, more importantly, patients’ lives safer.

Incentivizing deep learning and deep thought means reducing the time pressure on both attendings and residents. If hospitals valued people over profits, they’d hire more attendings to both see patients and supervise, spreading both the patient care and the educational workload. The existing argument that this is cost-prohibitive is laughable.

Until incentives align, and the hospitals reward and pay physicians for doing the work of educating in addition to their clinical work; until teaching attendings have adequate training on how to educate; until hospitals are willing to staff adequately so there’s time to teach, the system will remain broken.
 
I'm a huge physics nerd. It fascinates me. I consume so much pop physics stuff on a regular basis. Can't do the math, but I love it anyway.

And one fundamental principle in physics is that theory is useless until it is confirmed experimentally. Experiments are the key to everything. And medicine can't do them. Clinical trials are the closest they can get and they are so incredibly mediocre and inaccurate compared to physics experiments, literally many orders of magnitude off.

Thus they have no real ways of validating their models. Only experiments where all other things are equal can do that. It would help so much if they could accept this. It would invalidate the whole of psychosomatic medicine, and a lot of models. And it's probably too much to bear. But science needs experiments, and medicine is the worst discipline to do those, so they're stuck with having to do things differently, but choose not to.

For years we heard the same claptrap from the ideologues about how they're not saying this, the trials prove it. Of course they said it first, for many years. And since the trials have been largely invalidated they're back to simply asserting it, while pretending that the trials still hold. This is the real problem: sticking to what's wrong. Being wrong is fine. Sticking to it is unacceptable. Make a mistake and learn from it. But don't repeat it. Not even once.

And we face much of the same as programmers, where debugging is very close to an investigative-diagnostic process. That was always one of my biggest strengths. You need accurate, immediate feedback. If it's not immediate, it at least needs to be accurate. And seeing how all-over-the-place clinical trials are, medicine is very poorly equipped to debug, made even worse by the stubborn refusal to listen to a damn thing patients tell them, most of the time anyway.

@rvallee will be well familiar, but for non-programmers, there's a term called "caveman debugging" (perhaps now more inclusively termed "cave-dweller debugging"). Crude but effective, leading to the programmer exclaiming "ugh, me done bad", which is why I think the term was coined.

The idea was simply for the program to do its thing, but to also print out what it was doing and when into a text file log, so you can see where it was and what it was doing leading up to when it crashed or misperformed.

"The most effective debugging tool is still careful thought, coupled with judiciously placed print statements."

— Brian Kernighan, "Unix for Beginners" (1979)
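
For the non-programmers, here is a minimal sketch of the idea in Python; the function, the bug, and the log file name are all made up for illustration and aren't from the original post.

# "Caveman debugging": sprinkle print statements (or writes to a log file)
# through the code so you can see what it was doing just before it went wrong.

def average(values):
    total = 0
    for v in values:
        total += v
        # Append each step to a text-file log so the trail survives a crash.
        with open("debug.log", "a") as log:
            log.write(f"added {v}, running total is {total}\n")
    # The bug: dividing by a hard-coded 10 instead of len(values).
    return total / 10

print(average([2, 4, 6]))  # prints 1.2; the log shows the running totals were fine,
                           # so the fault must be in the final division

Reading the log narrows the search: the running totals are right, so the mistake has to be in the last line.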

Patients have this facility built-in. Nearly all of them will tell the doctor what is happening to them. The problem is on the medical side where the "programmer"/doctor often simply refuses to even read the logs to correct their error in coding - or in this case their medical management.

I.e. GET (graded exercise therapy) works roughly like this, in old-fashioned pseudocode —

10 distance = 200
20 go_for_walk(distance)
30 distance = distance + 100
40 if distance > 5000 then print("patient is cured"); exit
50 goto 20


But in reality you've actually got this —

10 distance = 200
20 go_for_walk(distance)
25 print(report_whether_symptoms_better_or_worse)
30 distance = distance + 100
40 if distance > 5000 then print("patient is cured"); exit
50 goto 20


So if you read the log you might see the "program" reporting "I'm getting worse" (even if it started off with "I'm getting better"). So you'd know to add another line to your GET program for safety —

10 distance = 200
20 go_for_walk(distance)
25 print(report_whether_symptoms_better_or_worse)
27 if report_whether_symptoms_better_or_worse == "I'm worse" then print("patient is non-responder - program failure/unsafe"); exit
30 distance = distance + 100
40 if distance > 5000 then print("patient is cured"); exit
50 goto 20
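
For anyone who wants to actually run it, here is a rough Python translation of the same idea; go_for_walk and the patient's report are stand-ins for illustration, not anything from the original post.

import random

def go_for_walk(distance):
    # Stand-in for the intervention: in reality this is the patient
    # telling you how they actually feel after the walk.
    return random.choice(["I'm better", "I'm worse"])

distance = 200
while True:
    report = go_for_walk(distance)      # line 20
    print(report)                       # line 25: read the log
    if report == "I'm worse":           # line 27: the safety check
        print("patient is non-responder - program failure/unsafe")
        break
    distance = distance + 100           # line 30
    if distance > 5000:                 # line 40
        print("patient is cured")
        break

Same structure, same safety check: if the log says "I'm worse", the program stops instead of blindly incrementing the distance.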
 
A few lifetimes back I dabbled in writing processor code in hex. Frequent printouts of the code were a big help in tracking what was going on. So much easier to keep track on paper than in your head.

WOW :jawdrop: One word in these three sentences on this thread in particular, combined with watching other discussions recently, has led me to what I believe is the last link in the chain that is the events of 2023, which explains the situation we are now in. That word is 'code'. It impacts Singapore, Australia, Canada, USA, Ireland and the UK, which share similar laws. I now have to read through a 300 page document so I may be out of the global tent for some time! But I think it leads to a credible, full and enforceable argument to counteract the push back which people with ME/CFS are experiencing across several areas.

The art of patients listening to other patients evidenced here in practice. I'll be back. Thank you @Sean
 
LOL. As a former programmer myself many, MANY moons back, this is a good analogy. No need to possess the pesky logic to write a program that works correctly; instead, just stick their own beliefs in it and write EXIT.
 