We currently measure how healthcare providers practice medicine and what clinical decisions they make, but how do we determine WHY? Determining effect sizes by merging outcome data from groups of study participants obscures the reality that clinical interventions rarely work for everyone under all circumstances. In large part, clinicians are unable to face that reality because they lack information about the biological, experiential, and environmental sources of that heterogeneity, and about its impact on clinical responses. When an intervention known to be effective in study populations fails to work in a particular patient, clinicians therefore have little choice but to move to alternative interventions on the basis of pragmatic rules of thumb (heuristics).

Assessing knowledge, or even behavior, by evaluating responses to an isolated clinical presentation forces participant responses into artificially "finite" silos, not unlike the inclusion/exclusion criteria of industry clinical trials: neither real-world nor relevant to clinical practice. Because biological and social heterogeneity among trial participants interferes with the detection of true cause-effect relationships, controlled trials are carefully designed to minimize or eliminate heterogeneity's effects, primarily by aggregating the outcomes of many participants rather than drawing inferences from outcomes in individual persons. This intentional "heterogeneity blindness" in controlled trials has brought with it a progressive shift from qualitative documentation of the concrete to quantitative assessment of the abstract: sophisticated statistical inferences about summary effect sizes in average patients.

How do we create educational programs that measure the appropriate outcomes? Do we want to limit our discussions to "educational" outcomes, or is it possible to link learning objectives to meaningful patient-centric health outcomes?