This is part 2 in a series on Science in Practice. See part 1 here.
As I said in part one, I’m gathering my thoughts for a piece, Science in Practice, that I’m writing for Hayes & Hofmann’s forthcoming book on Core Competencies. I’m starting with problems, and here’s the third problem on my list that an evidence-based therapist faces: An evidence-based therapist must rely on case formulation and clinical judgment to guide decision-making.
Evidence-based practice calls for “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. This practice means integrating individual clinical experience with the best available external clinical evidence from systematic research.”1
“For any particular case of CBT practice, formulation is the bridge between practice, theory and research. It is the crucible where the individual particularities of a given case, relevant theory and research synthesize into an understanding of the person’s presenting issues in CBT terms which then informs the intervention.”2
But consider two cracks in this crucible of decision making that make things hard for an evidence-based therapist. First, case formulation has a meager evidence base.2 Kuyken’s dated but on-point review3 still seems the fairest summary of the state of the science:
- “The evidence for the reliability of case formulation is supportive of descriptive but not inferential hypotheses”
- “The evidence for the validity of case formulation is very limited but promising”
- “There is no compelling evidence that CBT formulation enhances therapy processes or outcomes.”
- “The evidence for the acceptability and usefulness of formulation is mixed.”
Second, clinical judgment is prone to serious weaknesses, and few of us have the tools, routines, and environment to compensate. Kahneman’s book, Thinking, Fast and Slow4, helps us understand these weaknesses via dual-processing theory. We have two modes of processing information: System 1, a fast, associative, low-effort mode that uses heuristic shortcuts to simplify information and reach ‘good enough’ solutions, and System 2, a slow, rule-based mode that relies on high-effort systematic reasoning.
System 1 heuristics help us be fast and frugal. We quickly limit focus to one aspect of a complex situation. But this leaves us prone to a multitude of perception and reasoning biases and errors. In Kahneman’s rendition of dual-processing theory, because the systems are considered hierarchical and discrete, the more rational, conscious System 2 constrains the irrational, unconscious System 1 to save us from these biases and errors.
But based on the decision scientists I’ve been reading, the data suggest these systems are integrated, not discrete, with both prone to “motivated reasoning”5. That is, we are adapted such that, out of awareness, we tend to access, construct, and evaluate information to serve our goals or ends. As Kahan6 puts it:
“if relatively effortless heuristic reasoning generates the result that is congenial to the extrinsic goal or interest, one will go no further. But if it doesn't -- if the answer one arrives at from a quick, impressionistic engagement with information frustrates that goal -- then one will step up one's mental effort, employing systematic (Kahneman's "System 2") reasoning.”6
Both our fast and our slow thinking are motivated. In fact, being skilled at System 2 thinking makes you better able to fend off or construe disconfirming evidence to fit your goals; it does not make you less biased (e.g., Kahan7).
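Kahan’s account can be read as a simple control flow: a fast heuristic pass produces an answer, that answer is checked against one’s goals, and effortful reasoning is recruited only when the easy answer is uncongenial. Here is a toy sketch of that flow; every function name, cue, and rule below is hypothetical and purely illustrative, not a model from the cited papers:

```python
# Toy model of motivated reasoning across both systems (illustrative only).
# "Congenial" here means the answer fits the reasoner's prior goal.

def system1(evidence):
    """Fast heuristic guess: just echo the most salient (first) cue."""
    return evidence[0]

def system2(evidence, goal):
    """Slow, effortful reasoning -- but still motivated: it searches the
    evidence for any item that can be construed to fit the goal."""
    for item in evidence:
        if item == goal:
            return item  # skilled reasoners are better at finding this
    # No congenial construal available: fall back to the majority cue.
    return max(set(evidence), key=evidence.count)

def motivated_judgment(evidence, goal):
    answer = system1(evidence)      # quick, impressionistic pass
    if answer == goal:
        return answer               # congenial: go no further
    return system2(evidence, goal)  # uncongenial: step up mental effort

# The goal-consistent cue is buried, so System 1 misses it and
# System 2 is recruited -- and finds a way to the congenial answer.
print(motivated_judgment(["risk", "safe", "risk"], goal="safe"))  # -> safe
```

The point the sketch makes is Kahan’s: extra effort is triggered by goal frustration, not by a neutral accuracy check, so System 2 here serves the goal rather than correcting System 1.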
Because slow thinking does not solve the problem of errors, Kahan argues for the utility of training both System 1 and System 2 to produce "reliable, preconscious apprehension of the phenomena that merit valid analytical processing.”8 He points to Margolis’s work on pattern recognition9.
“The ability to perform the valid conscious reasoning that consists in making valid inferences from observation, and the experience of doing so regularly, are what calibrate unconscious processes, and train them to select some for the attention of System 2, which is then summoned to attend to them.”
“(h)ow well we perform pattern recognition, for Margolis, will reflect the contribution of conscious, algorithmic types of reasoning. The use of such reasoning (particularly in collaboration with experienced others, who can vouch through the use of their trained pattern-recognition sensibilities that we are arriving at the “right” result when we reason this way) stocks the inventory of prototypes and calibrates the unconscious mental processes that are used to survey and match them to the phenomena we are trying to understand.”8
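Margolis’s picture, as Kahan summarizes it, amounts to a feedback loop: deliberate System 2 analysis, ideally vouched for by experienced others, stocks an inventory of prototypes, and fast recognition against that inventory then flags which new cases merit analytic attention. A minimal sketch of that loop; the function names, the example case string, and the supervisor flag are all hypothetical illustrations, not anything from the cited sources:

```python
# Toy prototype-calibration loop: deliberate analysis stocks the
# inventory that fast recognition later draws on (illustrative only).

prototypes = set()  # patterns learned through prior deliberate reasoning

def recognize(case):
    """Fast, preconscious pass: flag cases matching a known prototype."""
    return case in prototypes

def deliberate_analysis(case, supervisor_confirms):
    """Slow System 2 work. A result vouched for by an experienced
    other calibrates recognition by adding the pattern to the inventory."""
    if supervisor_confirms:
        prototypes.add(case)
    return supervisor_confirms

# First encounter: no prototype matches, so the case is escalated to
# deliberate analysis, and a supervisor confirms the conclusion.
case = "avoidance maintaining panic"
if not recognize(case):
    deliberate_analysis(case, supervisor_confirms=True)

# Next encounter: the trained pattern is recognized immediately.
print(recognize(case))  # -> True
```

The design point is the direction of training: conscious reasoning does not replace fast recognition; it calibrates it, so that the next fast pass selects the right cases for System 2’s attention.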
Now this helps me start to see the shape of the solutions I need as an evidence-based therapist!
To sum up the problem list for the evidence-based therapist I’ve got so far:
Problem 1: As much as I believe a completely structured application of an evidence-supported manual to its specified disorder would probabilistically give my client the best chance of a good outcome, the decision branching in even uncomplicated cases rapidly exceeds the manual’s ability to prescribe or guide decisions. As I try to solve this problem, I run into Problem 2: the difficulty and selection biases of finding and sorting through a fire hose of research for the best current evidence to apply. I also stand humbly facing Problem 3: my reliance on flawed clinical judgment and case formulation, the miraculous crucible where I flexibly resolve Problems 1 and 2, is cracked by weak evidence and by the errors and biases that heuristic shortcuts and motivated reasoning produce. Completely structured doesn’t work; completely flexible doesn’t work. What would actually help me is a systematic way to train pattern recognition.
Said differently, structured flexibility looks promising. More on that next.
By the way, it’s outside the scope of this post, but there is completely fascinating science on decision making, science literacy, and socio-scientific decisions—i.e., using science to inform policy and public discussion. Seriously fascinating. Check out10 below for 3 favorites I’ve found so far if you’d like to read down that rabbit hole of goodness.
1 Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn't. BMJ, 312(7023), 72–73.
2 NOTE: I found one line of empirical work to operationalize and build the evidence base for systematic case formulation (Padesky, Kuyken, & Dudley, coding manual: http://padesky.com/pdf_padesky/CCCRS_Coding_Manual_v5_web.pdf; and Kuyken, W., et al. (2015). Assessing competence in collaborative case conceptualization: Development and preliminary psychometric properties of the Collaborative Case Conceptualization Rating Scale (CCC-RS). Behavioural and Cognitive Psychotherapy, 1–14.)
I am still searching. But this reflects Problem 2: a fairly thorough Google Scholar search yielded little. Should I as a practitioner run with that conclusion after a reasonable effort? Since I’m now searching for a scholarly piece, I’ll beat the bushes some more, but it does highlight the information-finding challenges practitioners face.
3 Kuyken, W. (2006). Evidence-based case formulation: Is the emperor clothed? In Tarrier, N., & Johnson, J. (Eds.), Case Formulation in Cognitive Behaviour Therapy: The Treatment of Challenging and Complex Cases (pp. 12–35).
4 Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
5 Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498. doi:10.1037/0033-2909.108.3.480
7 Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8, 407–424. http://journal.sjdm.org/13/13313/jdm13313.pdf
10 NOTE: Here are 3 favorites so far from the wide reading I am doing.
· Beautiful scholarship, big picture: Sinatra, G. M., Kienhues, D., & Hofer, B. K. (2014). Addressing challenges to public understanding of science: Epistemic cognition, motivated reasoning, and conceptual change. Educational Psychologist, 49(2), 123-138.
· Very cool Model-Evidence Link Diagrams as a way to teach scientific reasoning: Lombardi, D., Sibley, B., & Carroll, K. (2013). What's the alternative? Using model-evidence link diagrams to weigh alternative models in argumentation. https://www.researchgate.net/profile/Doug_Lombardi/publication/258473199_Whats_the_Alternative_Using_Model-Evidence_Link_Diagrams_to_Weigh_Alternative_Models_in_Argumentation/links/0a85e52fd21d24f867000000.pdf
· Lay of the land, learning terms: Stanovich, K. E., West, R. F., & Toplak, M. E. (2010). Individual differences as essential components of heuristics and biases research. The Science of Reason: A Festschrift for Jonathan St BT Evans, 355.