Advances in psychiatric treatment (2009), vol. 15, 72–79 doi: 10.1192/apt.bp.107.005298
ARTICLE
Niall Crumlish is Lecturer in Psychiatry in Trinity College, Dublin. His primary research interests are early psychosis, insight and transcultural psychiatry. Brendan D. Kelly is Senior Lecturer in Psychiatry at University College Dublin. His research interests include the epidemiology of psychosis and relationships between mental illness and social factors.
Correspondence Dr Niall Crumlish, Jonathan Swift Clinic, St James’s Hospital, James’s Street, Dublin 8, Ireland. Email: niall.crumlish@tcd.ie
Niall Crumlish & Brendan D. Kelly

Summary
Over the past decade, the study of error in medicine has expanded to incorporate new insights from cognitive psychology, generating increased research and clinical interest in cognitive errors and clinical decision-making. The study of cognitive error focuses on predictable errors in thinking that result from the use of cognitive shortcuts or ‘heuristics’. Heuristics reduce the time, resources and cognitive effort required for clinical decision-making and are a feature of mature clinical thinking. Heuristics can also lead to bias and must be used with an awareness of their weaknesses. In this article, we describe heuristics commonly used in clinical decision-making and discuss how failure of heuristics results in cognitive error. We apply research findings on decision-making in medicine to decision-making in psychiatry and suggest directions for training and future research into cognitive error in psychiatry.
Declaration of interest
None.
To err is human, and medicine is no exception (Horton 1999). In the USA, Kohn and colleagues (1999) reported that at least 44 000 deaths a year resulted from medical error; this statistic generated alarm not only among patients and the clinical community, but also in the Clinton White House (Pear 1999). As a result, subsequent years have seen substantially increased interest in medical error in both scientific (Leape 2005) and popular literature (Gawande 2002). Indeed, the field has grown to the point that sub-specialties in medical error research have opened up, including medication error, diagnostic error and cognitive error.
In How Doctors Think, Professor Jerome Groopman, a Harvard haematologist and writer with the New Yorker, has defined cognitive errors in medicine simply, as ‘errors in thinking that physicians can make’ (Groopman 2007: p. 23). He argues that errors in thinking, rather than errors of technique, form the majority of mistakes in modern medicine, i.e. there is a ‘cascade of cognitive errors’ that results in a clinical error (p. 260). Groopman catalogues common cognitive errors in medical practice and outlines practical strategies for acknowledging and correcting them. How Doctors Think generated many enthusiastic reviews (Crichton 2007), of which few drew attention to the footnote on page 7: ‘I quickly realised’, wrote Groopman, ‘that trying to assess how psychiatrists think was beyond my abilities’.
The omission of psychiatry from How Doctors Think, and for this reason, was arguably unnecessary: the cognitive style of psychiatrists is surely not so esoteric as to be un-understandable. We suspect that Professor Groopman would have found psychiatrists to be like any other doctors, had he applied the literature on cognitive error to psychiatry. In this article, we do just that.
Cognitive error and heuristics
The study of cognitive error in medicine finds its roots in the literature on cognitive psychology from the past four decades (Redelmeier 2001). The key point of departure was the work of Amos Tversky and Daniel Kahneman, two psychologists whose studies of decision-making under conditions of uncertainty won the Nobel Prize for Economics in 2002. In a seminal paper for the journal Science, they discussed reliance on heuristics in decision-making (Tversky 1974). Heuristics are cognitive shortcuts that allow decisions to be reached in conditions of uncertainty. Many individual heuristics are identifiable (Table 1), but what they have in common is that they reduce the time, resources and cognitive effort required to make a decision (Croskerry 2002). The use of heuristics can be contrasted with the hypothetico-deductive method of decision-making, in which all necessary evidence for and against any potential course of action is carefully examined and weighed. The latter assumes no bias on the part of the decision maker, and optimal time and resources.
Heuristics are useful, particularly when time and information are limited. Indeed, Groopman (2007: p. 36) argues that heuristics are ‘the foundation of all mature medical thinking’. However, they are prone to bias. Decisions based on heuristics are more likely to be wrong than decisions made using hypothetico-deductive methods (Croskerry 2003). Tversky & Kahneman noted that reliance on heuristics leads to cognitive bias and ‘severe and systematic errors’ (Tversky 1974). Heuristics that result in error are called ‘failed heuristics’ (Croskerry 2002). In this article, we refer to error resulting from failed heuristics as cognitive error.
Why should medical practitioners be prone to cognitive error?
Heuristics are likely to be used in situations of high complexity or uncertainty (Tversky 1974), when there is a high cognitive load or a high density of decision-making (Croskerry 2002) and when time for individual decisions is short (Groopman 2007). These conditions are most obviously met in emergency medicine (Croskerry 2002), but in any branch of medicine, time is inadequate (Davidoff 1997) and cognitive effort is high (Schwarz 2005), while decisions are complex and must be made despite inherent uncertainty (uncertainty that is rarely acknowledged; Coles 2006).

Examples of cognitive error in medicine

The list of potential cognitive errors is long, with 30 failed heuristics described in an influential paper on error in emergency medicine (Croskerry 2002). Here we discuss cognitive errors in medicine that may arise from the ten heuristics listed in Table 1. They include those discussed by Groopman (2007), with others that recur in the literature (Tversky 1974; Redelmeier 2001; Croskerry 2002, 2003).

TABLE 1 Ten heuristics, with strengths and weaknesses of each

Representativeness
  Strength: Quick diagnosis, action through pattern recognition
  Weakness: Non-prototypical variants may be missed

Availability
  Strength: Events that come to mind easily are common and should therefore be considered
  Weakness: Events that do not come quickly to mind are not considered

Anchoring
  Strength: First impressions often give valuable information
  Weakness: It is difficult to move from incorrect first impressions

Confirmation bias
  Strength: None
  Weakness: Can compound the failure to adjust from initial impressions (anchoring)

Search satisfying
  Strength: Saves the time and effort of a search for comorbidity, as often none exists
  Weakness: Comorbidity, which is particularly common in psychiatry, is missed

Diagnosis momentum
  Strength: None
  Weakness: Inaccurate diagnostic labels persist, potentially resulting in incorrect treatment and stigma

Commission bias
  Strength: Avoids omission bias; optimal information is not always available in the real world
  Weakness: Adverse effects of unjustified treatment may violate the ethic of primum non nocere

Affective heuristic
  Strength: Clinicians should be sympathetic towards patients
  Weakness: Unpleasant diagnoses or interventions may not be adequately considered

Playing the odds
  Strength: Assumption of benign diagnosis or positive outcome is usually correct
  Weakness: Negative diagnoses or outcomes may not be adequately considered

Fundamental attribution error
  Strength: Not applicable
  Weakness: Patients may be inappropriately blamed and judged, to the detriment of their care

Representativeness

Representativeness occurs when thinking is guided by a prototype, so that an event is not considered probable unless the presentation is prototypical of it. (In medicine, the event is often a diagnosis.) The representativeness heuristic may be useful when the doctor is confronted with a prototypical presentation: pulmonary embolism can be diagnosed almost without cognitive effort in a patient who presents with pleuritic chest pain of acute onset with dyspnoea following a deep venous thrombosis. A representativeness error may occur when the absence of prototypical features leads to atypical variants being missed: for example, if pulmonary embolism is not considered in the absence of severe pleuritic chest pain. In fact, only 60% of patients over 65 years old who have a pulmonary embolism present with chest pain (Timmons 2003).

Availability

The availability heuristic is seen when a doctor’s assessment of the probability of an event is determined by the ease with which an example comes to mind; a doctor reviewing a patient with headache may overestimate the probability of subarachnoid haemorrhage if they have recently seen such a case. Often, availability is a useful heuristic, as events come easily to mind either because they are common or, if occurring more rarely, serious enough always to be considered as a possibility (e.g. meningitis). An availability error occurs when the probability of an event is overestimated because it comes easily to mind, or underestimated because it does not. In the above example, the doctor’s recent encounter with subarachnoid haemorrhage has no bearing on the likelihood that the current presentation is that of tension headache, migraine or a rarer, potentially serious condition such as temporal arteritis.

Anchoring

Anchoring is the tendency to focus on prominent features of a presentation too early in the decision-making process, to arrive at an early hypothesis and to fail to adjust it in the light of later information. First impressions are often accurate, particularly among clinicians with highly developed pattern recognition skills, but they may be wrong. Tversky & Kahneman demonstrated that adjustments from first impressions are ‘typically insufficient … payoffs for accuracy did not reduce the anchoring effect’ (Tversky 1974); that is, first impressions have
lasting power, even when they are wrong and when correcting them in the light of contradictory information is rewarded.
Confirmation bias
Confirmation bias is the tendency to seek only information that will support rather than refute an initial hypothesis, or to selectively interpret information acquired after the hypothesis is formed in a way that supports it. The bias here is evident: a hypothesis that is true can withstand attempts to disprove it and should be subjected to such attempts. Confirmation bias is always an error, as it aims simply to avoid the cognitive effort that would be required to revise an initial impression, regardless of whether or not the hypothesis is correct.
Search satisfying
Search satisfying may follow on from anchoring and confirmation bias. Search satisfying is the tendency to stop the diagnostic process once one diagnosis has been made. Even in the event that the first diagnosis is correct, search satisfying may be an error, as comorbid conditions are not considered. Examples are the second fracture in an X-ray or co-ingestants in poisoning (Croskerry 2003).
Diagnosis momentum
Diagnosis momentum occurs when a diagnostic label applied to a patient sticks, whether or not subsequent events confirm the diagnosis. A working diagnosis may become a final diagnosis without any new diagnostic information having been acquired.
Commission bias
Commission bias is the tendency to action rather than inaction, even when the correct course of action is unclear and inaction may be more appropriate. A doctor exhibiting commission bias may decide to institute treatment without adequate information to guide it, believing that it is better to do something than nothing.
The affective heuristic
Affective error occurs when the clinician’s judgements are biased by their emotions or hopes: judgements of likelihood may be based on what the clinician would like to be the case rather than what actually is. A doctor may allow positive feelings towards a patient to influence their clinical judgement: because the doctor wishes the patient well, a symptom may be interpreted benignly when a more ominous interpretation is valid.
Playing the odds
Affective error may combine with the heuristic of playing the odds. The latter is the tendency in ambiguous situations to opt for a benign interpretation, on the basis that benign causes and outcomes are more common than more ominous ones (tension headaches are more common than temporal arteritis). Playing the odds fails when a rare and serious disease similar in presentation to a common benign disease is missed.
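The logic of playing the odds can be made explicit with Bayes’ rule. The sketch below uses entirely hypothetical priors and likelihoods (they are assumptions for illustration, not clinical data): even when an ambiguous finding fits the rare serious condition better, a very low base rate keeps the benign diagnosis far more probable — which is exactly why the heuristic usually succeeds, and why its rare failures are easy to overlook.

```python
# Hypothetical illustration of playing the odds; all figures are assumed,
# not clinical data.

def posterior_benign(prior_benign, lik_benign, prior_serious, lik_serious):
    """Probability that the benign condition explains an ambiguous finding,
    computed by Bayes' rule over two candidate conditions."""
    num = prior_benign * lik_benign
    return num / (num + prior_serious * lik_serious)

# A common benign condition vs a rare serious mimic: the finding is twice as
# likely under the serious condition, yet the benign diagnosis still dominates.
p = posterior_benign(prior_benign=0.99, lik_benign=0.4,
                     prior_serious=0.01, lik_serious=0.8)
print(round(p, 3))  # -> 0.98: right ~98% of the time, wrong ~2% of the time
```

Under these assumed numbers the heuristic is correct roughly 49 times in 50; the fiftieth case is the missed temporal arteritis.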
Fundamental attribution error
The fundamental attribution error is the tendency to attribute someone’s behaviour to their dispositional qualities rather than to environmental or situational factors (Ross 1977). However, people systematically underestimate the extent to which other people’s behaviour is influenced by external factors (Fiske 1991). In medicine, the fundamental attribution error is the tendency to be judgemental and blame patients inappropriately for their illnesses. Classically, it occurs when patients present with symptoms that are in some way precipitated or perpetuated by their own behaviour, for example smokers who present with exacerbations of pulmonary disease or intravenous drug users who present with skin abscesses after injecting. This may have implications for the level of care received, as it may be felt that patients with illnesses that are not of their own making are more deserving of care.
Error in psychiatry
Mistakes in psychiatry can have serious consequences for patients, clinical teams and the wider community (Kapur 2000). However, the literature on error in psychiatry is small (Grasso 2003) and narrow, with most studies focusing on medication. Little has been written on diagnostic error, which was just briefly touched on in the most thorough review of error in psychiatric practice (Nath 2006). Some work has been done on error in predicting forensic risk (Freedman 2001). Other than a novel technical paper on cognition in emergency psychiatry (Cohen 2006), there has been no systematic study of cognitive error in psychiatry. There are, however, reasons why the practice of psychiatry might be prone to error of this type.
Why should psychiatrists be prone to cognitive error?
As noted above, heuristics are likely to be used, with their attendant risk of cognitive error, when there is a high cognitive load and limited time to make decisions, and in situations of complexity
and uncertainty. On the face of it, psychiatry would appear to proceed at a more leisurely pace than emergency medicine. However, general adult psychiatrists frequently make decisions about risk (Holloway 1997). Moreover, psychiatric practice is practically defined by its complexity and uncertainty.
Error in diagnosis
Among the uncertainties of psychiatry are diagnostic and symptomatic uncertainty. It has been argued that psychiatric diagnoses have limited reliability and validity (Read 2004). Psychiatrists have long strived to improve reliability, and DSM–III (American Psychiatric Association 1997) was developed largely for this purpose. Nevertheless, as recently as 2005, Robert Spitzer, who led the development of DSM–III, said that ‘the reliability problem’ was still not solved (Spiegel 2005). Validity in part depends on reliability, and the validity of schizophrenia in particular has been questioned. The observation that two people with no symptoms in common can both be diagnosed with schizophrenia has raised doubts about the validity of schizophrenia as a discernible disease entity (Read 2004).
Diagnosis in part depends on the reliability of individual symptoms. The reliability of symptoms in psychiatry may be limited for reasons such as subjectivity on the part of the diagnostician, with excessive scope for interpretation of symptoms or signs; underreporting of symptoms by patients, with no reliable objective method of identifying unreported psychopathology; and overreporting of symptoms, again, without the possibility of objectively verifying them. Diagnostic reliability also depends on agreement about the degree of severity of symptoms necessary for a clinical disorder to be diagnosed, when symptoms occur on a continuum (e.g. situational anxiety symptoms v. persistent panic, or intermittent ideas of reference v. paranoid delusions). The decision to diagnose a DSM–IV–TR mental disorder, when symptoms are clearly elicited, depends on the psychiatrist’s judgement as to what constitutes ‘clinically significant distress or impairment in social, occupational, or other areas of functioning’ (American Psychiatric Association 2000). Clearly, the degree of impairment judged to be ‘clinically significant’ may vary from psychiatrist to psychiatrist. The psychosocial dimension of diagnosis distinguishes psychiatry from other disciplines and adds to diagnostic complexity; to take a medical example, the diagnosis of hypothyroidism depends on the results of a thyroid function test, not the degree of functional impairment the hypothyroidism appears to cause.
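The ‘reliability problem’ is conventionally quantified as chance-corrected agreement between raters, most often Cohen’s kappa. A minimal sketch follows; the two raters and their ten diagnoses are invented purely for illustration:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance from each rater's label frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two hypothetical raters diagnosing the same ten cases
# (dep = depression, scz = schizophrenia, anx = anxiety disorder).
a = ["dep", "dep", "scz", "scz", "dep", "anx", "dep", "scz", "anx", "dep"]
b = ["dep", "anx", "scz", "scz", "dep", "anx", "dep", "dep", "anx", "dep"]
print(round(cohens_kappa(a, b), 2))  # -> 0.68
```

Here the raters agree on 8 of 10 cases, but after correcting for chance agreement the kappa is about 0.68 — conventionally read as ‘substantial’ but far from perfect, which is the shape of the problem Spitzer describes.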
Error in risk assessment
In addition to diagnosis, there is a high degree of subjectivity and uncertainty in psychiatric decision-making regarding risk: for example, in the frequently taken decision of whether or not to detain a patient under the Mental Health Act after an episode of self-harm. The subjectivity may reside in whether or not ambiguous symptoms are held to be psychotic, or depressive, or neither; whether or not the patient is underreporting symptoms in the hope of being discharged, perhaps to self-harm again; and whether or not the patient will adhere to a commitment to engage with follow-up. The decision-making process may be further complicated by the attitudes and preferences of carers. Discharge may increase the risk of suicide, violence towards others or deterioration of mental state, affecting the patient, carers and community; unnecessary admission may inappropriately stigmatise a patient and family, may signal that self-harm and admission to hospital is an appropriate response to a crisis, and may result in the use of an in-patient bed that will then not be available to another patient. This decision must be made despite research demonstrating that risk prediction is difficult and imprecise (Kapur 2000).
Examples of cognitive error in psychiatry
The ten heuristics discussed can give rise to cognitive error in psychiatry as in medicine.
Representativeness
Representativeness error occurs when atypical variants of a disorder are missed because the clinician is relying on a prototypical presentation. In psychiatry, prototypical presentations may be unreliable for a number of reasons. A given diagnosis can, according to current diagnostic classification, present in various ways. For example, DSM–IV–TR major depressive disorder requires the presence of low mood and/or reduced interest and pleasure, plus three or four of another seven symptoms. Clearly, two people with major depressive disorder can have very dissimilar clinical presentations. Similarly, schizophrenia can be diagnosed even in the absence of delusions or hallucinations, the prototypical symptoms; a patient with negative and disorganised symptoms alone can also be diagnosed with schizophrenia. Additionally, presentations that are prototypical in one population may not be so in another; for example, depressive disorders in later life rarely meet rigorous diagnostic criteria (Beekman 2002). Other disorders with textbook presentations, such as neuroleptic malignant syndrome or Wernicke’s encephalopathy, could be vulnerable to representativeness error. In
Wernicke’s encephalopathy, the classic triad of confusion, ophthalmoplegia and ataxia is present only 16% of the time (Thomson 2008).
Availability
The availability heuristic is in play when an intervention is chosen because it was recently selected for another patient with a similar presentation or was discussed at a recent journal club: this ‘availability’ precludes a full assessment of need for the patient in question. It has been suggested (Waddington 2000) that referral letters to psychotherapists might lead to availability error, as the diagnostic formulation suggested in a letter would be easily remembered and thus be considered likely.
Anchoring
Referral letters may lead to anchoring as much as to availability. If a colleague writes that a patient has a diagnosis of schizophrenia, it requires a certain amount of cognitive effort and confidence to adjust this diagnosis (and to write back with a dissenting opinion). As might be expected, a strong anchoring effect has been reported in decisions regarding patients with antisocial personality traits (Richards 1990).
Confirmation bias
Confirmation bias may be the most common cognitive error in psychiatry. Confirmation bias depends on the ambiguity of the information used in decision-making, so that the clinician can interpret it to suit a pre-existing hypothesis. In psychiatry, diagnostic information is often so subjective that the same symptom can be interpreted in opposing ways; and unlike most medical symptoms, a psychiatric symptom is not always considered absent simply because a patient says it is. In a woman with progressive weight loss who reports an intake of 3000 calories a day, a diagnosis of anorexia nervosa can be justified as easily as a diagnosis of coeliac disease, the assumption being that the self-report of someone with anorexia nervosa is unreliable (Groopman 2007). Alternatively, the decision whether or not to diagnose psychosis and start the patient on a year or a lifetime of antipsychotic medications may hang on the interviewer’s idiosyncratic interpretation of the patient’s experiences, or the subjective distinctions between a delusion and an overvalued idea, or between a ‘true’ and ‘pseudo-’ hallucination.
Search satisfying
Once a psychiatric diagnosis that could explain medical symptoms is made, search satisfying may
result in the overlooking of medical comorbidity. Similarly, ‘psych-out’ error occurs when medical conditions (such as delirium, central nervous system infections, metabolic disorders or head injury) are misdiagnosed as purely psychiatric conditions (Croskerry 2003). Consistent with this, it has been shown that mentally ill patients receive unequal access to medically necessary procedures, even after controlling for other confounders (Kisely 2007). One might expect search satisfying to occur especially frequently in psychiatry, as comorbidity between Axis I disorders is common (Kessler 1994) and symptomatic overlap is significant between DSM Axis I and Axis II disorders (Flanagan 2006).
Diagnosis momentum
Diagnosis momentum may also occur in psychiatry. A decision to commence a trial of antipsychotic medication, after a provisional diagnosis of psychotic disorder has been made from incomplete information, may result in diagnosis momentum. The trial of treatment may subsequently be taken as evidence of a final rather than a provisional diagnosis. At later clinic visits, no further symptoms may have emerged, but the diagnosis may go unquestioned. Confirmation bias may, in fact, lead to the circular conclusion that the lack of symptoms is evidence for antipsychotic effectiveness.
Commission bias
Commission bias occurs when an intervention is undertaken although the correct course of action – whether and how to intervene – is unclear. In psychiatry, when the patient’s psychopathology is unknown, treatments may be instigated on the basis of assumptions about their mental state. Indeed, this may be necessary, as in the case of a mute patient with profound psychomotor retardation and reduced fluid intake, who is treated with a trial of electroconvulsive therapy. Treatments may also be instituted, however, without any real reason to expect that they will help the patient. For example, when a patient or family member (or even the psychiatrist) is frustrated with the rate of recovery, the psychiatrist may prematurely increase the dose of an antidepressant, possibly resulting in worsening of adverse effects without therapeutic gain, so as to be seen to be ‘doing something’.
The affective heuristic
Affective error commonly accompanies confirmation bias in psychiatry: if either of two diagnoses can be made to fit ambiguous symptoms, a sympathetic psychiatrist may opt for the more benign, and this decision may be based more on hope than objective
fact. Clearly, this is an error if it results in a serious diagnosis not being considered.
However, a psychiatrist may be aware of all possible diagnoses in a particular case, may be aware of the influence of hope on decision-making, and may still be faced with enduring diagnostic uncertainty. Additionally, cases arise in which the distinction between more benign and more severe diagnoses is not of prime importance when choosing a treatment. In a patient presenting with marked social anxiety and avoidance, it can be difficult to decide whether the diagnosis is a primary anxiety disorder or major depression with mood-congruent paranoid ideation; in either case, an antidepressant and psychological treatment are likely to help. What may not help is disclosing a suspected diagnosis of psychosis to the patient. The consequences of diagnosis with a severe mental illness include an increased risk of self-stigma and low self-esteem (Birchwood 1993), as well as depression and suicidality years later (Crumlish 2005). The psychiatrist might well err on the side of the more benign diagnosis, to avoid the negative psychological consequences of labelling with the more severe disorder.
Playing the odds
The heuristic of playing the odds – decision-making biased towards a positive outcome, since positive outcomes are statistically more likely than negative outcomes – may be at play whenever a psychiatrist discharges a patient who is at chronic high risk of suicide. Prediction of suicide is exceedingly difficult (Kapur 2000) and the discharging psychiatrist, regardless of their risk assessment skills, has little idea whether the patient will act on suicidal impulses before the next scheduled appointment. It is the rarity of completed suicide – even among high-risk groups – that allows the psychiatrist confidently to discharge such a patient, and the playing the odds heuristic fails when that rare, catastrophic event happens.†
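Part of the difficulty Kapur describes is arithmetical: when the base rate of an event is very low, even an accurate risk assessment yields mostly false positives. A sketch with assumed figures (the sensitivity, specificity and base rate below are hypothetical, not published estimates):

```python
# Hypothetical figures only: sensitivity, specificity and base rate are assumed.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a 'high-risk' rating is a true positive, by Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# A seemingly strong assessment applied to a rare outcome:
print(round(positive_predictive_value(0.8, 0.8, 0.01), 3))  # -> 0.039
```

Under these generous assumptions, roughly 24 of every 25 patients rated high-risk would not go on to the outcome — consistent with the point that rarity, not clinical skill, dominates the prediction problem.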
Fundamental attribution error
Psychiatric patients may be particularly vulnerable to fundamental attribution error (Croskerry 2003), as challenging patients, such as those who recurrently self-harm, may be inappropriately judged or blamed for their behaviour (Nafisi 2007), with inadequate attention paid to the circumstances in which it occurs.
Taking steps to avoid cognitive error
The first step in reducing the impact of error is to acknowledge that it exists and is a part of everyday practice. Horton (1999) argued that clinicians should move away from the idea of the ‘perfect doctor’ and focus on learning from error, when it occurs.
A barrier to addressing cognitive error may be the perception that to admit error in decision-making is to admit weakness as a clinician. In fact, cognitive error results from the use of heuristics, and the use of heuristics is characteristic of doctors with good clinical acumen (Croskerry 2002). Competing strategies, such as always relying on the hypothetico-deductive method for diagnosis or exhaustively investigating patients, are not practical in the real clinical world. Heuristics should not be abandoned, but should be used consciously, with an awareness of their potential pitfalls (Groopman 2007). Croskerry (2003) has gone so far as to suggest that the term ‘failed heuristic’ should be replaced by the term ‘cognitive disposition to respond’, so as to remove the stigma of bias or personal failure from discussion of cognitive error.
Another barrier to prevention of error is the perception that all cognitive error is inevitable. In fact, strategies exist for reducing cognitive error – one such is cognitive debiasing (Croskerry 2002) – and individual cognitive errors can be avoided or allowed for, provided that clinicians are aware of them (Table 2). Psychiatrists may have an advantage over other doctors in this regard, as psychiatrists have frequent exposure to the cognitive psychology that underpins cognitive error (Redelmeier 2001). Also, being familiar with transference and countertransference, psychiatrists are intuitively aware of fundamental attribution error and affective error, i.e. that feelings for a patient affect clinical decision-making.
For both trainees in psychiatry and practising psychiatrists, teaching in cognitive psychology could usefully incorporate training on cognitive biases in clinical decision-making. Trainee psychiatrists should be familiarised with the common cognitive biases and teaching should include cognitive forcing strategies such as insisting on a differential diagnosis even when the diagnosis seems obvious – it may seem obvious because of undetected biases (Bradley 2005). Such training should include non-punitive supervision, so that trainees can be corrected on errors and learn from them without damage to team cohesion or careers. Trainers should be willing to accept feedback from junior staff, including critique of their decisions. All doctors should actively seek feedback from patients and carers, and encourage them to ask searching questions about the rationale for diagnoses and interventions (Groopman 2007). Additionally, psychiatry could usefully adapt the tradition of the morbidity and mortality conference common in surgery (Holland 2007).
†Coping with the practical and emotional aftermath of patient suicide is discussed in this issue of Advances by St John-Smith et al (pp. 7–16) and Callender & Eagles (pp. 17–22). Ed.
TABLE 2 Ten cognitive errors (failed heuristics), with debiasing strategies for reducing error

Representativeness
  Debiasing strategies: Be aware of individual variation; always ask ‘what else could this be?’; rule out worst-case scenario

Availability
  Debiasing strategies: Judge cases on their own merits rather than recent experiences; be aware of the recency effect; routinely question the objective basis for clinical decisions

Anchoring
  Debiasing strategies: Avoid early judgements and preconceptions; do not assume that information from referrers is accurate

Confirmation bias
  Debiasing strategies: Try to disconfirm initial hypotheses; ensure that alternatives are considered; routinely consider, and argue the case for and against, several diagnoses or treatments

Search satisfying
  Debiasing strategies: Always consider comorbidity; be aware of points of similarity and difference between comorbidities

Diagnosis momentum
  Debiasing strategies: Question previously documented diagnoses; review criteria for diagnoses to ensure agreement

Commission bias
  Debiasing strategies: Review evidence for any intervention; identify dangers associated with action; set clear, timed goals for any intervention, if instituted under conditions of uncertainty; be prepared to stop an intervention if targets are not achieved

Affective error
  Debiasing strategies: Be aware of the influence of emotion on decision-making; recognise liking for a patient and be conscious of hopes for the patient as distinct from objective facts

Playing the odds
  Debiasing strategies: Be aware of the risk of a negative outcome; if there is doubt about the outcome, review the evidence carefully and err on the side of caution until more information is available

Fundamental attribution error
  Debiasing strategies: Recognise and try to understand dislike for a patient; avoid value judgements; recognise that patients’ lives and behaviours are complex and that judgemental treatment oversimplifies those complexities; imagine a friend or relative in the patient’s position

Adapted from Redelmeier 2001; Croskerry 2002, 2003.
Conclusions
The study of cognitive error in psychiatry is at an early stage. We have noted reasons why psychiatrists might be prone to cognitive error, but this is largely speculation on our part. There are no data on the prevalence or consequences of cognitive error among psychiatrists, and research in the area would be welcome. Individual cognitive errors are targets for research in psychiatry, just as psychologists and psychotherapists have studied anchoring (Richards 1990), availability (Waddington 2000) and fundamental attribution errors (Nafisi 2007). Equally, empirical evidence to support the effectiveness of cognitive debiasing strategies is still minimal (Bradley 2005), and further work is needed to develop rigorous, evidence-based programmes for teaching and supervision. Indeed, given the ubiquity of cognitive error and bias in medical practice, it is particularly appropriate that strategies for minimising error should be carefully evaluated – if only to avoid bias.
References
American Psychiatric Association (1997) Diagnostic and Statistical Manual of Mental Disorders (3rd edn) (DSM–III). APA.
American Psychiatric Association (2000) Diagnostic and Statistical Manual of Mental Disorders (4th edn, revised) (DSM–IV–TR). APA.
Beekman ATF, Geerlings SW, Deeg DJH, et al (2002) The natural history of late-life depression. A 6-year prospective study in the community. Archives of General Psychiatry; 59: 605–11.
Birchwood M, Mason R, MacMillan F, et al (1993) Depression, demoralization and control over psychotic illness: a comparison of depressed and non-depressed patients with a chronic psychosis. Psychological Medicine; 23: 387–95.
Bradley CP (2005) Can we avoid bias? BMJ; 330: 784.
Cohen T, Blatter B, Almeida C, et al (2006) A cognitive blueprint of collaboration in context: distributed cognition in the psychiatric emergency department. Artificial Intelligence in Medicine; 37: 73–83.
Coles C (2006) Uncertainty in a world of regulation. Advances in Psychiatric Treatment; 12: 397–401.
Crichton M (2007) Where does it hurt? New York Times; April 1 (www.nytimes. com/2007/04/01/books/review/Crichton.t.html).
Croskerry P (2002) Achieving quality in clinical decision making: cognitive strategies and detection of bias. Academic Emergency Medicine; 9: 1184–204.
Croskerry P (2003) The importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine; 78: 775–80.
Crumlish N, Whitty P, Kamali M, et al (2005) Early insight predicts depression and attempted suicide after 4 years in first-episode schizophrenia and schizophreniform disorder. Acta Psychiatrica Scandinavica; 112: 449–55.
Davidoff F (1997) Time. Annals of Internal Medicine; 127: 483–5.
Fiske ST, Taylor SE (1991) Social Cognition (2nd edn). McGraw-Hill.
Flanagan E, Blashfield R (2006) Do clinicians see Axis I and Axis II as different kinds of disorders? Comprehensive Psychiatry; 47: 496–502.
Freedman D (2001) False prediction of future dangerousness: error rates and Psychopathy Checklist – Revised. Journal of the American Academy of Psychiatry and the Law; 29: 89–95.
Gawande A (2002) Complications: A Surgeon’s Notes on an Imperfect Science. Metropolitan Books.
Grasso BC, Bates DW (2003) Medication errors in psychiatry: are patients being harmed? Psychiatric Services; 54: 599.
Groopman J (2007) How Doctors Think. Houghton Mifflin.
Holland J (2007) A role for morbidity and mortality conferences in psychiatry. Australasian Psychiatry; 15: 338–42.
Holloway F (1997) The assessment and management of risk in psychiatry: can we do better? Psychiatric Bulletin; 21: 283–5.
Horton R (1999) The uses of medical error. Lancet; 353: 422–3.
Kapur N (2000) Evaluating risks. Advances in Psychiatric Treatment; 6: 399–406.
Kessler RC, McGonagle KA, Zhao S, et al (1994) Lifetime and 12-month prevalence of DSM–III–R psychiatric disorders in the United States. Results from the National Comorbidity Survey. Archives of General Psychiatry; 51: 8–19.
 
Kisely S, Smith M, Lawrence D, et al (2007) Inequitable access for mentally ill patients to some medically necessary procedures. Canadian Medical Association Journal; 176: 779–84.
Kohn LT, Corrigan JM, Donaldson MS (1999) To Err Is Human: Building a Safer Health System. National Academy Press.
Leape LL, Berwick DM (2005) Five years after To Err Is Human: what have we learned? JAMA; 293: 2384–90.
Nafisi N, Stanley B (2007) Developing and maintaining the therapeutic alliance with self-injuring patients. Journal of Clinical Psychology; 63: 1069–79.
Nath SB, Marcus SC (2006) Medical errors in psychiatry. Harvard Review of Psychiatry; 14: 204–11.
Pear R (1999) A Clinton order seeks to reduce medical errors. New York Times; 7 December: 1.
Read J (2004) Does schizophrenia exist? Reliability and validity. In Models of Madness. Psychological, Social and Biological Approaches to Schizophrenia (eds J Read, LR Mosher, R Bentall): 43–56. Brunner-Routledge.
Redelmeier DA, Ferris LE, Tu JV, et al (2001) Problems for clinical judgement: introducing cognitive psychology as one more basic science. Canadian Medical Association Journal; 164: 358–60.
Richards MS, Wierzbicki M (1990) Anchoring errors in clinical-like judgements. Journal of Clinical Psychology; 46: 358–65.
Ross LD (1977) The intuitive psychologist and his shortcomings: distortions in the attribution process. In Advances in Experimental Social Psychology (ed L Berkowitz): 173–220. Academic Press.
Schwarz N (2005) When thinking feels difficult: meta-cognitive experiences in judgement and decision making. Medical Decision Making; 25: 105–12.
Spiegel A (2005) The dictionary of disorder. New Yorker; 5 January: 56–63.
Thomson AD, Cook CC, Guerrini I, et al (2008) Wernicke's encephalopathy: 'plus ça change, plus c'est la même chose'. Alcohol and Alcoholism; 43: 180–6.
Timmons S, Kingston M, Hussain M, et al (2003) Pulmonary embolism: differences in presentation between older and younger patients. Age and Ageing; 32: 601–5.
Tversky A, Kahneman D (1974) Judgement under uncertainty: heuristics and biases. Science; 185: 1124–31.
Waddington L, Morley S (2000) Availability bias in clinical formulation: the first idea that comes to mind. British Journal of Medical Psychology; 73: 117–27.

MCQ answers
1 a f b t c f d f e f
2 a f b f c f d f e t
3 a f b f c f d t e f
4 a f b t c f d f e f
5 a t b f c f d f e f
MCQs
1 The following are heuristics:
a misrepresentation
b anchoring
c effective error
d information bias
e medication error.

2 Heuristics are likely to be used:
a in situations of certainty
b by inexperienced clinicians
c when time is relatively unlimited
d when the decision to be taken is straightforward
e when there is a high cognitive load.

3 With regard to fundamental attribution error:
a people overestimate the extent to which external factors influence other people's behaviour
b doctors are never judgemental about their patients
c fundamental attribution error is synonymous with unconditional positive regard
d smokers are vulnerable to fundamental attribution error
e psychiatric patients are rarely affected by fundamental attribution error.

4 The following characteristics of psychiatry protect against cognitive error:
a frequency of decisions about risk
b training of psychiatrists in cognitive psychology
c ambiguous psychopathology
d uncertainty in decision-making about risk
e complexity of decisions about admission.

5 Practical steps to avoid cognitive error include:
a cognitive forcing strategies
b avoiding all heuristics
c discouraging questioning by patients
d exhaustively investigating all patients
e cognitive biasing strategies.










