How can COVID-19 public inquiries avoid the ‘hindsight bias’ trap?
The first in a series of IPPO blogs on the optimal design of COVID-related public inquiries discusses how to limit the risks of distorted memories, failure to contextualise, and over-reliance on certain experts
Christoph Meyer
‘Hindsight bias’ is the tendency to exaggerate in retrospect how predictable an event was at the time, either to ourselves or to others. Also known as the ‘knew-it-all-along’ fallacy, it is well-evidenced in psychological studies of medical diagnoses, auditing decisions and terrorist attacks.
Hindsight bias doesn’t only distort the memories and reasoning of individuals. It can also affect organisations, political actors and news media in debates over who is to be blamed and what lessons should be learned from surprises, disasters, crises and failures.
It can also be used by politicians as a defence in public. Last year, Prime Minister Boris Johnson countered criticisms of his UK Government over the death toll in care homes by calling opposition leader Keir Starmer ‘Captain Hindsight’ – after the superhero from the South Park series who dishes out useless advice in the immediate aftermath of accidents and disasters.
Cultural references aside, hindsight bias is a serious problem for public inquiries, whether they are aimed at lesson-learning, accountability, or a mixture of both. It can lead to misinterpretation of cause-effect relations, underestimation of the difficulty of taking decisions during periods of high uncertainty and pressure to act, and flawed prescriptions of how to overcome problems.
Flawed diagnosis
My colleagues and I encountered this phenomenon while researching our book on what it takes for warnings of mass atrocities or armed conflict to be noticed, prioritised, accepted and acted upon. We found many studies were prone to exaggerate the availability and quality of early warnings, including in the influential case of the 1994 Rwandan genocide, leading to a mantra that ‘warning is not the problem, lack of political will is’. This flawed diagnosis fed a sense of fatalism among potential warners, instead of prompting experts to ‘up their game’ by becoming more persuasive, improving the relationships between experts and politicians, and/or designing better warning-response processes and systems.
Those in charge of public inquiries are aware they may be accused of a ‘cover up’ or ‘whitewash’ if they conclude that a disaster was unavoidable and that no individual was at fault, given the conditions at the time. That understandable sensitivity may amplify hindsight bias among reviewers, while media campaigns can further feed the temptation to find scapegoats – potentially leading to individuals being unjustly singled out and punished.
This can, in turn, result in criticised officials or organisations developing management techniques that may reduce the reputational risk to themselves, but increase the risks to others. For example, the fall-out from the Baby P case is thought to have contributed to social workers spending more time on risk management bureaucracy and less time with families, leading to more children being taken into care.
Ways to reduce hindsight bias
Some may argue that hindsight bias is unavoidable – particularly in highly salient and complex cases with substantial uncertainty, such as the COVID-19 pandemic. It can arise from filtering, reorganising and interpreting the same ‘facts’ according to narratives that reflect observers’ worldviews and political leanings about science, health, society and the role of government. On this view, facts do not speak for themselves, every observer comes with ideational baggage, and it is therefore futile to strive for, or claim, objectivity.
There is some truth to this: the human brain is not easily ‘debiased’ in its distorted memory and reasoning, and politicisation and mediatisation bring pressures that cannot simply be wished away. Nevertheless, there are steps that inquiry chairs can take to reduce hindsight bias in the analysis of past decisions:
- First, the recruitment of inquiry panel members should aim not only to ensure the right mixture of expertise, but also to promote a spirit of curiosity and open-mindedness. This could be achieved by avoiding panel members who have publicly expressed strong views in support of particular narratives, even if those views may appear vindicated by events. In the case of a COVID inquiry, it could help to invite some international experts who had not already criticised UK experts or governments. This is usually preferable to simply ‘balancing out’ different views.
- Second, the inquiry chair needs to develop a detailed chronology of events that accurately indicates the evolution of the threat, the processes of gathering evidence, and the specific knowledge claims that were made (both supporting and refuting the severity of the threat). Such a chronology can help to properly contextualise what information was available when decisions were taken, what arrived only later, and why. For instance: how soon was it known that SARS-CoV-2 spread asymptomatically? When were new virus variants discovered? When did vaccine trials deliver positive results on efficacy? And what advice did expert bodies provide to ministers, and when, with regard to testing, travel restrictions and lockdown options?
Weighing the evidence
Inquiries need to give greater weight to evidence about a decision or event that was produced at the time, or close to it, than to evidence taken long afterwards. Especially in interviews with practitioners, possible memory distortions need to be compensated for and reduced – for example, by prompting witnesses with texts and statements they made at the time. This is also why inquiries should not wait until memories have begun to fade and public narratives about the meaning of a crisis have consolidated.
Furthermore, when assessing what was known or knowable, inquiries must not simply look at the information and analysis that was in the public domain; they need to ask whether it was actually noticed, processed and understood by the relevant organisations, officials and decision-makers. Were articles in scientific journals such as The Lancet read by key advisors or decision-makers? What were the competing demands on the attention of these decision-makers, officials and expert committees, and how much pressure were they under to decide and act quickly?
Which experts are heard?
When judging whether decision-makers followed the best expertise and expert advice, inquiries must avoid sampling only those experts who turned out to be right in retrospect: experts may have been right for the wrong reasons, or wrong despite a sound analysis. Instead, inquiries need to look at the preponderance of the most authoritative expert advice, alongside the degree and nature of any disagreements.
For example, did experts disagree regarding the infectiousness of the virus or the public health measures needed to stop its spread? Were warnings expressed clearly and forcefully enough to be noticed, understood and prioritised? Or were they hidden somewhere within long reports and qualified in various ways? How reliable were the sources of evidence and how confident were experts? How probable were particular outcomes or scenarios deemed to be at the time? Or did forecasts lack measures of probability and confidence? And were the practical implications of any forecasts clearly spelled-out by experts – for instance, in terms of mortality rates and implications for the NHS?
Furthermore, which experts could be deemed authoritative and credible? How good were their training, research, track record and trustworthiness? Did politicians follow the most authoritative advice, or did they pick experts according to whether their views were closer to their own policy preferences? It was reported, for example, that three out of four experts invited to a crucial UK Government meeting about whether to impose a second lockdown supported a less restrictive approach than that advocated by the Government’s main expert advisory body, SAGE.
Alternatively, did ministers have reasonable grounds to prefer some experts’ advice over others’ – for example, discounting those who may have ‘cried wolf’ too often, or whose credibility may have been affected by conflicts of interest or hidden political agendas? Were there good grounds to doubt the credibility of WHO advice about COVID-19, given the influence of China?
Consider alternative scenarios
Finally, inquiries should reveal and question any underlying assumptions about why things turned out the way they did. They should systematically consider alternative scenarios and ask whether different advice and decisions would necessarily have had better outcomes. The parliamentary ‘Lessons Learned’ report, for example, stated that locking down earlier in March 2020 would have saved many lives. However, it was far less decisive over whether the ‘circuit-breaker’ lockdown recommended by SAGE would have saved many lives too. Comparisons with measures taken in other countries may help to imagine, substantiate and explore counterfactuals, even if no two countries or epidemiological situations are truly identical.
All these steps together can reduce hindsight bias, and thus increase the robustness of the analysis and identification of lessons yet to be learned. They are, however, not a substitute for judgments about what we can expect from experts and law-makers in terms of courage, compassion, prudence or openness to inconvenient advice.
Christoph Meyer is Professor of European & International Politics at King’s College London
- Support from the ERC (FORESIGHT, Grant 202022), the ESRC (INTEL, Grant ES/R004331/1) and the John Templeton Foundation (Centre for the Study of Governance and Society: ‘The Political Economy of Knowledge and Ignorance’, Grant 61823) is gratefully acknowledged.