Time Poor, Knowledge Rich: How to Access Relevant Evidence in a Crisis
Caroline Larsson
The pandemic prompted a significant shift in the research landscape, with many scholars redirecting their attention towards investigating topics related to COVID-19.
Consequently, a staggering volume of records emerged, including both formal papers and pre-prints (papers that have not yet gone through peer review) appearing in journals, databases, and on websites. For those working in government and the public sector looking for the most up-to-date scientific advice to inform specific and urgent policy questions, sifting through the growing mountain of evidence was no straightforward task.
This information also needed to be timely enough to be of use to officials making decisions in real-time.
One way to capture such a flood of research is to gather and organise information using evidence synthesis methods, such as an evidence map. During the pandemic, I worked at the EPPI Centre developing their Living Map of systematic reviews of social research focused on COVID-19, as part of the IPPO partnership.
In this blog, I will look at the origins of evidence maps, and the production, limitations and successes of this Living Map.
Although the map has subsequently been archived, it remains accessible here.
Mapping Evidence
There is no fixed definition of evidence mapping, and although the focus in this text is on maps with a visual, illustrative component, there are ways to map evidence that do not include one (e.g. scoping reviews). An evidence map is a method for organising and presenting research in an accessible format, making it easier to identify the available evidence in a field and to determine the scope and location of research on a specific topic. Rather than looking at a specific intervention or exposure, as traditional reviews do, it essentially involves exploring a broader subject area to gain a better understanding of the research landscape.
Mapping evidence is also helpful as an initial step before conducting a systematic review, as it can indicate where there is little or no evidence and help funders target where more research is needed. It allows policymakers and researchers to pinpoint gaps and inform their decisions. Maps designed to discover gaps typically include both primary and secondary research and are often called evidence gap maps. The method follows a systematic approach in determining what to include, employing specific eligibility criteria and a selection process.
For a living map, like the one discussed in this blog, the term “living” signifies an ongoing searching, screening, and coding process that continues even after publication, so that the map remains a continuous representation of the evolving body of evidence in an area. This is made possible by a partly automated screening process in which studies are identified using machine learning, which speeds up the work and reduces the human labour required.
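The EPPI Centre has its own tooling for this (EPPI-Reviewer), so the sketch below is purely an illustration of the general idea of machine-learning-assisted screening, written in Python with made-up records and labels: a classifier is trained on records humans have already screened, then used to rank unscreened records so that the most likely includes surface first.

```python
# A minimal sketch of machine-learning-assisted screening; NOT the
# EPPI Centre's actual pipeline. Idea: learn from human screening
# decisions, then prioritise unscreened records by predicted relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical records already screened by humans: 1 = include, 0 = exclude.
screened_texts = [
    "Systematic review of school closures and COVID-19 transmission",
    "A randomised trial of a new antiviral compound in mice",
    "Review of remote working policies during the COVID-19 pandemic",
    "Crystal structure of a bacterial membrane protein",
]
labels = [1, 0, 1, 0]

# Turn titles/abstracts into TF-IDF features and fit a simple classifier.
vectoriser = TfidfVectorizer(stop_words="english")
model = LogisticRegression().fit(vectoriser.fit_transform(screened_texts), labels)

# Score unscreened records: higher probability = screen sooner.
unscreened = [
    "Scoping review of COVID-19 lockdown effects on domestic violence",
    "Thermal properties of novel alloy coatings",
]
scores = model.predict_proba(vectoriser.transform(unscreened))[:, 1]
for text, score in sorted(zip(unscreened, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")
```

Each newly screened batch can be fed back into training, so the ranking improves as the map grows; humans still make the final include/exclude call.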
The mapping feature offers a visual representation of the available evidence in a format the user can interact with. Evidence maps are often structured along X and Y axes, with codes assigned to each record so that the evidence can be grouped according to the categories on each axis; in the case of the living map, these were “topic” and “population.” There are also filter codes that allow for a more focused exploration. Some maps additionally distinguish the quality of the included research by assigning it different colours.
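As a rough illustration (not the map's actual implementation), here is a minimal sketch, with hypothetical records and codes, of the cross-tabulation that underlies this kind of display:

```python
# A minimal sketch of the grid behind an evidence map, using hypothetical
# records coded with "topic" and "population" (the living map's two axes)
# plus a "country" filter code. Cell counts show where evidence clusters.
import pandas as pd

# Hypothetical coded records; a real map holds thousands of these.
records = pd.DataFrame([
    {"title": "Review A", "topic": "Education", "population": "Children", "country": "UK"},
    {"title": "Review B", "topic": "Mental health", "population": "Healthcare workers", "country": "UK"},
    {"title": "Review C", "topic": "Education", "population": "Children", "country": "Sweden"},
    {"title": "Review D", "topic": "Economy", "population": "General public", "country": "UK"},
])

# Apply a filter code (here: country), then cross-tabulate the two axes.
filtered = records[records["country"] == "UK"]
grid = pd.crosstab(filtered["topic"], filtered["population"])
print(grid)  # sparse or empty cells point to possible evidence gaps
```

In an interactive map, each non-empty cell links through to the underlying studies; the same structure also makes gaps visible at a glance.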
When time is limited
In 2022, I was fortunate to have an internship at the EPPI Centre when they were developing the map. The map production was the focus of my master’s dissertation, and I will now share some of what I learned from that experience.
There were two key themes that I explored further: time constraints and user engagement.
The work on the living map began in November 2020 with the aim of releasing it by March 2021. The plan was for the map to encompass primary and secondary social research related to COVID-19 and other global health emergencies. However, due to the surge in research during the pandemic, there were concerns that these inclusion criteria were too broad.
In the piloting phase, the inclusion criteria were tested against records drawn from a prior living map of COVID-19 health research that the EPPI Centre had published in March 2020 for NIHR, known as ‘the health map.’ The trial confirmed the team's suspicions: the criteria were broad enough to include the majority of the records run through the automatic screening. Keeping the inclusion criteria that broad would have meant an unmanageable number of incoming records, making the screening and coding of them all too time-consuming if the map was to be published on time.
The team decided to narrow the inclusion criteria to systematic reviews of social research focusing on COVID-19, excluding primary research and research on other global health emergencies. This pragmatic decision allowed them to publish the map on time. However, it also meant that while the map showed gaps in systematic review research, it did not show gaps in social research overall, and so it lost part of its functionality as an evidence gap map.
The most notable distinction between the health map and the living map was the complexity of the coding schema. The health map features a more straightforward design with 11 topic codes, while the living map consists of 12 topic codes, 14 population codes, and an additional four filter codes covering gender(s), research question(s), countries, and policy response(s). Furthermore, the health map benefited from more people working on it. Initially, the health map employed automated screening to identify eligible records, followed by manual screening and coding by team members. Over time, it evolved into a fully automated system, which remains accessible on the EPPI website. For the living map to follow the same path, a substantially larger set of coded records would have been needed, and with its numerous codes and smaller team, accumulating them would have taken a very long time.
What could it be used for?
The decision to include a wide range of social research on COVID-19 also meant including studies of varying quality. Some people in the team questioned why lower-quality research was included. The argument for including it was that the map aimed to represent the entire research landscape, as an ‘evidence gap map’ would, and there simply would not have been time to assess the quality of the research. However, as the map evolved into a database of evidence organised by research characteristics (rather than an evidence gap map), it could be argued that a quality assessment of the records would have been necessary for the map to be more useful to policymakers. In practice, the map provided plenty of evidence but placed the responsibility on users to assess the quality of each study, adding an extra layer of work for those searching for reliable information.
What does this mean for users?
It is difficult to say whether the map could, in fact, have helped policymakers better understand what research was out there. User engagement was integral to the project’s design. However, the reality fell short of the ideal. The team knew the importance of aligning the map’s design and content with users’ needs but struggled to establish substantial engagement in this regard. Time constraints emerged as a primary reason for this deficiency.
After the map was published, the team attempted to identify who was using it to better understand potential users’ needs. It was possible to track the number of users, but not to see who they were unless they filled out an optional user survey. The survey was available on the map’s homepage to collect demographic information and give the team insights into the user base, but it yielded only a limited number of responses. It was also impossible to know whether the tracked users were external or internal, that is, whether they were people outside the team itself.
Lessons from the pandemic
Owing to the combination of too few people working on the map, the sheer volume of social research on COVID-19, and the lack of evidence that anyone was using it, work on the map stopped in December 2022.
The pandemic highlighted the importance of evidence-informed decision-making. The development of evidence maps, like the one discussed here, showcases some of the challenges and complexities involved in translating research into actionable insights for decision-makers. As we face an uncertain future and urgent needs for policymaking in areas such as AI and climate change, researchers and policymakers must collaborate effectively to make informed decisions that benefit the wider population.
Ultimately, the dynamics of policy and research require a delicate balance between speed and quality. Critical situations will continue to arise, and our ability to navigate their time pressures will determine our success in serving the greater good.