How should we use algorithms to tackle, not widen, social inequalities as part of the COVID-19 recovery?

The importance of algorithmic systems to our daily lives has grown rapidly during the pandemic. Used in the right way, they offer historic opportunities to benefit both people and the planet – but they also carry serious potential for social harms that demand our urgent attention.

Zeynep Engin

A clear characteristic distinguishing COVID-19 from past pandemics and disasters is that it arrived in a world already increasingly shaped by computer algorithms.

The steady generational shift to a more online lifestyle sped up rapidly after governments across the globe started announcing lockdowns in early 2020 in response to the pandemic. Within a few weeks, the majority of the world’s population found themselves isolated in their homes, developing an almost existential dependency on algorithmic systems to carry on most of their daily activities.

Never have our lives depended so heavily on real-time data and algorithmic assistance: from monitoring and controlling the spread of COVID-19, to continuing with our work (to differing degrees) and receiving public and other services (healthcare, education, social life, etc.) – not to mention the way scientists developed new vaccines in record time.

We also know that this trend is irreversible. Algorithmic processes are here to stay, and their prominence will only grow during and beyond COVID-19. They offer historic opportunities for both people and the planet after this ‘great reset’ – but they also carry serious potential for social harms and inequalities that demand the urgent attention of all stakeholders.

The rapid rise of algorithms during COVID-19

As Professor Ian Goldin wrote in his recent IPPO blog, COVID-19 has made the case for addressing inequality more compelling than ever. As the outbreak spread globally, factors such as systemic racism, marginalisation and structural inequalities not only led to poorer health outcomes for disadvantaged groups, but also widened the gap between different communities, workforce groups and geographic locations in many other ways.

Algorithms were central to this, given that work, education and social life all moved largely online. In the UK, for example, the summer of 2020 saw protests in front of Parliament over the GCSE and A-Level results controversy, at the centre of which was a grade standardisation algorithm (produced by Ofqual) that discriminated particularly against top students from disadvantaged backgrounds.

And while many traditional businesses had to bear the direct economic consequences of the lockdowns, giant online platforms such as Amazon (which increasingly provides its services through algorithmic processes) have grown massively, generating potentially exploitative employment models while posing a serious threat to local jobs.

The 2020 US election campaigns were dominated (again) by online platforms targeting citizens individually through algorithmic personalisation, and by private actors enforcing new policies. Such mechanisms can have very different impacts across demographic groups, affecting democratic institutions and processes in ways not yet well understood.

Meanwhile, the Global North–South divide has grown even wider during the pandemic, a key factor being the poor quality, or even total absence, of online services for a large proportion of the world’s population, on top of weaker healthcare capacity. With these data also feeding into the next phases of algorithmic management and governance in the (hopefully) post-COVID world, these patterns of inequality can be expected to steepen further over the next few years.

In short, the world’s economies stand at a crossroads. The growing use of algorithms in everyday life could further exacerbate inequality – or, if they are developed and deployed responsibly, algorithms could help to rebuild the global economy and politics in ways that enable us to address some of the historically persistent problems of human and institutional decision-making.

How algorithms feed into everyday social inequalities

In everyday life, algorithms appear in many potentially problematic forms – for example, as targeted content and personalised recommendations determining individual citizen choices and behaviour (online shopping, social media, search engines); as risk assessment tools supporting critical life decisions (loans, job recruitment, criminal sentencing); as mediators that organise everyday life and work (navigation, retail); and in the form of autonomous agents directly interacting with humans and other algorithms (e.g., Internet of Things devices, smart city applications) in complex ways that are beyond human comprehension.

Given the scale and diversity of these algorithmic applications, potential inequality concerns vary widely across individuals, communities, demographic groups, geographies, sectors and cultures. For example, inequalities may emerge from representation bias within the datasets used for training; from design choices and interaction models for the algorithmic systems being used; and from the context and environment in which they operate.

Pre-existing biases and discrimination in the data can skew results. Power imbalances in agenda-setting may produce exclusionary governance practices. And automation through algorithmic systems can overlook digital divides and enable malicious practices that affect large populations at once, while also being fine-tuned at the level of individual citizens.
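
A minimal, hypothetical sketch can make the first of these failure modes concrete. The Python snippet below (assuming scikit-learn and NumPy are available; all data are synthetic illustrations, not real records) shows how representation bias in a training set can skew results: a model trained overwhelmingly on one group performs noticeably worse on an under-represented group whose circumstances differ.

```python
# Sketch of representation bias: a model fitted mostly to a majority
# group misjudges an under-represented group. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, threshold):
    """Synthetic cases: one score; the outcome is positive above a
    group-specific threshold (the groups' circumstances differ)."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training data: the minority group is heavily under-represented.
x_maj, y_maj = make_group(5000, threshold=0.0)
x_min, y_min = make_group(50, threshold=1.0)
model = LogisticRegression().fit(np.vstack([x_maj, x_min]),
                                 np.concatenate([y_maj, y_min]))

# Fresh test sets, evaluated per group: errors concentrate on the
# group the model rarely saw during training.
for name, threshold in [("majority", 0.0), ("minority", 1.0)]:
    x_test, y_test = make_group(2000, threshold)
    print(f"{name} accuracy: {model.score(x_test, y_test):.2f}")
```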

How to make algorithms a force for good

However, with genuine political will and conscious planning, the same algorithms can also be a force for good in the post-COVID world. For example, with transparency and accountability, algorithms can function as ‘neutral’ mediators that enable blind assessments, eliminating human bias and prejudice in a wide range of application domains, while also helping to mitigate past biases embedded in historical datasets.

Current problematic uses in welfare decisions, criminal sentencing and identification can be addressed through direct investment in data quality and algorithmic optimisation for fairness, rather than financial savings and/or statistical efficiency being the default goals (can governments be considered ‘efficient’ when they are not ‘just’?). The growing field of algorithmic fairness should therefore be a clear direction for research amid post-pandemic recovery plans.
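
To give a flavour of what ‘optimising for fairness’ involves in practice, the sketch below computes one common diagnostic from the algorithmic fairness literature: the demographic parity gap, i.e. the difference in favourable-decision rates between groups. The decisions and group labels here are hypothetical placeholders; in a real audit they would come from a deployed system’s outputs.

```python
# One common fairness diagnostic: the demographic parity gap, the
# difference in favourable-decision rates between groups. The values
# below are hypothetical placeholders.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Largest difference in favourable-decision rates across groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = loan approved
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

A fairness-first deployment would track gaps like this alongside accuracy, treating a large gap as a defect to be fixed rather than an acceptable cost of statistical efficiency.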

Algorithms excel at pattern recognition, operating with unprecedented speed and precision. They also offer huge opportunities for the personalisation and localisation of services, which can be a basis for achieving more equitable societies (using the same algorithms that currently power multinational platforms’ profit models). When optimised fairly for individual needs and circumstances, and made accessible to everyone, services personalised at the level of individuals and communities can help deliver better education, better healthcare and better mobility for all groups – including those who have been historically discriminated against due to systemic problems.

This, however, also leaves us with an important philosophical debate on where to draw the line between personalisation/localisation and the equal treatment of individuals and communities, especially when it comes to high-stakes governance decisions. For the first time in history, we have the realistic capacity to factor in all individual circumstances when making critical decisions – although this may challenge established equality principles in democratic governance practices. Above all, we have a unique opportunity for a 21st-century social re-organisation, potentially enabling us to go beyond the ‘equality’ ideal and to aim for ‘equity’ instead.

Key areas of concern that need addressing

Assessing how algorithmic systems behave in social contexts, both as independent agents and as support tools for human decision-makers, should be another major priority area for research within a broader pandemic-recovery agenda that seeks to tackle social inequality.

As algorithms increasingly mediate important decisions and evolve dynamically through complex interactions, the public and policymakers alike are concerned about losing their traditional capacity to regulate such systems. Key areas of research emerging in response to these concerns include:

  • Auditability and licensing of algorithmic systems, both at the design stage and throughout their lifetime, with appropriate checks and balances in place;
  • Transparency of algorithm design;
  • Explainability of algorithmic processing; and
  • Assignment of accountability for algorithmic decisions in accordance with existing (and potential new) laws (a sketch of what a machine-readable audit record might look like follows this list).
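
To illustrate the kind of artefact these requirements might produce, here is a minimal sketch of a machine-readable audit record: each automated decision is logged with its inputs, model version and a human-readable reason, so that it can be traced, explained and contested after the fact. The field names and values are illustrative assumptions, not an established standard.

```python
# Illustrative audit record pairing an automated decision with the
# information needed to explain and contest it later. Field names
# and values are assumptions for illustration only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_version: str   # which algorithm version made the decision
    inputs: dict         # the features the decision was based on
    decision: str        # the outcome issued to the person affected
    reason: str          # human-readable explanation of the outcome
    timestamp: str       # when the decision was made (UTC)

record = DecisionRecord(
    model_version="risk-model-2.3.1",
    inputs={"income": 28000, "region": "NW"},
    decision="declined",
    reason="income below the model's approval threshold",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))  # append to an audit trail
```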

Why we also need a longer-term view

Beyond the immediate desire to mitigate, through algorithmic support, the systemic inequalities highlighted and exacerbated by COVID-19, a longer-term perspective should account for post-work scenarios in which large proportions of the existing workforce end up losing their direct economic relevance to algorithms.

Professor Yuval Noah Harari describes this development as the rise of a potential ‘useless class’: large populations around the world who risk displacement as algorithms take on more and more responsibility and control in everyday life and work. This puts low-income groups and economically less-developed countries (labour-intensive economies) at particularly high risk to start with.

A forward-looking policy research agenda should therefore also focus on alternative scenarios and models for human-machine interactions and the new economic models in hybrid environments that may generate new types of inequalities. For example, Mike Walsh, author of The Algorithmic Leader, formulates a dire scenario in which a different kind of class-based divide is emerging ‘between the masses who work for algorithms, a privileged professional class who have the skills and capabilities to design and train algorithmic systems, and a small, ultra-wealthy aristocracy, who own the algorithmic platforms that run the world’.

A more futuristic research agenda should also consider potential divides not just between human societies, but between humans and non-organic algorithmic entities – something visionary thinkers have been warning us about for some time. In a world increasingly run by algorithms, with large human populations losing their economic and political relevance, investing now in research on these types of scenarios may prove its worth for humankind – especially regarding potential future crises such as climate catastrophe.

Dr Zeynep Engin is currently researching algorithmic governance at UCL. She is the founder and director of Data for Policy CIC, and Editor-in-Chief of Data & Policy. A previous paper on ‘Algorithmic Government’ by Dr Engin can be found here.