Voices for the AI Summit – could a pandemic be managed better?
The latest post in our series asking what writers would say at the AI Summit focuses on what we could do differently, especially during pandemics. It uses the case study of all we are learning about leadership (or the lack of it) during the COVID-19 pandemic, then considers how AI might have helped.
Regular readers will know that Tony Boobier is a leading author on the impact of AI. He is also an advisor and mentor to leaders and businesses around the world. Drawing on his extensive experience of data, analytics, and AI, Tony has shared his leadership tips with us before – for instance, his critiques of ‘business’ social media usage, cost cutting, and the lack of thinking skills.
Building on Tristan Mobbs’ warning not to simply drift into an AI future unprepared, in this post Tony considers a concrete case study. We can all bemoan the mistakes made during the UK government’s response to the last global pandemic. But how could AI have helped? In his engaging post, Tony brings to life both the opportunities and the potential pitfalls.
Asking leaders to think how AI could have helped with COVID-19
If I’d been invited to the AI Summit, I would have asked attendees to reflect on how the pandemic would have been managed had AI been operating effectively. Put another way, and assuming that AI develops in the way that experts are predicting, how might the ‘next’ pandemic be managed?
It’s a natural extension of my ‘lockdown’ book ‘AI and the Future of the Public Sector’, which considers the impact of advanced technology on the elements that make us operate effectively as a caring society. Coincidentally, it also takes into account the review of the UK Government’s rather messy response to COVID-19, which is happening in parallel to the AI Summit.
Understanding AI in the context of a pandemic, when humanity is at its most vulnerable, not only raises technical issues such as drug development and hospital management. It also extends our focus to how issues such as ethics, privacy, and bias would play out.
Does a more digital future help us respond differently?
Against this background of a past global incident and a possible future one, it’s useful to remember that the number of connected devices is growing. Already there are said to be 16 billion – two for every person on the planet. By 2030 that number is expected to double, with growth accelerating further as 5G and 6G telecoms take hold.
Globally, we are becoming increasingly digital. Many decisions now made subjectively might be automated, especially as predicted job losses start to bite. Algorithms will replace sentiment. That could produce some uncomfortable and perhaps unpalatable outcomes. We can’t be certain, but it’s likely that an AI-infused pandemic response would comprise a combination of ‘positives’ and what I perhaps generously describe as ‘less positives’.
On the ‘positive’ side, it’s likely that AI and advanced analytics will assist in the earlier detection of pandemics, and will play a major part in the fast-track development of new (and untried) vaccines. With much of the traditional cost of vaccines lying in the trialling process, fast-tracking should reduce development costs and make drugs more affordable in less-privileged parts of the world. Some might suggest that untried drugs bring greater risks, but let’s put that aside for the moment, alongside the matter of the profitability of Pharma companies.
The role of data, analytics & AI in improving decisions
Location analytics will also contribute to our understanding of pandemic hotspots and should result in a slowdown of global spread – uncontrolled travel seemed to be a major contributor last time. Better data will also provide greater insight into the demographics of those most likely to be affected, by gender, location, and age.
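At its simplest, the hotspot detection described here is just aggregation and thresholding over location-tagged case reports. The following is a purely illustrative sketch – the region names, case counts, and threshold are invented, and a real system would of course work with far richer data:

```python
from collections import Counter

def find_hotspots(case_locations, threshold):
    """Flag regions whose case count exceeds a threshold.

    case_locations: one region label per reported case.
    Returns flagged regions ordered from most to least affected.
    """
    counts = Counter(case_locations)
    return [region for region, n in counts.most_common() if n > threshold]

# Invented example data: each entry represents one reported case.
cases = ["London"] * 120 + ["Venice"] * 15 + ["Leeds"] * 80
print(find_hotspots(cases, threshold=50))  # ['London', 'Leeds']
```

The same idea scales up: swap the invented list for a live feed of location-tagged test results, and the threshold for an epidemiologically chosen one.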
For me, the approach we took last time feels too ‘broad brush’ – like painting a Canaletto of Venice with a decorating brush. Where is the accuracy of the response?
Recognising that, as last time, demand for medical services will exceed supply, AI might be used (as part of other scheduling tools) to prioritise medicines and beds, or to triage patients. Real or ‘Strong’ AI has no room for sentiment. So, overall, data-driven information will help create actionable insights, ensuring that decisions are objective and neither political nor emotional. Taken positively, it’s a really powerful point of view.
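To make the idea concrete, here is a deliberately simplified sketch of data-driven triage. The risk weights and patient fields are invented for illustration only – they are not clinical guidance. The point is simply that the ordering comes from data, not sentiment:

```python
def risk_score(patient):
    """Toy risk score: higher means more urgent. Weights are illustrative only."""
    score = patient["age"] / 10
    score += 5 if patient["underlying_condition"] else 0
    score += patient["oxygen_deficit"] * 2
    return score

def triage(patients, beds_available):
    """Allocate scarce beds to the highest-risk patients first."""
    ranked = sorted(patients, key=risk_score, reverse=True)
    return ranked[:beds_available]

patients = [
    {"name": "A", "age": 80, "underlying_condition": True,  "oxygen_deficit": 3},
    {"name": "B", "age": 30, "underlying_condition": False, "oxygen_deficit": 1},
    {"name": "C", "age": 55, "underlying_condition": True,  "oxygen_deficit": 2},
]
admitted = triage(patients, beds_available=2)
print([p["name"] for p in admitted])  # ['A', 'C']
```

Notice what the sketch makes visible: patient B is simply not allocated a bed. That objectivity is exactly the ‘powerful’ and uncomfortable property discussed above.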
Is there a potential dark side of AI in such a scenario?
There might, however, be a negative element. Let’s explore it. The first issue is that a data-driven or AI-driven response depends on having a full picture of the situation. Without one, the data is not representative of the whole, and the suggested actions become biased, whether consciously or unconsciously. Those with least access to devices, or who simply do not have them – typically the poorest in society – will not be able to provide data. Put another way, in a digital age the poor will at best be marginalised during a pandemic. At worst, they could be excluded from the decision-making process entirely. Yet it is they who will be worst affected by the next pandemic.
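The distortion is easy to demonstrate numerically. In this invented example, the group without devices has the highest infection rate, so a ‘data-driven’ estimate built only from connected households understates the true picture:

```python
# Invented figures: population and infections per group.
groups = {
    "well_connected":   {"people": 6000, "infected": 300, "has_device": True},
    "partly_connected": {"people": 3000, "infected": 240, "has_device": True},
    "no_device":        {"people": 1000, "infected": 200, "has_device": False},
}

def infection_rate(selected):
    """Pooled infection rate across the given groups."""
    people = sum(g["people"] for g in selected)
    infected = sum(g["infected"] for g in selected)
    return infected / people

true_rate = infection_rate(groups.values())
observed_rate = infection_rate([g for g in groups.values() if g["has_device"]])

print(f"true rate: {true_rate:.1%}, observed via devices: {observed_rate:.1%}")
# true rate: 7.4%, observed via devices: 6.0%
```

The devices report a milder outbreak than is actually underway – and the 10% of people driving the gap are exactly those the response would then deprioritise.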
Let’s discuss control mechanisms. Used to its full potential, AI could also be used to monitor the activity and behaviour of the population. This could be derived from information about where and how we spend, how far we have travelled, and whether we have left our ‘designated zone’. (Who remembers the Covid ‘Stay at Home’ and ‘5 Mile’ rules?) Consider supply and demand. Could the ‘toilet roll’ panics that left shelves empty be replaced with some sort of rationing? What might be the digital alternative to WW2 ration books?
What about freedom of expression? Perhaps not everyone will agree with governmental decisions. AI in the form of facial recognition could be used to identify those ‘on the streets’. Social media analytics might play a part in identifying those commenting ‘adversely’ on social media feeds (however that might be defined). Matters of trust could also be affected. Will AI in a pandemic give us better information about what is happening? How could we be sure that the ‘Matt Hancock’ or ‘Patrick Vallance’ or equivalent face we see on our screens every evening is real, and isn’t some sort of AI-generated deepfake?
What will actually happen? We need to start practising to learn
Of course, much of this might sound quite dystopian. That’s not the intention. This blog isn’t an updated iteration of ‘Big Brother’ of Orwellian fame. I’m not sure that Orwell ever thought about life during a pandemic. His ‘1984’ was based on other complex political matters rather than the response to a pandemic, even if there are arguably some parallels.
Perhaps all this – the positives and not-so-positives – points to the complexity and contradictory nature of an AI-infused world. How does – or will – the AI Summit respond? How can our leaders, with no experience of this new AI-infused world, really offer sensible guidance? Personally, I’m not sure that they can. Maybe we should just get on with it, and see what happens.
I agree with Tony. We need a healthy balance between considering the potential ethical concerns and setting out policies to mitigate such risks. But we also need to avoid (as Tristan highlighted) the risk of spending so long thinking about it that we never get on with doing anything. The real learning will come through practice, review, reflection, and improvement. It might sound scary, but we need to start some AI version of test-and-control experimentation in our society as well as our businesses. Then we must be open and honest about what we learn and how we will improve in the future. Practice is needed – and watchfulness.