Development and AI:
the future of British leadership
Resham Kotecha
A Conservative Party candidate, Social Mobility Commissioner and Head of Engagement for Women2Win. She works in data policy and sits on the Government’s Smart Data Council.
Imagine a world where technology could use mobile phone data to identify people in financial need and transfer money directly to them. Imagine a world where we could use facial recognition to help refugees find their family and friends, whilst also providing an education to children living in refugee camps. Imagine a world where we could forecast impending natural disasters and warn people with enough time to evacuate.
Thanks to Artificial Intelligence, we don’t have to imagine – we already live in a world where we can do all this, and more.
Technological progress has always offered a way to lift people out of poverty and to alleviate global challenges. The industrial revolution lifted millions of people out of poverty and improved living conditions at a pace never before seen in history. New AI technologies could turbocharge global efforts to achieve the Sustainable Development Goals and to end absolute poverty. They could provide vital assistance during crises, and they could help fight climate change. Projections suggest that advances in AI could double economic growth rates by 2035. If we plan effectively, technology could once again lift many millions of people out of poverty in a short time.
The Government has made clear its intention to be a world leader in AI. It is busy preparing for the Global AI Safety Summit in a few weeks – the first of its kind. At UNGA, the Foreign Secretary announced the UK’s ‘AI for Development’ programme – and set out the UK’s vision for using AI to benefit the world’s poorest.
Today, 700 million people are living in extreme poverty (living on less than $2.15 a day). Almost 80 million people are displaced due to conflict and persecution. Over 140 million people could be displaced by climate change in the next 25 years. Conflict, persecution, and climate change disproportionately affect resource-constrained regions. These threats are putting pressure on the most vulnerable in the world – the people who are least able to mitigate their significant and traumatising impacts.
The use of AI in development is nascent, but growing quickly. It is enabled by rapidly expanding datasets, impressive improvements in computing power, increasing global digital connectivity and accelerating advances in algorithmic design. The application of AI, including Machine Learning (ML), offers impressive potential across agriculture, healthcare, humanitarian crises, education, and climate. AI can build on data-driven ML to forecast disasters such as flooding, drought, and famine. It can use historic data to predict the movements of displaced migrants, and can track terrorist groups through social media. It can process images, text, and social media posts at an incredible rate, and then alert emergency services to human rights abuses and people trafficking.
However, the risks associated with AI are as great as its potential. Algorithms can perpetuate racial stereotypes, discriminate against minority groups, and embed systemic injustices. They can lead to unfair outcomes for minority groups, restrict access to resources, and invade people’s privacy. These issues matter most in areas of political instability, where there is a history of ethnic tension, and where populations are already in conflict. The opacity of algorithms makes it challenging to recognise when they are amplifying inequalities, and it can be close to impossible to establish accountability, assign responsibility, or seek redress for negative consequences.
These powerful technologies are shaped by the data that feeds them. This means that their applications are only as good as the data that they rely on. A lack of relevant or timely data can hinder the development of suitable algorithms. When evolving human behaviour and environmental factors are not accounted for, algorithms can provide deeply flawed predictions. Biased data or bad data will inevitably lead to biased or bad algorithms.
Algorithmic predictions trained on data from people with different cultures, behaviours, and life circumstances are likely to be less accurate, and in critical situations might put people in harm’s way. These negative consequences can be exacerbated by the biases within those datasets, and by the biases of those creating and using them.
Rigorous data and algorithmic evaluation can be labour- and cost-intensive. It requires those leveraging the technologies to have the knowledge, skills, and resources to assess the outcomes, and requires governments to establish a data ecosystem and framework that support effective application and evaluation.

As we look to the AI Safety Summit, and beyond, our approach to AI should leverage its incredible potential to achieve the UN Sustainable Development Goals and to improve the lives of millions. Our ambition should be to develop a global strategic roadmap to catalyse the progress of AI for development. We should use a participatory approach, making sure technologies are designed and assessed by a diverse group of people. We should encourage an open data approach where it is safe to do so, so that algorithms, and their outcomes, can be tested. And we should aim to be world leaders in the data literacy, data analysis and data cleansing needed to truly benefit from algorithmic power.
Most importantly, our approach to AI in Development should include a renewed commitment and passion to ending poverty – at home, and around the world.