Maria Luciana Axente

How do we ensure the responsible use of AI by Governments?

With robust public policy

Lessons learned from the UK

Important lessons can be learned from recent cases of misuse, underuse or overuse of algorithms in high-stakes applications, widely reported in the media in 2020. Perhaps the most important lesson concerns the role of public confidence in the successful use of AI and algorithms in the public sector and beyond. Various reports and studies have highlighted that AI can contribute positively to the challenges facing our societies, but to realise the benefits and mitigate the potential risks, AI must be adopted responsibly: carefully, and with the agreement of the people it will affect, in the private sector but increasingly also in the public sector. Public policy in the UK has fostered public trust and confidence in AI and algorithms by focusing on understanding and explaining both the genuine potential and the limitations of these technologies in real-life use cases.

In 2021, the UK’s Office for Statistics Regulation released a report highlighting the limitations of the statistical models behind algorithms (factors included the precision of the problem definition, the available data, the volatility of the objective variable, the assumptions made, and data quality). It also outlined the role of accountability across the complex social, economic and political contexts in which those algorithms operate.

When the pandemic hit, it was only natural to gravitate toward technology and AI as tools to alleviate some of the hardships it created. A 2021 report from the UK Government’s Centre for Data Ethics and Innovation on the use of AI and data-driven technologies in the pandemic stressed that “artificial intelligence did not play the outsized role many thought it would in relief efforts.”

According to the PwC Global CEO Survey, the current economic and societal environment has shifted decision makers’ focus towards a number of pressing societal problems, including the pandemic and health crisis, cyber threats and overregulation, as well as climate change and increasing inequality. According to the PwC Responsible AI Study, responsible use of data and AI remains a priority for many executives, as a component of multi-faceted solutions to business problems rather than as a standalone ‘silver bullet’.

Organisations and policymakers looking to realise the benefits of data-driven technologies, including AI, recognise the need to build long-term trust in the way they design, use and, most importantly, govern those initiatives; this is paramount for achieving responsible and ethical outcomes. Balancing the need for speed and agility in deploying AI with the proactive mitigation of possible negative consequences can be tricky, especially in the public sector, where incentives and responsibilities differ from those of corporates and where public pressure and potential consequences are more acute.

It is in this arena that well-considered public policy can make a significant contribution. Robust responsible AI policy in the public sector can address a number of the challenges of balancing benefits with risks and clarify how and where AI should be used in particular public contexts. In addition, it can help to shape solutions to some of the big challenges in the AI domain, including fostering collaboration and co-design of AI, educating and empowering citizens, and designing an agile regulatory framework.

The UK has been able to navigate the landscape of AI ethics effectively and respond quickly to controversies over the role of algorithms in public life, in part thanks to a well-developed public policy debate that has taken place over several years. The UK provides an example of how good policy has a transformational impact on how AI is used in society.

Public policy as a convener for a thriving AI ecosystem

In 2017 the UK government commissioned a report aimed at understanding the potential impact of AI on the economy and laying out key recommendations for the government to take forward. “Growing the artificial intelligence industry in the UK”, an independent review carried out by Professor Dame Wendy Hall and Jérôme Pesenti, not only informed the UK AI Sector Deal, the first national plan on AI, but also laid the foundation for the UK AI ecosystem. The report’s richness and depth was informed by consultation with a wide variety of stakeholders, from government departments, academic and research institutions to civil society and for-profit and not-for-profit organisations. It was the first time these groups came together under the umbrella of AI for the UK; through a series of events and workshops, rich and diverse perspectives were collected and a cross-disciplinary collaborative community was created. This collaborative ethos continues to link government departments with academia, think tanks and businesses to co-create and iterate the story of AI in the UK. The variety of perspectives, views and experiences contributes strongly to the UK ecosystem’s strength and ethical foundation.

Public policy for setting the tone of AI in the UK

Additional policy work followed, aimed at exploring the role of AI in the economy and everyday life. The UK Parliament’s House of Lords Select Committee on Artificial Intelligence framed topics around the impact of AI on individuals, potential opportunities, possible risks and implications, responsible public engagement and ethical issues. Its report, “AI in the UK: ready, willing and able?”, remains to date one of the most influential public policy documents on AI, inspiring many nation states and supranational organisations to consider a more holistic approach to and framing of AI, covering societal impact, risks, ethics and public engagement. Members of the committee later set up the All-Party Parliamentary Group on Artificial Intelligence (APPG AI), a cross-parliamentary group exploring the implications of AI in the UK, which over the last four years has become the UK’s main hub for AI policy debate.

Public policy to empower stakeholder advocacy

Various policy documents in the UK were written for broader audiences; the quality of the content and, importantly, the drafting process made them appealing and accessible to a wide range of stakeholders who wanted to understand how AI might impact their lives and work, and how they could contribute. For example, the Royal Society partnered with the Ada Lovelace Institute, the Alan Turing Institute, the British Academy, DataKind UK, the Leverhulme Centre for the Future of Intelligence and the Open Data Institute to explore citizen data science and the “application of practical governance mechanisms in civil society, in recruiting and using volunteer data science skills and in enabling the collaborative use and maintenance of open datasets.”

Public policy as a source for robust, focused and agile regulation

A number of UK regulators have also been key stakeholders in the AI policy space. Not only were they contributors to the policy debates, but they set up and ran collaborative projects aimed at a deeper understanding of potential issues. For example, Project ExplAIn, a collaboration between the Information Commissioner’s Office (ICO) and The Alan Turing Institute (The Turing), came about as a result of Professor Dame Wendy Hall and Jérôme Pesenti’s 2017 independent review, followed by the Government’s AI Sector Deal. These tasked the ICO and The Turing to “…work together to develop guidance to assist in explaining AI decisions”. Regulators also joined the discussion around which topics may require hard regulation and which could be addressed through a guidelines-based approach.

The publication of those reports has had an important impact on the governance of AI systems in the public space, both by providing robust guidance on the use of AI across multiple use cases and by educating the public, government agencies and other organisations on the benefits and limitations of the technology.

Creating a responsible AI ecosystem requires multiple stakeholders with diverse perspectives to come together and contribute to a robust public policy debate. The UK leads by example on responsible AI by fostering collaborative and inclusive platforms for discussion at many different levels. From researchers to policymakers, civil servants to activists, ethicists to engineers, public policy benefits from multiple voices contributing in a shared forum. With the new UK AI national strategy set to be released later this year, the UK will continue to contribute to the responsible deployment of this revolutionary technology.

Maria Luciana Axente, Responsible AI & AI for Good Lead at PwC UK | Award-winning global AI Ethics expert and advisor | Forbes Women Defining The 21st Century AI Movement