Last month the Committee published its review into Artificial Intelligence and Public Standards. This blog reflects on the major developments in AI in the public sector that led to our review, examines progress made on the Committee's recommendations, and identifies the key organisations to follow to keep up with the UK's AI ethics debate.
A Tale of Three Reviews
At the request of DCMS and BEIS, Dame Wendy Hall and Jerome Pesenti reported on the potential of AI in the UK in October 2017. Their report, 'Growing the Artificial Intelligence Industry in the UK', made recommendations across four areas: data, skills, research and adoption.
Parliamentarians began examining the opportunities and challenges of AI with the establishment of the House of Lords Select Committee on Artificial Intelligence. Chaired by Lord Clement-Jones, the Committee published its report 'AI in the UK: Ready, Willing and Able?' in April 2018.
These reports, along with developments in UK policy on AI, set the stage for CSPL's examination of how AI can be introduced to the UK's public sector in a way which upholds - rather than undermines - the Seven Principles of Public Life.
Government action: The UK's Specialist AI Institutions
Four new institutions are at the centre of the UK's AI strategy: the Alan Turing Institute, the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, and the AI Council.
The Hall-Pesenti report recommended that the Alan Turing Institute should become the UK's national centre for artificial intelligence and data science. Based at the British Library, the Turing Institute brings together academics from 13 leading UK universities to study the impact AI will have on our society.
The government's Industrial Strategy, published in November 2017, pushed AI to the fore of Britain's economic ambitions. AI was one of four 'Grand Challenges' identified as key to Britain's future. The Industrial Strategy led to 2018's £950m AI Sector Deal, designed to catalyse AI research, development and adoption in the UK. A new government office - the Office for AI - was established to implement the deal. It sits jointly under DCMS and BEIS.
The Industrial Strategy recognised that ethics would be key to the successful adoption of AI in the UK, and led to the establishment of the Centre for Data Ethics and Innovation. The Centre currently operates out of DCMS, but the government intends to place it on a statutory footing.
The AI Council is an independent committee of experts from industry, academia and the public sector that advises government on policy and research.
Progress on CSPL Recommendations
The Committee made recommendations to these bodies and others to strengthen the UK's ethical framework around the deployment of AI in the public sector. In some areas, progress has already been made since the turn of the year.
We concluded that there should not be a new AI-specific regulator for the UK, but rather that all existing regulators must meet the challenges AI poses to their sector. A new body is needed, however, to help existing regulators in this task. It was clear to us that the Centre for Data Ethics and Innovation is best placed to fulfil this role, helping both regulators and government keep on top of any new regulatory challenges. We were therefore pleased to see that the Centre recently published its first set of recommendations to government, on the issue of online targeting, and we hope to see further progress soon on the Centre's transition to a statutory footing.
The Committee also recommended that guidance on AI in the public sector, jointly published by the Office for AI, the Government Digital Service and the Turing Institute, be made easier to use and promoted more extensively. An edited version of the guidance was released in January this year and is significantly more user-friendly. We hope the guidance is adopted widely.
We are closely following developments in public sector procurement, after the release of the Draft Guidelines for AI procurement in September last year and ahead of the planned launch of the Crown Commercial Service's new AI framework in August 2020. The Committee recommended that the government use its market power to incentivise private companies to produce ethics-compliant AI products; we hope the new framework delivers on that goal.
We have also been pleased with the reception from key stakeholders to our recommendation that new guidance be developed on the Equality Act and artificial intelligence. This was one of the Committee's most important recommendations and we look forward to seeing progress in this area.
The Committee will be publishing a series of blogs on some of our other recommendations throughout the rest of the year.
Keeping on top of the AI ethics debate
As well as the organisations outlined above, the UK has a flourishing network of academic institutes, think tanks, and policy networks examining AI and its ethical challenges.
Academics at the Oxford Internet Institute and the Leverhulme Centre for the Future of Intelligence regularly publish sector-leading research, such as this paper by Sandra Wachter, Brent Mittelstadt, and Chris Russell on EU non-discrimination law and AI. The University of Buckingham hosts the new Institute for Ethical AI in Education.
Public sector organisations are also examining the impact of AI in their particular domains. For example, the Lord Chief Justice has set up an AI advisory group and the Department of Health and Social Care has established NHSX. The new London Office of Technology and Innovation (LOTI) seeks to strengthen digital innovation in local government across the capital, and ACAS, the independent government-funded arbitration body, has just published a report on algorithms in the workplace.
Think tanks and professional associations regularly contribute to the UK's AI debate too. The Royal United Services Institute (RUSI) recently published a report on data analytics and algorithms in policing, following the Law Society's examination of algorithms in the criminal justice system. The Ada Lovelace Institute studies the impact of AI on society, and the innovation foundation NESTA has examined the impact of AI in government.
Journalists and civil liberties groups keep a close eye on the expansion of AI in the UK. Last year the Bureau of Investigative Journalism investigated government data systems. Big Brother Watch and Liberty campaign on issues of tech-enabled surveillance.
Those wanting to look at real-life case studies of AI in public service delivery could start with Ofsted's use of machine learning to risk-assess its inspections, the research currently being undertaken at Moorfields Eye Hospital in collaboration with DeepMind, or West Midlands Police's Data Analytics Lab, which is overseen by its own specialist ethics committee. The Metropolitan Police is also moving forward with the implementation of live facial recognition.
CognitionX and Tech UK are invaluable networks for those wanting to find out more about the state of AI in the UK.
This list is not exhaustive. AI, in the UK and across the world, is a fast-moving field. CSPL made its recommendations based on the state of play at the turn of the year, and we look forward to seeing how they are implemented as the technology and the policy landscape continue to develop.