AI brings potentially huge benefits to the public sector but we need clear standards and greater transparency

Categories: Artificial Intelligence, Ethical standards, Nolan Principles

[Image: the words 'review into artificial intelligence' above four icons showing a code of conduct, a person sitting at a laptop, binary code, and a circuit board]

ChatGPT and other AI tools are starting to transform our interactions with a range of organisations, including in the public sector. The benefits and risks involved were highlighted in the media recently by educationalists concerned about the lack of advice for schools on handling developments in AI.

Back in 2020, pre-pandemic, the Committee produced a report and recommendations on how we ensure that high ethical standards can be upheld as technologically assisted decision making is adopted more widely across the public sector.

Honesty, integrity, objectivity, openness, leadership, selflessness and accountability were first outlined by Lord Nolan in 1995 as the standards expected of all those who act on the public’s behalf. They are the basis for codes and rules guiding conduct and decision making across the public sector. But what happens when decisions are made or assisted by a machine? 

Our 2020 report argued that adherence to high public standards will help fully realise the potential benefits of AI in public service delivery. All seven Nolan principles are relevant and valid as AI is increasingly used for public service delivery, but we found AI poses a challenge to three Principles in particular: openness, accountability, and objectivity. 

Our report argued that public sector organisations are not sufficiently transparent about their use of AI and that it is too difficult to find out where machine learning is currently being used in government.

We called on government to establish a coherent regulatory and governance framework for AI in the public sector, and to produce practical guidance and enforceable regulation on transparency and data bias as a matter of urgency.

At that time, the weight of evidence was that the UK did not need a specific AI regulator, so we recommended that all regulators step up to the challenges that AI poses to their specific sectors, with the Centre for Data Ethics and Innovation (CDEI) acting as a centre for regulatory assurance to assist regulators in this area.

Our report said that public bodies using AI to deliver frontline services must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI to prepare for the changes AI will bring to public sector practice. We also asked government to use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.

The Committee has retained a close watching brief on progress in this area and intends to hold a round table in the autumn to assess whether, three years on, we are developing the frameworks and governance needed to give the public confidence that new technologies will be used in a way that upholds the Seven Principles of Public Life as the public sector moves into a new AI-enabled age.

There are exciting benefits to be gained from this new technology, but government and regulators need to act swiftly to keep up with the pace of AI innovation.

Download the Committee’s report on AI and public standards

Watch a short film about the work of the Committee on Standards in Public Life
