No one could have known then that schools and universities would be closed and that hundreds of thousands of students would not even sit exams; working out what is fair in this unique scenario was never going to be easy.
The use of a statistical model to calculate grades has been criticised as a ruthless way to deal with the aspirations and achievements of a generation of young people. Yet algorithms have been used to standardise results for some time, albeit with better data.
Was this debacle a result of the algorithm itself? Or is this really about openness, accountability and objectivity?
The Committee’s review considered whether the Nolan Principles remain relevant to AI innovation in the public sector – do they need reworking for new technology? And are the existing governance structures of our institutions and regulators able to properly scrutinise AI in public policy making?
The Committee recognises that the use of algorithms and AI in public sector decision-making offers huge potential to help design better policy and deliver more efficient and effective public services. AI is already being used to improve communication and engagement with citizens through chatbots across government, for example.
Our research demonstrated that the seven principles remain a valid guide for public sector practice, and do not need reformulating for AI, but that three principles are particularly relevant – openness, accountability and objectivity:
On openness, the public can only scrutinise and understand the decisions made by government and public bodies like Ofqual if they have access to information about the evidence, assumptions and principles on which policy decisions are being made. Citizens should have access to information about decisions that affect their lives.
Then there is the question of who is accountable for those decisions. The outcome of an AI system is not simply the product of the software itself, or of any single decision maker. All public officials have a degree of professional accountability for their areas of responsibility, but ultimately accountability has to rest with those who chose to adopt and implement the system as part of their responsibility for public service delivery. It is the role of senior leadership to ensure that suitable governance is in place for any risks a system poses, and it is senior leadership who should be held accountable for any decision an AI system takes. Accountability cannot be outsourced to an algorithm.
On objectivity, it is well understood that AI has the potential to produce discriminatory effects if a data set is in some way flawed or an algorithm works in a biased way. In this particular example, the model has been criticised for relying heavily on historical data, because doing so can exacerbate and embed existing socio-economic bias: it may unintentionally favour weaker candidates at schools with strong past results while discriminating against stronger pupils at schools that have historically underperformed.
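The mechanism behind this criticism can be shown with a deliberately simplified toy sketch. This is not Ofqual's actual model; the standardisation rule, the school, the pupils and the grades below are all invented for illustration. The sketch assumes a rule that forces each school's 2020 grades to match that school's historical grade distribution, using teacher rankings only to order pupils within it:

```python
# Toy illustration (NOT the real Ofqual model): a "standardisation" rule that
# replaces each pupil's teacher-assessed grade with the grade at their rank
# position in the school's historical distribution. All data is invented.

def standardise(pupils, historical_grades):
    """Assign each pupil the grade at their rank position in the school's
    historical distribution, ignoring their individual attainment.
    Alphabetical order of these grade labels happens to match grade order
    (A* before B before C before D), so a plain sort puts best grades first."""
    historical = sorted(historical_grades)                  # best grades first
    ranked = sorted(pupils, key=lambda p: p["rank"])        # best pupils first
    return {p["name"]: historical[i] for i, p in enumerate(ranked)}

# A historically low-performing school: its best past grade was a B.
history = ["B", "C", "C", "D"]
pupils = [
    {"name": "Asha", "rank": 1, "teacher_grade": "A*"},  # strong outlier pupil
    {"name": "Ben",  "rank": 2, "teacher_grade": "B"},
    {"name": "Cara", "rank": 3, "teacher_grade": "C"},
    {"name": "Dev",  "rank": 4, "teacher_grade": "C"},
]

result = standardise(pupils, history)
print(result["Asha"])  # "B" — downgraded from A*, capped by the school's history
```

However able Asha is, the rule cannot award a grade her school has never achieved, which is precisely the embedded bias the criticism describes.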
Our research found that, far from needing a super-regulator or new and unique principles, successful AI governance is a question of clear regulation and proper controls for understanding, managing and mitigating risk.
Among other recommendations, we called on Government Departments and public bodies to assess the potential impact of a proposed AI system on public standards at the design stage and to continue to review and monitor the impact on standards every time a substantial change is made.
We also recommended that to remain accountable for their decisions, public bodies need to enable people to challenge decisions and seek redress using procedures that are independent and transparent. This is the case whether AI is used or not.
We hope that this crisis acts as a clarion call for government, public bodies and regulators to act swiftly to implement our recommendations. AI and decision-making by algorithm may be a new challenge, but it can be managed with existing tools and established principles.