Struggling to combat the code: An alternative account of reviewing automated decision-making in the UK - Combatting the Code book forum

Alexandra Sinclair provides the third post in our book forum on Yee-Fui Ng’s Combatting the Code: Regulating Automated Government Decision-Making in Comparative Context. To see all posts, please click here.

Alexandra Sinclair

26.11.2025

Introduction

New technologies always lay bare the ambiguities present within existing frameworks of legal regulation. The use of artificial intelligence (AI) and automation in public-sector decision-making is no different. Automation in the administrative state has amplified the already contradictory and ambiguous aspects of administrative procedure. Public-sector automated decision-making raises a series of novel questions: does administrative decision-making necessitate a process of human cognition? Can consultation obligations be met through AI simulation of public opinion? To what extent must courts exercise deference in evaluating the predictive assessments of AI models? And can the source code and training data of machine learning models satisfy the requirements of reason-giving?

Using existing laws to regulate AI

Yee-Fui Ng’s new book Combatting the Code: Regulating Automated Government Decision-Making in Comparative Context addresses head-on the question of how automation exposes gaps in legal regimes. The book selects four existing public law frameworks: rationality review, anti-discrimination law, freedom of information law and data protection law, and examines their present efficacy in regulating automated decision-making in the public sector. Ng analyses three jurisdictions: the United States (US), Australia and the United Kingdom (UK). This is a valuable project examining the extent to which existing public law frameworks can exercise effective oversight over automated systems. It is all the more important because in her chosen jurisdictions AI-specific regulation looks increasingly unlikely. In the US, Republicans attempted to impose a moratorium on state AI regulation and the Trump administration revoked Biden’s Executive Order regulating AI. In the UK, both the last Conservative administration and the present Labour administration have promised a ‘pro-innovation’ and ‘turbo-charged’ approach to AI and have rebuffed calls for AI-specific regulation in favour of a sectoral approach. In Australia there has been no action on implementing proposed AI guardrails, and a recent report from the Productivity Commission recommended AI-specific regulation only as a ‘last resort’.

Given this political context, Ng’s project of testing whether the existing regulatory architecture can facilitate effective oversight of automated decision-making in the public sector is all the more significant. Such an exercise is also fundamental to any future law reform effort: only with a clear picture of what the law currently regulates can gaps be identified and a case made for how they should be filled.

Ng’s book comes from the perspective of an Australian academic, highlighting how few express legal provisions pertaining to automated decision-making, data protection or human rights exist in Australian law compared to its UK and European counterparts. The book suggests that Australia needs to follow the UK and Europe and implement legal reforms, particularly to Australia’s privacy regime. This point is well made, and no doubt Australia could benefit from stricter regulation of the processing of personal data. However, while the UK certainly has more legal provisions that purport to regulate automated decision-making than Australia, it is doubtful this has led to more responsible automated decision-making in government or to success for litigants challenging automated decisions in court. Despite automation being endemic in many government processes, there have been very few legal challenges to automated decision-making in the UK over the last ten years.

The need for further research

Only so many conclusions about how to effectively regulate automated decision-making can be drawn from the text of the legal frameworks and the (limited) case law. Ng’s book is an essential building block for future research, and it highlights that what is sorely needed is empirical work studying how public law frameworks operate in practice. Examination is needed of the barriers litigators and litigants currently face in bringing claims challenging automated decisions, and of the barriers regulators face in securing effective enforcement. It is also necessary to examine the broader structural barriers that have long made challenging government secrecy difficult and which are particularly pernicious in the automated context.

The realities of the UK’s AI governance

There are at least four ways in which the UK’s current framework is failing in effective oversight of public sector automated decision-making:

  1. Ng points to Article 22 of the General Data Protection Regulation (GDPR), the right not to be subject to a solely automated decision, and to Articles 14 and 15, the ability to obtain ‘meaningful information about the logic’ of automated decisions. However, there are no court decisions in the UK where Articles 14, 15 or 22 have been successfully relied upon. As Ng points out, Article 22 has serious limitations because it applies only to solely automated decisions. Where a government body states that there was meaningful human oversight, this is virtually impossible for a claimant to disprove given the significant informational asymmetries.

  2. The UK’s freedom of information laws have been largely ineffective at obtaining information about automated decisions. Much government automation in the UK occurs in the context of fraud detection and immigration. Government departments frequently rely on s 31 of the Freedom of Information Act 2000 (UK) (FOI Act) on the basis that disclosure of information about an automated system would prejudice the prevention or detection of crime or immigration control, and refusals in reliance on s 31 are commonplace. Ng notes that the Public Law Project received information under the FOI Act about the automated tool used by the Home Office to identify possible sham marriages for investigation. However, because the Equalities Impact Assessment was heavily redacted, the Public Law Project brought a claim seeking disclosure of the criteria used by the tool, arguing that the Home Office had wrongly relied on s 31 of the FOI Act. Both the First-tier Tribunal (Information Rights) and the Upper Tribunal upheld the Home Office’s decision to refuse to disclose the criteria. The criteria used to select couples for sham marriage investigations remain unknown.

  3. The body in charge of enforcing both the FOI and GDPR regimes is presently failing in that role. Ng highlights that on paper the Information Commissioner is a powerful regulator, noting ‘its strong enforcement powers, with the ability to compel organizations to comply with the data protection laws and issue significant penalties’. However, this is a description of the Information Commissioner that few in the UK would recognise at present. In 2024/2025 the UK Information Commissioner issued no enforcement notices, nine reprimands and only two GDPR fines, compared with Germany and Spain, each of which issued over 200 fines. Erdos lays the blame on the high level of discretion the Information Commissioner retains to downgrade or decline to issue fines, and on pressure from government not to undermine the UK’s pro-innovation data economy by enforcing data protection laws too heavily.

  4. There have been numerous examples of unfair and unlawful automated decision-making in the UK over the last ten years. Migrants in London and disabled communities in Manchester have had benefits suspended on the basis of algorithmic outputs. Students in the UK had their visas cancelled because of highly unreliable voice recognition software. During the Covid pandemic the Office of Qualifications and Examinations Regulation initially allocated students’ A-level grades via algorithm. The Secretary of State continues to identify potential sham marriages via a machine learning model. All of these instances occurred while the frameworks Ng describes were in place. This raises a question on which further research is clearly needed: are these examples simply standard instances of administrative law non-compliance, which has long been a feature of government administration? Or do they suggest that the nuances and ambiguities in the application of public law frameworks to automated systems are preventing those frameworks from having their desired regulatory effect?

Substantive rationality

Finally, I wish to address the book’s focus on substantive rationality or reasonableness review under English law. Although judicial review is traditionally the strongest public law control on government decision-making, Ng analyses only one ground of review in the book: rationality. The selection of substantive rationality/unreasonableness as the UK ground is somewhat surprising. Judicial review is primarily a series of process doctrines: most determine whether a fair, lawful and rational decision-making process has been undertaken. For example, the courts assess whether an individual was given notice of the decision, the opportunity to provide relevant information, and reasons for the decision made.

Ng’s choice to examine substantive rationality means there is no sustained analysis in the English context of the ways that machine learning models might subvert or hinder traditionally fair administrative procedures, for example through opaque systems preventing transparent notice of, and reasons for, a decision, or through correlative inference inhibiting the careful consideration of only relevant and probative information. Examining these process doctrines may have been a more natural fit for the comparative scholarship, given that in the US context Ng does examine the doctrine of due process, which is much closer to the English procedural fairness doctrine.

Courts are more reluctant to interrogate the lawfulness of the substantive outcome of a decision because doing so risks intruding on the political domain and disrupting the allocation of power between the executive, the legislature and the courts. The courts will intervene on the basis of the substantive outcome alone only where ‘no reasonable authority…could have come to it’. This standard attempts to balance ensuring that powers are exercised rationally against the temptation for a reviewing court to substitute its own judgment for the decision-maker’s discretion. Moreover, automated systems are often used in contexts requiring the mass administration of government programmes. The government’s interest in delivering those programmes is generally a weighty consideration in the balance, and governments spend large sums of money procuring and developing digital systems. Given this dynamic, I think it is rare that substantive rationality will be a fruitful ground for claimants. As the Court of Appeal stated in Pantellerisco, one of the Universal Credit cases mentioned by Ng, there is an ‘extraordinary complexity of designing a system such as universal credit, and…it necessarily involves a range of practical and political assessments of a kind which the Court is not equipped to judge’.

Conclusion

In conclusion, Ng’s book is required reading for contemporary public law scholars, barristers, solicitors and government lawyers, as public-sector decision-making is increasingly made up of automated components. It provides a comprehensive starting point for any legal representative or litigant seeking to determine which laws and legal frameworks they can use to challenge an automated decision. The comparative element is also necessary given how few legal challenges to automated decision-making there have been within any one jurisdiction: Ng’s book gives practitioners material to draw on the approaches of other courts even where there is no relevant case law in their own jurisdiction. It also provides an essential foundation for future scholarship.

Dr Alexandra Sinclair is a Postdoctoral Research Fellow at Sydney Law School and a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society, which receives funding from the Australian Research Council (CE200100005).

Suggested citation: Alexandra Sinclair, ‘Struggling to combat the code: An alternative account of reviewing automated decision-making in the UK - Combatting the Code book forum’ (26 November 2025) <https://www.auspublaw.org/blog/2025/11/struggling-to-combat-the-code-an-alternative-account-of-reviewing-automated-decision-making-in-the-uk-combatting-the-code-book-forum>
