Combatting the Code book forum - Author Response
Yee-Fui Ng replies to reflections from Anna Huggins, Frank Pasquale and Alexandra Sinclair on her book Combatting the Code: Regulating Automated Government Decision-Making in Comparative Context. To see all posts, please click here.
Yee-Fui Ng
28.11.2025
I am deeply grateful to Frank Pasquale, Anna Huggins, and Alexandra Sinclair for their insightful comments on my book. It is a pleasure to have such deep and sustained engagement with my work.
My decision to write this book stemmed from the large-scale scandals and controversies arising from automated decision-making that erupted in democracies such as Australia, the United States (US) and the United Kingdom (UK), harming hundreds of thousands of people. The echoes of Robodebt in Australia, the MiDAS automated system in Michigan, and the UK Post Office scandal reverberate to this day, and prompted me to ask: how did things go so wrong in these advanced liberal democracies, with their sophisticated checks and balances?
I was fortunate to be able to travel to the comparator jurisdictions as part of my research. I visited New York University on a Fulbright Scholarship, and Oxford University for my first sabbatical. The book is thus enriched by the insights from my discussions with leading academics, political advisers and civil society organisations in these jurisdictions.
My intention in writing the book was therefore to devise a governance framework incorporating legal, political and managerial controls for automated government decision-making, in order to prevent such disasters from recurring.
The book shows that legal challenges through the avenues of judicial review, anti-discrimination, data protection and public sector privacy, and freedom of information (FOI) have met with varying degrees of success. This suggests a need for legal reform to better tailor laws to the age of artificial intelligence (AI).
Political controls operate alongside legal mechanisms, through the activities of parliamentary committees and oversight bodies, such as ombudsmen, auditors and information commissioners, who scrutinise government action. These bodies have ventilated many issues relating to the use of automated decision-making in government through public inquiries, and have set standards and guidelines that operate across government.
By the time we get to a legal challenge, it is almost too late, as the harm has already been done to many vulnerable individuals. This points to the need for prospective controls within departments and agencies in terms of risk and impact assessments, as well as internal and external auditing processes.
Thus, the book proposes the managerial aspect of the accountability framework: the customs, practices and policies within government agencies that are perceived as binding by front-line administrators and are backed by sanctions, together with management structures that allow for oversight of agency operations.
I argue that this three-pronged approach of legal, political and managerial controls is required to address all dimensions of the design, implementation and auditing of automated technologies through both internal and external oversight processes, as well as prospective and retrospective measures.
Pasquale and Reason-Giving
Pasquale delves into the difficulties of reason-giving in the generative AI context. His post identifies a critical tension that sits at the heart of contemporary administrative law: as generative AI becomes capable of producing increasingly sophisticated justifications for decisions, we risk creating what he aptly terms ‘reasons without human reason-givers’—a simulacrum of accountability that undermines the very legitimacy it purports to establish.
Pasquale’s focus on the Nevada unemployment benefits case is particularly illuminating. The promise that AI can reduce decision-making time ‘from three hours to five minutes’ reveals the economic logic driving algorithmic adoption, but also exposes a troubling conflation: the assumption that producing a textual justification is equivalent to engaging in the intellectual and moral work of justification. As Pasquale rightly emphasises, these are fundamentally different undertakings.
Pasquale also highlights the inadequacy of the ‘human in the loop’ metaphor that dominates discussions of AI governance. The Nevada system, like many proposed safeguards, positions human reviewers as error-checkers rather than decision-makers. This represents a profound inversion of proper roles for administrative decision-makers: the AI makes the substantive determination, and the human validates it, rather than the human making the decision with computational assistance.
The doctrine of non-fettering discretion, which I discuss and which Pasquale generously highlights, speaks directly to this problem. If an administrator becomes so dependent on an algorithmic system that they effectively surrender their discretion to it, they have fettered their decision-making authority in a manner that may render decisions unlawful. The fact that they retain nominal authority to override the system does not cure this defect if, in practice, they consistently defer to its outputs.
Sinclair and the UK Context
Sinclair’s post illuminates the gap between the law on the books and the law in action in the UK. She argues that the UK’s seemingly robust regulatory architecture has not prevented widespread unlawful automation, and points to the lack of successful challenges under the General Data Protection Regulation (GDPR) in the UK, the ineffectiveness of FOI laws, and poor enforcement by the UK Information Commissioner.
My book acknowledges the limitations of Article 22 of the GDPR due to its requirement that a decision be ‘solely automated’. Although Article 22 of the GDPR has not been successfully deployed in the UK to date, it has been utilised successfully in EU jurisdictions, where more recent case law has taken a more critical approach to merely nominal human involvement, leading to successful challenges. These include the ‘robo-firing’ cases, in which a group of drivers successfully appealed against Uber and Ola, challenging decisions made using opaque algorithms that managed, fined, and fired workers (see p 113 of my book).
FOI is certainly a mixed story in the UK, with some successes and some failures in obtaining information about AI systems. The failings of the ‘pull’ method of extracting information from government through FOI animate the central recommendation in my book of a ‘push’ model: a proactive, mandatory, centralised transparency register of AI tools in government, properly enforced by an independent regulator. This is an ideal we have not yet seen realised in the jurisdictions examined, as Sinclair’s discussion of the UK Information Commissioner highlights.
Sinclair questions the book’s focus on substantive rationality. It was a mammoth task to compare three distinct jurisdictions: two derived from the Westminster tradition (the UK and Australia), with comparable institutions, and the US, a jurisdiction with a different constitutional and administrative law framework and very divergent interactions between institutions. Consequently, I chose to focus only on grounds that have actually been mounted as legal challenges in the various jurisdictions, which led to my focus on rationality. Whilst there are many other grounds of judicial review that are ripe with potential in an AI context, as commentators such as Rebecca Williams have highlighted, my comparative task was to map US, UK and Australian legal concepts in a way that bridged the differences in their constitutional, human rights and administrative law frameworks.
This led to the book’s focus on three main forms of challenging government AI decision-making: challenges to the input (through anti-discrimination), output (through rationality) and use of data (through data protection and privacy).
Sinclair rightly emphasises that identifying relevant legal doctrines is only ‘part of the story’—we must also examine how these frameworks operate in practice, and why they so often fail to provide meaningful accountability. I look forward to the future empirical research Sinclair calls for, and I hope my book can serve as one foundation for that essential work.
Huggins and the Australian Context
Huggins’ commentary captures the core arguments of the book and enriches the conversation by connecting them to the most recent developments in automated government decision-making in Australia.
Huggins draws attention to the Commonwealth Ombudsman’s damning August 2025 report on the Targeted Compliance Framework debacle, dubbed by some commentators as Robodebt 2.0. The repeated controversies surrounding automation in the Australian government, despite the extensive ventilation of Robodebt through court challenges, a royal commission, and a current National Anti-Corruption Commission investigation, point to the need for more concentrated reform efforts.
The glacial pace of legal reform in Australia belies the importance of the issue at hand. Two years after the royal commission, a public consultation issued by the Attorney-General’s Department in 2024 has yet to report. Huggins’ analysis of the challenges facing legislative reform efforts is astute. The tension she identifies between the push for AI regulation and the Productivity Commission’s warnings about over-regulation reflects a familiar pattern in Australian and international policy debates.
Ultimately, Huggins’ commentary reinforces my central argument: public law must evolve to meet the challenges posed by automated government decision-making, but legal reform alone will not suffice. We need a multi-dimensional approach that combines legislative safeguards, robust judicial oversight, political accountability, and—critically—improved internal governance within agencies themselves. Only then can we hope to prevent Robodebt 3.0.
Conclusion
Automated governance is here to stay, but only through a process of self-reflection, legislative and institutional reform, judicial reinterpretation and strengthened scrutiny by oversight bodies can governments avoid repeating the mistakes of the past.
Once again, I am indebted to the brilliant panel for their incisive comments on my book.
It is my hope that my book will provide a framework for governments to take a more deliberative approach to adopting AI and automation, and to ensure that these programmes are implemented in a fair, effective and accountable manner, without harming vulnerable citizens.
Yee-Fui Ng is an Associate Professor at Monash University who researches in the areas of public law and political integrity, and a 2021-22 Fulbright Scholar.
Suggested citation: Yee-Fui Ng, ‘Combatting the Code book forum - Author Response’ (28 November 2025) <https://www.auspublaw.org/blog/2025/11/combatting-the-code-book-forum-author-response>