Regulating Automated Government Decision-Making: An Australian Perspective - Combatting the Code book forum
Anna Huggins provides the first post in our book forum on Yee-Fui Ng’s Combatting the Code: Regulating Automated Government Decision-Making in Comparative Context. To see all posts, please click here.
Anna Huggins
24.11.2025
Associate Professor Yee-Fui Ng’s new book, Combatting the Code, makes an important and timely contribution to debates about regulating automated government decision-making. Her in-depth comparative analysis of grounds of legal challenge for automated government decision-making across four dimensions (judicial review for rationality, anti-discrimination, public sector privacy and data protection, and freedom of information) in the United States (US), United Kingdom (UK), and Australia is an impressive feat. She also proposes a new framework for technological governance that moves beyond a focus on external legal and political accountability measures by foregrounding the importance of internal managerial controls for automated systems within government agencies.
There are many issues raised in Ng’s book that will capture the interest of scholars and practitioners engaged with public law discourses about the use of artificial intelligence (AI) in government. In this comment I reflect upon three issues that highlight the book’s currency and relevance to ongoing debates about automated government decision-making, and its design and regulation, in Australia.
Concerns about ‘Robodebt 2.0’
Ng selects government use of automation in social security as a case study, and identifies the ‘Robodebt’ scheme as an Australian example of the harm and inefficiency caused by poorly designed and implemented automated government decision-making systems. At the time of its deployment in 2016, Services Australia’s online compliance intervention, colloquially known as ‘Robodebt’, used an automated data-matching and assessment process to raise debts against welfare recipients the system flagged as having been overpaid. Legal errors encoded in the automated system led to hundreds of thousands of erroneous welfare debts and a record class action settlement in September 2025, in which the Australian Government agreed to pay $475 million in compensation, in addition to the $112 million previously agreed to in 2020. The total financial redress now exceeds $2.4 billion, including $1.76 billion in debts that were forgiven, cancelled, or paid back.
The Robodebt Royal Commission final report in 2023 highlighted the serious impacts flawed automated systems can have on highly vulnerable people. It provided 57 recommendations to address these risks, 56 of which have been implemented or agreed to in principle by the Australian Government.
Despite this, a report issued by the Commonwealth Ombudsman in August 2025 decried the ‘significant, if not catastrophic’ impacts of another automated system on vulnerable citizens. The Department of Employment and Workplace Relations (DEWR) was found to have unlawfully terminated the income support payments of 964 jobseekers using an incorrectly coded automated system under the Targeted Compliance Framework (TCF). Welfare advocates have drawn parallels with Robodebt, with some dubbing this latest controversy ‘Robodebt 2.0’.
Ng identifies that a key issue of automated decision-making in welfare is using computerised systems to implement rules in a way that limits discretion and accountability (32). Indeed, the rules-based systems commonly used by Australian government agencies, which rely on deterministic ‘if this then that’ algorithms, apply a rigid logic and are ostensibly suited to non-discretionary decisions only. In the TCF example, the Commonwealth Ombudsman found that the rules-based automated system used by DEWR unlawfully bypassed the requirement in the authorising statute to exercise a discretion before cancelling income support. After the introduction of this legislative requirement in 2022, the agency did not ensure that its processes and computer systems complied with the amended legislation. The automated system thus did not reflect the text or intent of the legislation. Stronger safeguards are needed to prevent repeats of the Robodebt scheme’s mistakes and to ensure that automated systems used for government decision-making align with legislative requirements.
Enhancing automated system design
Ng rightly observes that there is nothing in the courts’ reasoning in the Amato and Prygodicz cases concerning the flawed Robodebt automated system that would preclude the use of automated decision-making for welfare decisions in Australia, so long as the methodology underpinning such automated systems was accurate (66). She outlines that the courts’ focus has been on the rationality of the methodology of the automated decision-making process as measured against the legislative requirements. A corollary of this is that future automated systems using different, more reliable algorithms and data points may well align with the intent of the legislation and thus be held to be valid.
The technological governance framework Ng proposes includes a suite of legal, political, and managerial controls. Her discussion of managerial controls acknowledges the importance of how automated systems are designed to translate rules into code, including the calibration of error rates and the desirability of ex ante risk and impact assessments. The next stage of the conversation could engage further with technological design solutions that seek to embed public law principles in the architecture of automated systems, thus contributing to emergent discourses on the rule of law ‘by design’. This underscores that it is not just the elements of governance frameworks that need to be calibrated to address automated systems; such systems also need to be designed to align with public law expectations. A focus on enhancing automated system design is particularly important in light of the emphasis on the methodology of automated decision-making processes in the Amato and Prygodicz cases, and the current absence of a dedicated Australian legislative framework for automated government decision-making.
The desirability of legislative safeguards
An important recommendation from the Robodebt Royal Commission was for ‘legislative reform to introduce a consistent legislative framework in which automation in government services can operate’ (Recommendation 17.1). The Australian Government has agreed to this recommendation in principle, and the process has commenced. In late 2024, the Attorney-General’s Department issued a consultation paper and invited public submissions on this topic. Ng’s book provides valuable and timely insights to inform this legislative reform agenda.
Debates about legislative reform for automated government decision-making are related to, yet distinct from, broader debates about regulating AI in Australia. Momentum has been building towards the introduction of mandatory guardrails for AI in high-risk settings, with former Industry Minister Ed Husic recently backing the development of a dedicated Artificial Intelligence Act, despite deregulatory pressures in the US and European Union (EU). However, two days later a Productivity Commission interim report recommended a pause on the plan for mandatory guardrails, warning that over-regulation could stifle AI’s $116 billion economic potential (Draft Recommendation 1.3). The Treasurer, Jim Chalmers, has welcomed the report, encouraging a ‘responsible middle course on AI’ that includes regulating ‘as much as necessary’ to protect Australians, ‘but as little as possible’ to encourage the AI industry.
Potential legislative reform to address automation in government services is connected to these broader debates about AI regulation, but may well have a separate trajectory due to its more limited focus on the use of AI in government. The regulation of automated government decision-making raises distinctive issues due to expectations that government officials will exercise power over citizens in ways that not only promote efficiency and consistency, but also align with administrative law principles underpinning lawful decision-making, including lawfulness, fairness, rationality, and transparency. Ng’s concluding reflections on future directions, including regulatory design questions regarding the desired scope and intensity of new legal and regulatory frameworks, provide a valuable foundation to inform these debates.
Significantly, Ng’s analysis underscores the gaps in existing legal frameworks for judicial review of rationality, anti-discrimination, public sector privacy and data protection, and freedom of information. Across all four of these dimensions, Ng argues that existing Australian legal mechanisms provide limited avenues for successfully challenging flawed automated government decision-making. Viewed holistically, Australia’s legal protections for individuals adversely affected by automated government decision-making are comparatively weak. As Ng concludes, ‘public law needs to evolve’ to meet rising AI challenges and new ‘legislative regimes may be required’ (228).
As the Productivity Commission notes, Australia is likely to be a ‘regulation taker’ when it comes to AI regulation. In considering legislative reform options for automated government decision-making, ‘legal transplants’ involving the ‘transfer of a legal regime or rule from one jurisdiction to another’ can hold significant appeal. For example, aspects of the EU’s General Data Protection Regulation or Artificial Intelligence Act can, prima facie, provide legislative reform options that Australia can consider adopting. However, unless attention is paid to key differences in legal contexts and cultures, challenges can arise in clearly identifying which types of rules are transplantable, and which are potentially problematic due to entrenched jurisdictional differences.
Ng’s fine-grained analysis is attuned to key differences in the selected comparator jurisdictions, which need to be understood to clarify the opportunities and limits for legal transplants and harmonisation. As Ng observes, the US, UK, and Australia are appropriate comparator jurisdictions due to their shared liberal democratic framework and increasing adoption of automated systems within their government agencies. However, there are also salient differences between the public law frameworks within these jurisdictions. For instance, in the absence of significant constitutional or human rights protections in Australia, administrative law, known for its ‘especially rigid’ separation of judicial power, remains the primary avenue for the protection of individuals adversely affected by government decisions. In contrast, administrative law in the US encompasses rule-making in addition to adjudication, and is reinforced by substantial constitutional protections incorporating a Bill of Rights. The UK has an unwritten constitution, and reflects external influences incorporated into domestic law from the EU, including the European Convention on Human Rights and the General Data Protection Regulation (9–10), shaped by interpretive processes of proportionality and human rights balancing. Legal frameworks for automated government decision-making in the UK therefore need to be understood against politico-legal commitments to human rights and respect for human dignity that do not have the same purchase in Australia and the US. There are thus deep cultural differences in constitutional and political values that affect not only the legal frameworks for challenging automated government decision-making, but also the extent to which legislative design responses are transplantable across jurisdictions.
As debates about regulating automated government decision-making in Australia continue, there are myriad lessons that can be learned from Ng’s multi-faceted, insightful, and context-sensitive analysis. It provides much needed structure and clarity regarding gaps in existing public law frameworks and opportunities to calibrate legal, political, and managerial accountability mechanisms to narrow these gaps.
Anna Huggins is a Professor and the Deputy Head of School at QUT Law.
Suggested citation: Anna Huggins, ‘Regulating Automated Government Decision-Making: An Australian Perspective - Combatting the Code book forum’ (24 November 2025) <https://www.auspublaw.org/blog/2025/11/regulating-automated-government-decision-making-an-australian-perspective-combatting-the-code-book-forum>