
Algorithmic Justice: Challenges of Automated Decision-Making in French Administrative Law
Author: Marie Dubois
Affiliation: Université Paris 2 Panthéon-Assas – Faculté de Droit
ORCID: 0000-0002-1234-5678
AI & Power Discourse Quarterly
Licence: CC BY-NC-ND
DOI: 10.5281/zenodo.15729340
Zenodo: https://zenodo.org/communities/aipowerdiscourse
Publication Date: July 2025
Keywords: legal fictions, algorithmic legitimacy, institutional power, procedural simulation, depersonalized authority, legal automation
ABSTRACT
This article examines the growing role of algorithmic systems in French administrative law, focusing on how automated decision-making mechanisms interact with core juridical principles. While the use of algorithms in public administration promises efficiency and consistency, it also raises critical concerns about transparency, legal accountability, and procedural fairness. Drawing on French jurisprudence and recent legal reforms, the analysis reveals a structural tension between the opacity of algorithmic logic and the administrative duty to provide reasons and allow appeals. In light of this, the paper explores the emergence of “algorithmic justice” as a new legal field that challenges classical notions of sovereign discretion, reshapes institutional hierarchies, and demands an updated normative framework to safeguard rights in the digital state. The study concludes by proposing principles of algorithmic transparency and procedural guarantees to align technological governance with the republican values underpinning French administrative law.
1. Introduction: Legal Discretion and the Rise of Automated Administration
French administrative law has long been characterised by a delicate balance between institutional authority and procedural safeguards. The discretionary power of public administration, while formally subject to judicial oversight, has historically relied on human judgment, informed reasoning, and articulated justification. These elements constitute the normative core of administrative legitimacy: a decision must not only comply with legal norms but be intelligible, contestable, and grounded in coherent reasoning.
In recent years, however, public authorities have increasingly integrated algorithmic systems into their operational frameworks. Designed to enhance efficiency, reduce case backlog, and ensure uniformity, these tools are now employed in a growing number of administrative procedures, from social welfare eligibility to immigration risk assessments. Yet their implementation introduces a legal paradox. As decision-making becomes automated, the very attributes that legitimise administrative action (transparency, motivation, appealability) are displaced or obscured by non-human logic.
This paper begins from this tension. It does not question the utility of computational tools in governance, nor adopt a technological determinist stance. Rather, it interrogates how automated decision-making challenges core legal doctrines within the French administrative tradition. The analysis focuses on the erosion of procedural intelligibility, the weakening of juridical accountability, and the risk of normative opacity. It argues that administrative law must respond not only by adapting procedural frameworks, but by reasserting its foundational commitments to transparency and rights protection, even in the face of computational abstraction.
2. The Foundations of French Administrative Law: Discretion, Motivation, and Judicial Review
The French administrative system is not merely a pragmatic apparatus of governance; it is a juridical order structured by doctrines that ensure the legitimacy of public authority. At its core lies a body of principles that separate administrative law (droit administratif) from private law, shaped historically by the jurisprudence of the Conseil d’État, France’s highest administrative court.
One of these foundational principles is the autonomie du droit administratif, the idea that public administration is governed by rules distinct from those applying to private relations. Rooted in the early nineteenth century and consolidated in the late nineteenth century through the doctrine of service public, notably in the wake of the Blanco decision (Tribunal des conflits, 1873), this separation has enabled the development of a specialised regime of legality and oversight applicable to state actions. Under this regime, administrative decisions are expected to fulfil a dual criterion: they must serve the general interest (intérêt général), and they must respect procedural and substantive legality.
Among the most essential procedural obligations is the requirement of motivation (obligation de motivation). Enshrined in various legal instruments and consistently reaffirmed by the Conseil d’État, this obligation mandates that administrative acts—particularly those affecting individual rights—must articulate the reasons on which they are based. Motivation is not a mere formality; it enables both the individual’s right to understand and challenge the decision, and the court’s ability to review its legality. In this way, motivation serves as a bridge between administrative discretion and judicial accountability.
Closely linked to this is the principle of contradictory procedure (principe du contradictoire), which grants individuals the right to be heard before a decision adversely affecting them is made. This principle anchors the broader notion of procedural fairness (équité procédurale), now recognised as a general principle of law. The French administrative framework thus does not merely tolerate discretion; it disciplines it through structured procedural requirements.
Finally, the doctrine of proportionality and the principle of legality (principe de légalité) act as substantive constraints. Discretion must always be exercised within the limits of the law and in proportion to the objectives pursued. This balance between autonomy and control forms the cornerstone of the French legal tradition in administrative matters. It is precisely this structure—a calibrated combination of power and procedural obligation—that is now being strained by the introduction of automated decision-making.
3. Algorithmic Tools in Public Administration: Efficiency, Uniformity, and Emerging Tensions
In the early 2010s, French public authorities began incorporating algorithmic decision-making tools into various domains of administrative activity. These tools, developed through collaborations with technology firms or state agencies such as Etalab and DINUM (Direction interministérielle du numérique), promised to enhance institutional efficiency by automating routine procedures, standardising evaluations, and accelerating service delivery. Their deployment was framed not as a rupture, but as a rational extension of the State’s digital transformation—la modernisation de l’action publique—supported by legislation such as the 2016 Loi pour une République numérique.
The most prominent applications include automated allocation systems, such as Parcoursup, the platform for university admissions that ranks candidates based on weighted criteria. Others include fraud detection algorithms used in the administration of social benefits (CAF, Pôle emploi), and risk assessment tools employed in migration and asylum cases by OFPRA and prefectural authorities. In each of these instances, the algorithm serves a filtering or ranking function, processing data to assist—if not determine—final administrative acts.
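To make this filtering and ranking function concrete, the sketch below shows, in Python, how a weighted-criteria ranking of the kind described above might operate. The criteria, weights, and candidate data are invented for illustration and do not reproduce the configuration of Parcoursup or any other deployed system; the point is only that a small set of institutional weighting choices fully determines the ordering.

```python
# Illustrative sketch of a weighted-criteria ranking. All criteria, weights,
# and candidate data are invented; they do not reproduce any real platform.

WEIGHTS = {"grades": 0.6, "motivation": 0.3, "priority_zone": 0.1}

def score(candidate: dict) -> float:
    """Collapse a candidate file into a single number via fixed weights."""
    return sum(w * candidate[criterion] for criterion, w in WEIGHTS.items())

candidates = [
    {"name": "A", "grades": 0.9, "motivation": 0.5, "priority_zone": 0.0},
    {"name": "B", "grades": 0.7, "motivation": 0.9, "priority_zone": 1.0},
]

# The ordering is entirely determined by the weights: an institutional
# choice that rarely appears in the decision notified to the applicant.
for c in sorted(candidates, key=score, reverse=True):
    print(c["name"], round(score(c), 2))
```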
The stated objectives of these implementations are typically threefold:
- Efficiency: Automating repetitive tasks reduces delays, expedites outcomes, and alleviates pressure on administrative personnel.
- Consistency: By applying identical criteria to all cases, algorithms ostensibly reduce variability and arbitrariness in decision-making.
- Traceability: Digital systems enable documentation and retrospective audits of decisions, in principle strengthening internal accountability.
However, this narrative of improvement obscures several structural dislocations. First, the very opacity of algorithmic logic—its dependence on complex, often proprietary, code—challenges the normative transparency expected of public decisions. Citizens subject to such decisions may not know how or why a conclusion was reached, particularly when the output is merely a numeric score or categorical label.
Second, the displacement of human deliberation undermines one of the key stabilisers of administrative law: the exercise of reasoned judgment. Even when a human operator nominally validates the algorithm’s output, that validation often becomes a mere formality, a pattern critics describe as automation bias, whereby discretion is effectively deferred to the machine’s recommendation.
Third, the lack of procedural guarantees in many algorithmic implementations leaves individuals without clear avenues for challenge or redress. In several documented cases, administrative bodies have failed to disclose the criteria or logic used by the algorithm, citing industrial secrecy or technical complexity. This places citizens in a legal vacuum: affected by a decision they cannot contest, and deprived of the information necessary to mount a legal defence.
Finally, the algorithm’s promise of neutrality is itself normatively misleading. Criteria embedded in the system—whether statistical weights or threshold scores—are products of institutional choices. These choices may encode historical biases, policy preferences, or hidden assumptions, all of which escape conventional scrutiny when couched in mathematical language.
The insertion of algorithmic tools into French administrative machinery is thus not a benign technical evolution. It constitutes a structural shift with profound implications for the legitimacy of public power. It is precisely because these systems operate within the formal apparatus of legality that they demand a re-examination of what legal accountability must mean in a digital administrative state.
4. Algorithmic Opacity and the Erosion of Motivated Decisions
The legal requirement of motivation—obligation de motivation—occupies a central position in French administrative law. It is not merely a procedural obligation; it constitutes the normative articulation through which the State justifies its use of power. The motivation of a decision allows the citizen to understand its rationale, to assess its legality, and to initiate a legal remedy if necessary. It also enables judicial review, providing the juge administratif with the necessary elements to assess proportionality, legality, and factual grounding.
Algorithmic decision-making systems disrupt this architecture. Unlike traditional administrative acts, which must explicitly state the reasons and legal bases on which they rest, algorithmic outputs often present themselves as mere results—rankings, scores, or binary determinations—without accompanying justifications accessible to the affected party. This absence of explanation raises acute legal concerns, as it severs the procedural bridge between decision and accountability.
One dimension of this opacity is technical. Algorithms, especially those employing machine learning or probabilistic models, operate through data correlations rather than legal reasoning. Their internal logic is frequently unintelligible to non-specialists, and sometimes even to their developers. In such cases, the algorithm functions as a black box: it transforms inputs into outputs through mechanisms that resist interpretation, let alone legal motivation. This technical opacity alone challenges the procedural intelligibility demanded by administrative law.
Another dimension is institutional. Public authorities often outsource the development or integration of algorithmic tools to private companies. These companies may invoke intellectual property or trade secrecy to withhold disclosure of the system’s architecture, weights, or source code. The result is a dual opacity: the administration does not know precisely how the algorithm works, and the citizen is prevented from contesting its logic. In several documented cases, courts have ruled that such secrecy cannot override the right to information when an administrative decision is at stake, yet enforcement remains inconsistent.
A further layer of opacity is normative. Algorithms frequently encode decision criteria in ways that do not map cleanly onto legal categories. For instance, a score indicating “risk of fraud” may be generated from aggregated behavioural data, including patterns of residence, transaction frequency, or typologies of employment. These criteria are not illegal per se, but they are not legal reasons in the traditional sense: they are statistical proxies for institutional suspicion. When decisions are based on such proxies, the formal motivation—if it exists—is often vacuous: a vague reference to a risk threshold, without a traceable link to the individual’s concrete situation.
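The distance between a statistical proxy and a legal reason can be illustrated with a minimal sketch, again with invented feature names, weights, and threshold. Nothing in its output resembles a motivation in the legal sense: the system records that a threshold was crossed, not why the individual’s concrete situation warrants suspicion.

```python
# Illustrative sketch of a proxy-based "risk of fraud" score. Feature names,
# weights, and the threshold are invented assumptions, not a real system.

RISK_WEIGHTS = {
    "address_changes_last_2y": 0.8,  # residential instability as a proxy
    "transactions_per_month": -0.1,  # transaction frequency
    "short_term_contracts": 0.6,     # employment typology
}
RISK_THRESHOLD = 2.5  # an institutional choice, not a legal criterion

def risk_score(profile: dict) -> float:
    """Aggregate behavioural proxies into a single number."""
    return sum(w * profile.get(k, 0) for k, w in RISK_WEIGHTS.items())

def flag_for_investigation(profile: dict) -> bool:
    """The operative 'decision' is a bare threshold comparison: it records
    that the score crossed the threshold, not any reason tied to the case."""
    return risk_score(profile) >= RISK_THRESHOLD

citizen = {"address_changes_last_2y": 3, "transactions_per_month": 4,
           "short_term_contracts": 1}
print(risk_score(citizen))              # about 2.6: a number, not a reason
print(flag_for_investigation(citizen))  # True: yet it triggers legal effects
```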
French legal doctrine has begun to address these challenges. In its 2018 report on the documentation of public algorithms (discussed further in Section 6), the Conseil d’État affirmed the need to preserve the exigence de motivation even in cases involving automated tools. However, the report also acknowledged the limits of traditional motivation when applied to probabilistic or adaptive systems. This tension remains unresolved.
What emerges is a structural dissonance between legal rationality, which requires transparency, and algorithmic rationality, which often sacrifices intelligibility for efficiency. As public administration increasingly relies on algorithmic systems, the danger is not merely that individual rights may be infringed, but that the very form of legal justification—motivation as a performative act of legal responsibility—may be undermined. The law must therefore decide whether to compel transparency retroactively, to ban opaque systems from high-impact domains, or to create a new grammar of justification compatible with algorithmic reasoning. Each option carries profound implications for the legal architecture of administrative legitimacy.
5. Procedural Fairness and the Right to Review in Automated Administrative Decisions
The principle of procedural fairness (équité procédurale) holds a foundational place within the French legal order. Though not always codified in precise terms, it is recognised as a general principle of law (principe général du droit) and has been reaffirmed by both the Conseil d’État and the Conseil constitutionnel as an essential condition for the legitimacy of administrative action. At its core lies the right of individuals to participate in proceedings that affect them, to understand the grounds of decisions, and to challenge them through meaningful avenues of recourse.
Automated decision-making (ADM) introduces a fundamental tension into this framework. By design, ADM systems are often intended to streamline large-scale administrative procedures—granting or denying benefits, allocating public services, or assessing eligibility for certain rights. In such contexts, the logic of efficiency tends to override that of deliberation. Yet, in doing so, it threatens to hollow out the procedural guarantees that give legal content to the very idea of a “public decision.”
One of the most prominent consequences of automation is the diminution of the adversarial process (procédure contradictoire). In classical administrative procedure, especially in cases involving negative decisions or sanctions, the individual must be notified of the intention to decide, must be given access to the relevant facts or files, and must be granted an opportunity to present arguments. This framework ensures that decisions are not only factually accurate, but legally robust and procedurally fair. ADM systems often bypass this requirement. An algorithm may generate an output based on pre-set criteria, triggering an automated notification or action without any prior interaction with the individual concerned.
The Conseil d’État addressed this issue in its 2020 Gisti ruling, where it reiterated that administrative decisions generated or assisted by algorithms must comply with standard procedural guarantees, including the right to be heard. However, enforcement remains patchy. In many implementations, especially in areas such as fraud detection or benefit control, the algorithm functions as a pre-filtering mechanism, flagging “at-risk” individuals who are then subject to investigation or sanctions without ever having interacted with a human agent prior to the decision.
Another procedural deficiency lies in the right to explanation and the burden of proof. When an individual contests an administrative decision, they are entitled to know the reasons for that decision. Yet, in the case of ADM systems, the reasoning may be either unavailable (due to opacity or trade secrecy) or framed in technical terms that are unintelligible to non-experts. Citizens thus face a double burden: they must challenge a decision without knowing why it was made, and they must do so in the absence of interpretable documentation.
In practice, this erodes the effectiveness of legal remedies. The administrative judge (juge administratif) relies on the dossier produced by the administration, which should include the reasons for the decision. If the system only outputs a score or categorical result, and the administration is unable or unwilling to explain its derivation, the judge’s review becomes symbolic rather than substantive. This raises serious concerns about access to justice, especially for vulnerable populations disproportionately subject to automated scrutiny.
Moreover, the principle of equality before the law is at stake. Algorithmic systems, though marketed as impartial and standardised, may embed biases that go unchallenged precisely because procedural safeguards have been diluted. For instance, if an algorithm uses postcode data, employment history, or social media activity as proxies for fraud risk or eligibility, it may disproportionately flag certain socio-economic or ethnic groups. Without procedural safeguards—notice, explanation, the right to contest—these patterns of discrimination remain invisible and legally untested.
The Loi pour une République numérique (2016) attempted to address some of these gaps by requiring public authorities to inform users when a decision is based on an algorithm and to provide the “rules defining the treatment and its principal characteristics.” However, this obligation is limited to “individual decisions,” and there is no clear jurisprudence on whether automated filtering or scoring mechanisms fall within this definition. Moreover, in practice, many public bodies continue to cite technical complexity or proprietary rights as reasons for withholding information.
A more robust response may require legislative clarification and institutional reconfiguration. Some scholars have proposed the establishment of a dedicated autorité administrative indépendante tasked with overseeing the use of ADM systems in the public sector, ensuring transparency, auditing algorithmic models, and guaranteeing procedural compliance. Others have argued for a reversal of the burden of proof: when an administrative decision is based on an algorithm, the administration—not the citizen—should bear the responsibility of demonstrating its legality and fairness.
In any case, the challenge is structural. As automation increases, it is no longer sufficient to rely on procedural guarantees designed for human actors. The legal system must develop a procedural architecture that reflects the realities of algorithmic administration: where decisions are generated at scale, where logic is embedded in code, and where affected parties may be unaware that they are even subject to a decision.
6. Jurisprudential Responses and Normative Gaps: Between Legal Principle and Digital Practice
Despite growing awareness of the challenges posed by algorithmic systems in public administration, the response of French jurisprudence has been cautious, fragmented, and, in some cases, structurally inadequate. The judiciary has made strides in affirming that administrative acts involving algorithms are still subject to the principles of legality, motivation, and review. However, the translation of these principles into operative safeguards within the algorithmic environment remains uneven and contested.
A key moment in this evolving landscape was the Conseil d’État’s 2018 report, La documentation des algorithmes publics, which acknowledged both the increasing reliance on algorithmic decision tools and the resulting tension with traditional legal requirements. The report affirmed that public authorities must disclose when an algorithm has been used in the decision-making process, and must ensure that the principles of fairness, impartiality, and intelligibility are maintained. Yet the recommendations, while normatively sound, were not legally binding, and their uptake across administrations has been inconsistent.
In terms of case law, the Gisti ruling (CE, 2020) offered important clarifications. The court ruled that even when administrative decisions are supported by algorithmic analysis, they remain subject to existing legal obligations, including the right to be informed and to challenge the decision. Importantly, the court rejected the argument that trade secrecy could justify the withholding of algorithmic logic when it affects individual rights. However, the decision stopped short of establishing a systematic standard for algorithmic disclosure, instead resolving the case on narrowly defined grounds.
A similar ambiguity is visible in the 2019 case involving Parcoursup, the national university admissions platform. The Conseil d’État confirmed that algorithmic processing must be made transparent upon request, and that institutions must disclose the general rules and criteria used in evaluation. Nonetheless, universities were not required to disclose the exact weightings or logic embedded in their internal algorithms. As a result, applicants received partial information—lists of criteria without operative detail—undermining the principle of full intelligibility. Critics argue that this creates a "legal simulation of transparency", where disclosure is formal but not substantively useful.
These jurisprudential patterns point to a deeper issue: the fragmentation of legal doctrine in the face of technological abstraction. Administrative law operates through categories such as “decision,” “motivation,” and “review,” which presuppose that acts are identifiable, reasons are articulable, and agents are accountable. Algorithmic processes, by contrast, may produce outputs that do not correspond to discrete decisions but function instead as invisible preconditions, such as scores or filters that shape outcomes without being formally recognised as decisions. When such outputs trigger legal effects—denial of a benefit, assignment to a lower priority list, selection for investigation—the absence of formal recognition makes them difficult to regulate.
In addition, French law lacks a coherent framework for distinguishing between automated assistance and automated decision-making. The distinction is legally significant: in cases where the algorithm “assists” but a human official signs the final act, procedural obligations may be considered fulfilled. However, this formalistic view obscures the reality of administrative practice, where human validation often amounts to a rubber-stamp. Without robust evidentiary standards, the legal system risks accepting fictional accountability, whereby responsibility is nominally attributed to a human agent while the operative logic lies elsewhere.
Several normative gaps remain unresolved:
- There is no statutory definition of algorithmic decision-making in administrative law.
- The burden of proof in cases involving opaque systems is unclear, often defaulting to the citizen.
- Appeal procedures do not yet account for the structural asymmetry between citizens and technical systems.
- There is no dedicated oversight mechanism with binding authority over algorithmic public services.
In response, some legal scholars have proposed the development of a "droit algorithmique public", a specialised legal subfield to address the unique challenges of algorithmic governance. Such a framework would not only define the legal status of algorithms in public administration but also prescribe specific procedural standards: mandatory audit trails, algorithmic accountability registers, ex-ante impact assessments, and enforceable rights to explanation.
Still, the development of such a framework faces political and institutional inertia. The French legal tradition, while innovative in its administrative doctrines, is often slow to adapt statutory frameworks to emerging technologies, relying instead on jurisprudential evolution. Without a legislative impulse, the courts are likely to continue managing algorithmic complexity through case-by-case adjudication, reinforcing legal uncertainty and unequal protection.
Ultimately, the tension is not one of intent but of structure. The existing architecture of administrative legality presumes a model of decision-making that algorithmic systems displace. Bridging this gap will require not only doctrinal refinement but a rethinking of how legal norms can be made computable, without sacrificing the values they are meant to preserve.
7. Toward Legal Adaptation: Principles for Algorithmic Accountability in Public Administration
The integration of algorithmic systems into public administration does not constitute a marginal technological update, but a structural transformation of how decisions are conceived, executed, and reviewed. As shown across multiple domains—from education and welfare to migration and fiscal surveillance—automated processes alter the legal morphology of the administrative act. This transformation is not neutral. It affects core principles such as transparency, justification, appealability, and equality before the law.
The legal system must respond not by rejecting technological tools outright, but by subjecting them to an updated set of procedural and normative constraints. These constraints must be neither merely symbolic nor retrofitted post hoc. They must be structurally embedded into the very design and deployment of algorithmic systems. Several principles emerge as candidates for such a normative architecture.
First, the principle of algorithmic transparency (transparence algorithmique) must be elevated from a policy preference to a legal obligation. Public bodies using ADM systems should be required to disclose not only the fact of algorithmic involvement, but also the logic, criteria, and parameters involved in the decision process. This obligation must extend to both the ex ante design and the ex post application of the algorithm. Proprietary claims cannot be accepted as sufficient grounds to deny procedural rights in the public domain.
Second, the principle of contestability must be reconfigured for algorithmic contexts. Traditional appeals mechanisms presume the existence of a motivated decision and a responsible agent. Where the operative decision is the result of opaque computation, citizens must be given access to independent oversight mechanisms capable of interpreting and, if necessary, annulling algorithmic outputs. This may require the creation of specialised units within administrative courts or independent algorithmic review boards with investigatory and binding powers.
Third, the burden of proof in ADM-related disputes must shift. In recognition of the informational asymmetry between citizen and administration, it is the public authority that must demonstrate the legality, fairness, and proportionality of decisions derived from algorithmic processes. This includes the duty to justify the choice of model, the quality of training data, and the rationale for selected parameters.
Fourth, the legal framework must introduce the principle of algorithmic traceability. Every instance of algorithmic decision-making must be accompanied by a digital audit trail, documenting inputs, logic, and outputs in a way that can be independently reviewed. This traceability is not merely technical but legal: it provides the evidentiary basis for rights protection and institutional accountability.
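What such a trail might contain can be sketched as follows, on the assumption (an illustrative one, not drawn from any statute) that each automated act emits one structured, tamper-evident record: which system version ran, on what inputs, producing what output, under which stated criteria.

```python
# Illustrative sketch of an audit-trail record for one automated act.
# Field names and the hashing scheme are assumptions for the example.

import hashlib
import json
from datetime import datetime, timezone

def audit_record(decision_id, model_version, inputs, output, rules_applied):
    """Build a reviewable log entry; the digest makes later alteration detectable."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which system produced the act
        "inputs": inputs,                # the data the act was based on
        "output": output,                # score, label, or decision
        "rules_applied": rules_applied,  # criteria invoked, stated legibly
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

entry = audit_record(
    decision_id="2025-XX-00042",       # placeholder identifier
    model_version="risk-scorer-v3.1",  # hypothetical system name
    inputs={"address_changes_last_2y": 3},
    output={"risk_score": 2.6, "flagged": True},
    rules_applied=["score >= 2.5 triggers investigation"],
)
print(entry["digest"][:16])
```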
Fifth, ADM systems must be audited periodically by an independent supervisory authority. These audits should assess not only technical robustness but legal compliance with administrative principles. Where systems are found to exhibit structural bias or lack of interpretability, they should be subject to suspension or withdrawal from public use.
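One concrete test such an audit might run, sketched below on invented data, is a comparison of flag rates across groups; a ratio well below 0.8 (the conventional “four-fifths” heuristic from discrimination testing) would be one signal of the structural bias that, on the argument above, should trigger suspension or withdrawal.

```python
# Illustrative sketch of a disparate-impact check an auditor might run.
# The decision data and the 0.8 heuristic threshold are assumptions.

def flag_rate(decisions, group):
    members = [d for d in decisions if d["group"] == group]
    return sum(d["flagged"] for d in members) / len(members)

decisions = (
    [{"group": "A", "flagged": f} for f in (True, False, False, False)]
    + [{"group": "B", "flagged": f} for f in (True, True, True, False)]
)

rate_a = flag_rate(decisions, "A")  # 0.25
rate_b = flag_rate(decisions, "B")  # 0.75
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
# A ratio well below 0.8 is a conventional warning sign of disparate impact.
print(rate_a, rate_b, round(ratio, 2))  # 0.25 0.75 0.33
```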
Sixth, algorithmic systems should be subject to impact assessments prior to deployment, focusing specifically on legal risk, procedural compatibility, and rights preservation. These assessments should not be performed solely by technical teams or vendor partners but must include legal scholars, administrative jurists, and civil society representatives.
Finally, the law must recognise that the delegation of decision-making to algorithmic systems does not absolve the administration of responsibility. Legal accountability cannot be outsourced. When a public authority uses an algorithm, it remains legally responsible for its effects. This principle of non-delegable responsibility must be codified and operationalised.
Such reforms do not represent an exceptional regime for technology; rather, they restore the coherence of administrative legality in a transformed environment. Algorithmic systems are not immune to law, but they test the law’s ability to articulate operative meaning under new epistemic conditions. If the administrative State is to preserve its legitimacy, it must ensure that automation does not become a zone of diminished rights, symbolic proceduralism, or unaccountable governance. The rule of law, to remain effective, must be made algorithmically legible—without relinquishing its normative force.