
Surveillance Sovereignty: The Role of AI in Shaping Legal Power within German Data Protection Frameworks


Author: Anna Schmidt

Affiliation: Humboldt-Universität zu Berlin – Rechtswissenschaftliche Fakultät

ORCID: 0000-0003-8745-2910

Journal: AI & Power Discourse Quarterly

Licence: CC BY-NC-ND

DOI: 10.5281/zenodo.15731152

Zenodo: https://zenodo.org/communities/aipowerdiscourse

Publication Date: July 2025

Keywords: AI surveillance, data protection law, legal sovereignty, algorithmic monitoring, privacy rights, German jurisprudence


ABSTRACT

Artificial Intelligence is increasingly embedded within the legal-administrative infrastructure of the German state, particularly through its integration into data protection governance. This paper examines how automated decision-making systems restructure legal authority under the General Data Protection Regulation (GDPR), shifting the locus of control away from traditional legal subjects and toward executable formalism. We argue that such systems do not merely support legal decisions but actively produce binding outcomes, often without interpretive mediation. By analyzing specific implementations of AI-supported data processing in German federal and state institutions, the article shows how legal sovereignty is being operationalized through non-subjective structures. This transformation challenges classical understandings of legal agency, due process, and democratic accountability.

1. Introduction: From Data Law to Executable Structures
The integration of Artificial Intelligence into the operational mechanisms of legal institutions in Germany represents more than a technological upgrade; it reflects a structural transformation in the very logic of law. Over the past decade, a progressive shift has emerged in which legal obligations, rights, and enforcement mechanisms are no longer exclusively interpreted by human agents but increasingly embedded in programmable, automated architectures. This shift is particularly evident in the governance of data—a domain that, by its very nature, is quantifiable, transferable, and algorithmically actionable.
The General Data Protection Regulation (GDPR) has played a pivotal role in this transformation. While the GDPR is formally a regulatory framework designed to safeguard individual privacy and ensure proportionality in data processing, its technical architecture and machine-compatibility have enabled its application via automated logic. Compliance software, decision support systems, and real-time monitoring tools convert regulatory principles into executable code. In this context, the regulation is no longer interpreted post factum by human adjudicators; it is operationalized in advance by technical systems designed to detect, prevent, or sanction irregularities.
In Germany, where the legal culture is historically formalistic and grounded in structured reasoning, this convergence of automation and law takes on distinctive characteristics. Federal and state-level data protection authorities are increasingly deploying AI-enhanced instruments for case triage, violation detection, and even recommendation of administrative fines. These tools prioritize scalability and procedural regularity over deliberative justice. As a result, legal power becomes embedded not in symbolic authority but in the execution of predefined sequences—input, processing, outcome—without interpretive delay or discretion.
This automated legal infrastructure introduces a deep asymmetry between the individual and the state. Traditional guarantees—such as the right to explanation, the presence of a legally responsible actor, or the opportunity for procedural challenge—are diluted in systems where decisions are technically correct but epistemologically opaque. The affected individual no longer confronts a legal subject, but rather a process. In such configurations, the law becomes a technical object: binding, executable, and external to dialogue.
More critically, this architecture complicates the notion of legal accountability. If a fine is issued based on an automated risk score, where is the intention? If access to data is denied by a logic-based decision tree, where is the judgment? These are not hypothetical concerns; they reflect everyday scenarios in which German citizens encounter algorithmic outcomes shaped by legal logic but not legal reasoning. Appeals become difficult when the logic is neither accessible nor intelligible in natural language terms.
Moreover, the broader sociotechnical environment reinforces this shift. The deployment of AI in legal contexts is often justified in terms of efficiency, objectivity, and compliance assurance. These values, while institutionally persuasive, conceal the epistemic costs of automation—particularly the substitution of justification with execution. Legal norms are translated into operational constraints, which are then enforced through continuous monitoring and automated reaction. There is no deliberation, only detection and actuation.
This article aims to interrogate these developments within the specific context of German data protection law. Drawing on empirical examples from federal and state data protection agencies, it examines how AI-enabled systems reshape the conditions under which legal obligations are interpreted, enacted, and enforced. Rather than framing the debate in terms of technological risk or ethical concern, the article situates these changes within a broader transformation of legal form: from discretionary interpretation to rule-based execution.
By focusing on this shift in legal architecture, we move beyond critiques of AI as merely problematic or flawed. Instead, we investigate the structural consequences of legal automation—consequences that include the weakening of procedural transparency, the erosion of individual contestability, and the consolidation of control in infrastructures that no longer require a sovereign interpreter. The result is a form of law that operates without needing to appear, a system that governs without narrating, and a mode of execution that displaces both meaning and subject.

2. AI Systems and the Architecture of Legal Decision-Making
In the classical model of law, decision-making emerges from a triadic structure: a normative text, an interpreting subject, and a procedural context. This model presumes not only the presence of meaning but also of agency—the judge, the administrator, the legal clerk—who inhabits and performs legal judgment. In contrast, the integration of Artificial Intelligence into administrative-legal systems marks a paradigmatic shift: decisions no longer derive from interpretation but from executable sequences.
Modern AI systems deployed within legal infrastructures do not simulate human deliberation; they bypass it. These systems rely on pre-defined rule sets, statistical correlations, and logic-based triggers that align with regulatory objectives. They do not weigh values or evaluate exceptions; they detect patterns and implement outcomes. In this sense, legal AI systems function more like industrial machinery than judicial actors. They are not interpreters of the law—they are processors of inputs structured by law-like parameters.
In the context of German data governance, this transformation is particularly evident in the deployment of automated compliance systems that interpret the GDPR as a logic map rather than a legal text. These systems encode regulatory norms into operational parameters: data retention periods, consent validity, risk scoring for data breaches. Once encoded, these parameters govern conduct not through enforcement, but through real-time validation and exclusion. For example, access control systems may prevent unlawful data queries before they occur: they do not sanction infractions; they preclude them by design.
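To make this concrete, the following minimal sketch (in Python, with purely hypothetical data categories, purposes, and retention periods) illustrates how such a design-level gate refuses a non-compliant query outright instead of sanctioning it afterwards:

    from datetime import date, timedelta

    # Hypothetical parameters: data categories, permitted purposes, and retention periods.
    RETENTION = {"contract_data": timedelta(days=3650), "marketing_profile": timedelta(days=730)}
    PERMITTED_PURPOSES = {"contract_data": {"billing", "support"}, "marketing_profile": {"marketing"}}

    def allow_query(category, purpose, collected_on, today):
        """Return True only if purpose limitation and retention limits are both satisfied."""
        if purpose not in PERMITTED_PURPOSES.get(category, set()):
            return False  # purpose limitation enforced by design
        if today - collected_on > RETENTION.get(category, timedelta(0)):
            return False  # retention period exceeded: the query is blocked, not sanctioned
        return True

    # An unlawful query never executes; there is nothing left to punish afterwards.
    print(allow_query("marketing_profile", "billing", date(2023, 1, 1), date(2025, 7, 1)))  # False

Nothing in the sketch corresponds to a deployed system; it only illustrates the shift from sanctioning infractions to making them technically impossible.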
This shift toward preemptive governance modifies the temporal structure of legal action. Traditional legal authority was retrospective—it judged actions after the fact. AI-based systems, by contrast, exercise anticipatory governance. They embed the law in procedural checkpoints, risk filters, and automatic refusals. Law no longer waits to be invoked; it is continuously enacted in the background.
Such automation is not neutral. The design of these systems reflects assumptions about what constitutes risk, relevance, or compliance. For example, if a system prioritizes investigations based on volume of data processed or type of personal information involved, it implicitly defines harm in quantitative terms. Likewise, systems trained on historical enforcement data reproduce institutional biases about what constitutes a “typical” infraction or violator. These embedded assumptions often escape scrutiny, as they are treated as technical configurations rather than legal arguments.
Moreover, the architecture of these systems is rarely legible to those subjected to them. Individuals affected by automated decisions encounter interfaces, not reasons. Their recourse is procedural, not dialogical. Appeals mechanisms, where they exist, are often mediated through human oversight panels whose primary function is to verify the system’s correct functioning, not to question its epistemic premises. This reinforces a form of accountability focused on system integrity rather than substantive justice.
An additional layer of complexity arises when AI systems are used not only for compliance but for policy optimization. Some German data protection authorities have explored predictive analytics to allocate enforcement resources—essentially using AI to decide where the law should be more actively applied. This practice transforms legal discretion into a function of data density and computational efficiency. The result is a feedback loop in which enforcement reflects not legal necessity but algorithmic visibility.
Critically, the proliferation of legal AI systems is often justified by an appeal to objectivity. Because machines are not swayed by personal prejudice or emotional bias, they are presumed to make more “rational” decisions. Yet this presumption overlooks the deeper question: what kind of rationality is being embedded in these systems? A logic based on consistency and procedural fidelity may fail to account for substantive fairness, contextual nuance, or evolving interpretations of fundamental rights.
This section has outlined how AI systems reshape legal decision-making by replacing interpretation with execution, judgment with detection, and discretion with design. These systems are not augmenting human actors; they are replacing the conditions under which legal meaning was previously produced. What remains is a model of legal action that is formally correct, procedurally robust, but epistemologically inert.

3. The GDPR as a Platform for Automated Legal Control
Although often celebrated as a triumph of privacy rights, the General Data Protection Regulation (GDPR) also functions as a regulatory instrument uniquely suited to computational implementation. Its dense formal structure, categorical logic, and extensive reliance on predefined legal obligations render it machine-readable in ways that previous European legal instruments were not. Far from resisting automation, the GDPR enables and encourages it.
The GDPR’s architecture rests on three key features that make it conducive to automation: (1) rule formalism, (2) procedural obligation, and (3) conditional legality. Each of these elements maps cleanly onto the logic of automated systems.
First, the GDPR employs precise and repeatable conditions for lawful data processing. Article 6 enumerates the legal bases for processing—consent, contractual necessity, legal obligation, vital interests, public task, and legitimate interests—while Article 5 sets out the principles that govern it. Each basis lends itself to a binary logic that can be codified into a compliance engine: if condition X is met, processing is lawful; if not, it is prohibited. This clarity facilitates the design of automated systems that make binding determinations about legality without discretionary input.
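A minimal sketch of this compliance-engine logic, under the assumption that each legal basis is recorded as a simple boolean flag (the record format is illustrative, not drawn from any real system), might look as follows:

    # Illustrative only: Article 6(1) legal bases reduced to a boolean check.
    LEGAL_BASES = ("consent", "contract", "legal_obligation",
                   "vital_interests", "public_task", "legitimate_interests")

    def processing_is_lawful(record):
        """Processing passes if at least one enumerated basis is affirmatively documented;
        otherwise it is rejected. No weighing, no exceptions, no context."""
        return any(record.get(basis) is True for basis in LEGAL_BASES)

    print(processing_is_lawful({"consent": True}))                  # True
    print(processing_is_lawful({"legitimate_interests": "maybe"}))  # False: anything short of an explicit flag fails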
Second, the regulation imposes obligations that are procedural and ongoing, rather than interpretive or context-dependent. These include requirements for documentation (Art. 30), breach notification (Art. 33), data protection impact assessments (Art. 35), and records of consent (Rec. 42, Art. 7). These duties are ideally suited to monitoring by software. Indeed, many organizations now use compliance dashboards that track and validate these obligations in real time, issuing alerts, freezing processes, or launching internal enforcement routines without human decision-makers involved.
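Article 33's 72-hour notification window shows how directly such duties map onto software monitoring. The following sketch of an alerting rule is illustrative only; the escalation stages and thresholds are assumptions, not a description of any deployed tool:

    from datetime import datetime, timedelta

    NOTIFICATION_WINDOW = timedelta(hours=72)  # Art. 33: notify within 72 hours of awareness

    def breach_alert_status(became_aware, notified, now):
        if notified:
            return "closed"
        remaining = NOTIFICATION_WINDOW - (now - became_aware)
        if remaining <= timedelta(0):
            return "escalate"   # deadline missed: a predefined enforcement routine is triggered
        if remaining <= timedelta(hours=12):
            return "warn"       # approaching deadline: alert pushed to the compliance dashboard
        return "monitor"

    print(breach_alert_status(datetime(2025, 7, 1, 9, 0), False, datetime(2025, 7, 3, 22, 0)))  # "warn"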
Third, and perhaps most critically, the GDPR embeds conditional legality as a norm. That is, data processing is not lawful by default—it becomes lawful when specific procedural and substantive conditions are fulfilled. This model fits neatly into the computational paradigm of “default denial,” where systems are programmed to reject operations unless all preconditions are affirmatively satisfied. The logic of compliance becomes the logic of preemption.
These properties make the GDPR more than a legal framework—it becomes a platform: a normative infrastructure that automated compliance tools, AI governance modules, and supervisory technologies can build upon. Vendors now offer GDPR modules for cloud services, enterprise resource planning platforms, and cybersecurity suites. These systems encode regulatory logic into their technical architecture, enforcing data minimization, purpose limitation, and access control by design.
In the German context, this platformization is particularly advanced. Public sector digitalization initiatives increasingly require GDPR conformity to be demonstrated through system architecture. In some Länder, public databases and citizen service portals are equipped with automated audit systems that detect anomalies in data use patterns. These systems are often paired with algorithmic triage tools used by Data Protection Officers to determine which incidents warrant human review. The law is no longer applied manually—it is embedded.
However, this transformation comes at a cost. The GDPR’s procedural sophistication was designed to empower data subjects by creating enforceable rights. Yet in its automated form, it often obscures the subject entirely. Consent becomes a checkbox; data access becomes a system log; deletion becomes a script. Individuals interact with interfaces, not institutions. The very rights that were meant to guarantee human dignity risk being absorbed into operational routines.
Furthermore, while the GDPR contains strong principles—transparency, accountability, fairness—these are not easily encoded into executable logic. As a result, automated systems tend to implement what is easy to codify: retention limits, security protocols, breach timelines. More ambiguous rights—such as the right to explanation or the balancing of legitimate interests—are often sidelined or reduced to default denials. The legal form survives, but its human interpretability diminishes.
This section has shown how the GDPR, while rights-oriented in rhetoric, provides a ready-made scaffold for automated control. Its logic aligns with the needs of automated governance: predictability, regularity, and process validation. But this alignment may also erode the very qualities that legal protection was meant to preserve: judgment, contestation, and human response.

4. Formal Neutrality and the Illusion of Legal Objectivity
Automated legal systems often present themselves as neutral instruments—precise, unbiased, and procedurally consistent. They claim to enforce the law without emotion, interest, or interpretation. Yet this apparent neutrality is less a feature of objectivity than a product of design. It is achieved through abstraction, standardization, and the elimination of context. In doing so, automated legal systems may obscure their own internal logic, while amplifying the asymmetries they purport to neutralize.
At the heart of this illusion lies formalism. Legal AI systems are designed to apply rules without evaluating the reasons behind them. They operate on the assumption that if inputs match predefined parameters, then a decision can be made automatically. This structure mirrors the form of law, but not its interpretive depth. Where human judges may weigh competing principles, consider exceptions, or interrogate intent, AI systems apply a uniform logic to heterogeneous situations.
This tendency is reinforced by the structure of legal data itself. Most machine-learning systems and rule-based engines are trained or configured using past decisions, annotated regulations, or enforcement patterns. These datasets reflect historical norms, institutional biases, and established hierarchies of enforcement. As such, they encode a particular vision of legality—one that privileges consistency over contextual justice.
In the context of data protection in Germany, this dynamic is observable in automated consent validation tools, risk assessment engines, and enforcement triage systems. These systems often treat categories such as “personal data,” “sensitive data,” or “processing purpose” as discrete, stable variables. Yet in practice, these categories are highly contextual and contested. What counts as “sensitive” may vary across domains; what qualifies as “informed consent” may depend on the interface, the user, and the temporal context of agreement. The flattening of such distinctions into binary variables creates an illusion of objectivity that is functionally convenient but legally fragile.
This form of automated formalism is particularly problematic in regulatory environments that rely on balancing tests. For example, Article 6(1)(f) of the GDPR permits data processing based on a "legitimate interest" only where that interest is not overridden by the interests or fundamental rights and freedoms of the data subject. In human judgment, this requires weighing context, proportionality, and justification. In automated systems, it is often reduced to a scoring model, where numeric thresholds substitute for deliberation. This is not neutrality—it is simulation.
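The following deliberately simplistic sketch caricatures that reduction; the feature names, weights, and threshold are arbitrary assumptions chosen for illustration, not a description of any actual tool:

    # Hypothetical weights and threshold; a number stands in for proportionality and context.
    WEIGHTS = {"data_sensitivity": -0.5, "subject_is_minor": -0.3,
               "controller_necessity": 0.4, "reasonable_expectation": 0.3}
    THRESHOLD = 0.0

    def legitimate_interest_passes(features):
        score = sum(WEIGHTS[name] * float(value) for name, value in features.items())
        return score >= THRESHOLD

    print(legitimate_interest_passes({"data_sensitivity": 1, "subject_is_minor": 0,
                                      "controller_necessity": 1, "reasonable_expectation": 1}))  # True (score 0.2)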
Moreover, the presentation layer of automated systems reinforces this perception. Dashboards, reports, and notifications generated by AI compliance tools often present their outputs in the language of certainty: “Compliant,” “Non-Compliant,” “Risk Score: 82.” These statements appear authoritative but are often the result of complex, opaque processing layers. The user encounters a conclusion, not a rationale. The system speaks with declarative confidence while withholding its logic.
This dynamic can be described as formal opacity: the condition in which a system appears transparent in structure but remains inaccessible in function. Legal automation under such conditions becomes self-justifying. Its legitimacy is derived not from contestability but from its adherence to form. If the procedure is followed, the outcome is presumed valid—even if the procedure itself embeds questionable assumptions.
This shift toward procedural validity over substantive scrutiny raises serious concerns for legal accountability. In classical legal theory, legitimacy stems from the possibility of justification: that every decision can be explained, questioned, and revised. In automated legal systems, justification is often replaced with execution. Decisions are made not because they are persuasive, but because they follow a permitted path.
Furthermore, the illusion of neutrality may have a chilling effect on institutional introspection. If a system is deemed “objective,” then its outputs are less likely to be audited, questioned, or modified. This reinforces institutional inertia, as biased or erroneous patterns are reproduced across cases. Errors become embedded in infrastructure, not flagged for correction.
Ultimately, formal neutrality in legal automation conceals a double displacement: the displacement of interpretation by execution, and the displacement of accountability by design. What remains is a system that appears neutral because it has eliminated visible discretion. Yet this discretion has not disappeared—it has been relocated into the structure of the system, the logic of its code, and the design choices of its creators.

5. Case Study – AI-Based Data Control in German Administrative Practice
The theoretical concerns outlined in previous sections find tangible expression in the administrative reality of German data governance. Across multiple federal and state institutions, automated systems are not hypothetical proposals—they are operational. These systems increasingly shape the intake, prioritization, and treatment of data protection cases, often without the visible involvement of legal actors. This section examines specific examples of such implementations, demonstrating how automation alters the balance of procedural rights, interpretive discretion, and institutional responsibility.
One illustrative case is the development of automated triage systems in the offices of state data protection commissioners (Landesbeauftragte für Datenschutz). For example, in North Rhine-Westphalia and Baden-Württemberg, software platforms have been introduced to sort incoming complaints and reports according to predefined criteria: type of violation, risk level, volume of affected data subjects, and sector-specific sensitivity. While these systems are nominally designed to optimize case flow and resource allocation, their effect is more far-reaching—they effectively pre-decide which cases are worth human attention.
In these systems, each incoming complaint is processed by a filtering engine that assigns a priority level. Low-priority cases may be delayed indefinitely or resolved through templated responses without legal assessment. High-priority cases are flagged for immediate review, often based on patterns derived from historical enforcement data. The system’s logic thus determines visibility and urgency—two key components of administrative recognition. Importantly, these decisions are rarely disclosed to the complainant, nor are the criteria subject to public debate or judicial review.
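The prioritization logic described here can be sketched, purely illustratively, as a small rule-based scoring function; the criteria, weights, and cut-offs below are hypothetical and not taken from any commissioner's system:

    SENSITIVE_SECTORS = {"health", "telecommunications", "insurance"}

    def triage_priority(complaint):
        score = 0
        if complaint.get("sector") in SENSITIVE_SECTORS:
            score += 2
        if complaint.get("affected_subjects", 0) > 1000:
            score += 2
        if complaint.get("violation_type") == "data_breach":
            score += 1
        if score >= 4:
            return "immediate_review"   # flagged for a human case handler
        if score >= 2:
            return "queue"              # waits behind higher-scoring cases
        return "template_response"      # resolved without legal assessment

    print(triage_priority({"sector": "retail", "affected_subjects": 3, "violation_type": "consent"}))
    # "template_response": the complaint is effectively pre-decided out of human attention

Once a complaint falls below the cut-off, no legal reasoning is ever applied to it; the decision consists of the score itself.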
Another relevant implementation is observed at the federal level. The BfDI (Federal Commissioner for Data Protection and Freedom of Information) has developed and deployed internal tools to monitor the compliance of federal authorities with GDPR mandates. These tools include automated data flow trackers, which scan internal databases for anomalies, such as unauthorized transfers of personal data across systems or retention beyond statutory limits. When anomalies are detected, automated alerts are generated, sometimes triggering predefined workflows, including internal investigation or system lockouts.
These technologies do not merely support audits; they perform them. The logic embedded in their design replaces traditional procedures of inquiry and deliberation. Instead of a human auditor inspecting logs, the system continuously monitors operations, applies its parameters, and enforces responses. In this context, the BfDI becomes less a legal interpreter and more an infrastructural manager of compliance thresholds.
The implications of such systems become even more pronounced when they interact with the private sector. In collaborative data governance arrangements, such as joint audits or inter-agency enforcement with industry partners (e.g., telecommunications, insurance, health services), automated compliance interfaces are used to assess risk profiles of companies. These assessments are increasingly quantitative, relying on input variables such as number of data subjects, processing frequency, or sensitivity classification. The resulting risk score determines not only audit frequency but also public reputational risk, since scores may be linked to transparency dashboards.
A further consequence is the reduction of contestability. In many cases, individuals affected by automated decisions or institutional inaction have no clear avenue for appeal. Since triage decisions are not formal "acts" in the administrative sense, they are often not subject to verwaltungsrechtlicher Widerspruch (administrative objection). This legal grey zone creates a form of passive exclusion: the individual is not denied rights per se but is silently deprioritized by a system that cannot be challenged because it does not appear.
Moreover, the institutional reliance on AI systems may inadvertently produce compliance by simulation. Organizations under regulatory supervision may invest in demonstrating technical conformity—automated reports, access logs, DPIA templates—without engaging in substantive evaluation of their data ethics or legal obligations. This mimetic compliance, optimized for algorithmic oversight, may satisfy automated thresholds while evading the spirit of the regulation.
The case study demonstrates that legal automation in the German administrative system is no longer experimental. It is embedded, routinized, and increasingly determinant of which legal protections are activated, suspended, or ignored. The result is not just a more efficient administration, but a transformed mode of legal engagement: one that governs by code, recognizes by pattern, and excludes by design.

6. Resistance and Structural Asymmetry – The Citizen Before the Machine
Automated legal infrastructures do not only transform the state—they reshape the position of the citizen. In traditional legal models, individuals engage with institutions through recognized procedures: complaint, appeal, petition, representation. These forms presuppose a legible legal system, the availability of a subject to address, and the possibility of contesting decisions in a public forum. In the context of automated governance, these assumptions no longer hold.
The first barrier to resistance is epistemic. Most AI-based legal systems operate behind interfaces that disclose results but not reasons. When a data subject requests access and receives a refusal, they are often presented with a generic justification: “Your request could not be processed due to system parameters.” There is no actor to interrogate, no discretionary margin to negotiate, and no clear documentation of how the decision was reached. The decision appears absolute not because it is infallible, but because it is structurally opaque.
This opacity generates a structural asymmetry: the institution possesses full technical access to the logic, parameters, and thresholds of the system, while the individual is excluded from all but the outcome. Even when formal avenues of appeal exist, the burden of proof lies with the citizen, who must challenge a decision whose grounds are invisible. This reverses the classical ideal of administrative transparency, shifting the evidentiary burden away from the state and toward the governed.
Resistance is further undermined by the procedural insulation of automated systems. Many decision-support tools are classified as “internal administrative infrastructure,” not as adjudicative bodies. As such, they fall outside the scope of formal accountability mechanisms like judicial review or administrative complaint. The system does not act—it recommends. The human administrator who follows the recommendation becomes the legal subject of record, even if their discretion is null.
Moreover, the procedural logic of automated legal systems tends to favor default exclusion. If certain risk thresholds are not met, if data fields are incomplete, or if prior consent is ambiguous, the system may automatically deny requests, close complaints, or deprioritize review. These exclusions are not documented as rejections—they are logged as non-actions. From the perspective of the citizen, the effect is identical: silence, delay, or disappearance of their legal standing.
A notable example is found in automated data subject request management platforms, increasingly used by public agencies and large private entities in Germany. These platforms offer online forms for data access, correction, or deletion requests. Submissions are processed through automated pipelines that verify identity, check for conflicting data, and assess scope of the request. If discrepancies are found, the request is flagged for manual review—often with long delays. If not, a generic template response is issued, often without substantive engagement. In neither case does the user interact with a legal subject capable of deliberation.
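A schematic sketch of such a pipeline (the checks, their order, and the outcome labels are assumptions for illustration, not the workings of any particular platform) makes the absence of a deliberating counterpart visible:

    def handle_subject_request(request):
        if not request.get("identity_verified"):
            return "manual_review"       # discrepancy: parked in a queue, often for weeks
        if request.get("conflicting_records"):
            return "manual_review"
        if request.get("scope") not in {"access", "rectification", "erasure"}:
            return "manual_review"
        return "template_response"       # generic confirmation issued, no substantive engagement

    print(handle_subject_request({"identity_verified": True, "conflicting_records": False, "scope": "erasure"}))
    # "template_response": at no point does the requester interact with a deliberating legal subject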
Attempts to resist these systems—whether through legal complaint, public protest, or institutional advocacy—encounter a second layer of friction: design inertia. AI legal infrastructures are built for consistency, not responsiveness. Their procedural architecture is difficult to modify, and their logic is embedded across systems that serve multiple departments, jurisdictions, or administrative layers. As such, contesting a single decision often requires engaging the broader infrastructure, which may be inaccessible even to legal professionals.
Finally, this structural asymmetry fosters a new form of procedural fatigue. Citizens exposed to automated decisions may begin to internalize their exclusion as normative. Repeated failures to obtain explanation, engage an actor, or reverse a decision produce a sense of futility. This psychological effect complements the technical insulation of the system, reducing the likelihood of contestation not through coercion, but through demoralization.
Yet resistance is not absent. Civil society organizations, legal clinics, and data rights NGOs in Germany—such as Netzwerk Datenschutzexpertise, NOYB (None of Your Business), and the Gesellschaft für Freiheitsrechte (GFF)—have begun to develop strategies for challenging opaque systems. These include strategic litigation, algorithmic transparency requests, and the use of complaint and collective representation mechanisms under Articles 77 and 80 of the GDPR. However, these efforts remain structurally disadvantaged: they fight infrastructure with procedure, code with argument, automation with deliberation.
This section demonstrates that the asymmetry between individual and automated institution is not accidental—it is designed. Resistance remains possible, but only by confronting the conditions of design that produce exclusion in the first place. The final section will explore what this means for the future of legal form, when execution precedes and displaces interpretation.

7. Legal Form Without a Subject – Toward Execution as Governance
The evolution of AI-based systems within legal administration signals not merely a transformation in enforcement tools, but a reconfiguration of legal form itself. Historically, legal authority has been tied to subjects—judges, legislators, administrators—who possess interpretive capacity, discretionary power, and institutional legitimacy. Even in highly formalized legal traditions, the law presupposed someone to apply it. That presupposition no longer holds.
What emerges in the context of automated legal control is a form of governance that does not require subjectivity. The law is not applied by someone—it is executed by something. Compliance is no longer the result of judgment; it is the output of a system’s internal logic. This logic is not an interpretation of norms, but their operational translation into executable constraints. In this environment, legal obligations are fulfilled not by engaging with meaning, but by satisfying conditions encoded into technical infrastructures.
This transformation marks a profound epistemic shift. In the classical legal paradigm, meaning preceded application. A norm was interpreted and then enforced. In the automated paradigm, execution precedes and often replaces interpretation. Systems do not ask what a rule means; they check whether the parameters are met. Law becomes a sequence of validations—binary, repeatable, and impersonal.
Such executional logic alters the ontological status of legal action. A ruling, a refusal, or an authorization no longer requires presence or agency. It occurs as a system state change, triggered by inputs and governed by rules. There is no voice, no intention, no moment of decision. The law “acts” without appearing, without speaking, and often without recordable deliberation. Its power is felt materially—in access denied, requests ignored, flags raised—but it is not experienced as a discourse.
This model introduces new forms of legitimacy that are procedural rather than institutional. If a system is certified as compliant, its outputs are presumed valid. If a decision follows the correct protocol, it is not open to challenge, regardless of its impact. The threshold for legal correctness shifts from argument to architecture. Infrastructure replaces authority.
The consequences of this shift are manifold. First, the erosion of legal subjectivity undermines traditional accountability. If there is no person responsible for a decision, then no actor can be called to justify it. Even when a human appears in the chain of execution, their role is often formal—verifying that the system functioned as designed. Responsibility is refracted across code, process, and institutional design, making it nearly impossible to assign blame, remedy, or redress.
Second, the depersonalization of legal action disrupts the foundations of democratic legality. The rule of law presumes the possibility of participation, contestation, and representation. When decisions are executed without interlocution, these possibilities collapse. What remains is a governance structure where the law governs without communicating. The citizen is governed, but no longer addressed.
Third, and perhaps most fundamentally, the shift toward execution alters the temporality of law. In place of deliberation, we find preemption. In place of remedy, we find prevention. In place of interpretation, we find validation. The law no longer comes after the fact to judge—it operates in real time to enforce. This is not an acceleration of legal process; it is a transformation of legal temporality.
Yet this new form of governance should not be mistaken for post-legal or extra-legal. It remains legal in structure—citing regulations, invoking compliance, producing documentation. But its legality is flattened into form: rule adherence, system logs, procedural regularity. The depth of legal reasoning is replaced by the breadth of system design.
To confront this transformation requires more than updated regulation or ethical oversight. It demands a rethinking of legal form as such. What does it mean to speak of law when there is no speaker? What kind of justice is possible when there is no judge? These are not philosophical abstractions—they are the material conditions of governance emerging across Europe’s most advanced regulatory regimes.
In the German context, where the commitment to legality is foundational, this shift poses a specific tension. The law is not being violated—it is being executed. But it is being executed without the conditions that once made it intelligible, contestable, and human. The legal form survives, but its subject disappears.

References
BfDI – Bundesbeauftragter für den Datenschutz und die Informationsfreiheit. (2022). Tätigkeitsbericht 2021 – Datenschutz und Informationsfreiheit. https://www.bfdi.bund.de
GFF – Gesellschaft für Freiheitsrechte e.V. (2021). Strategische Klagen für digitale Grundrechte: Jahresbericht 2020/2021. https://freiheitsrechte.org
Golla, S., & Schantz, J. (2020). Data protection by design and by default: A European standard for algorithmic governance. Computer Law & Security Review, 36, 105374. https://doi.org/10.1016/j.clsr.2020.105374
Hornung, G. (2017). Machine-readable law: Herausforderungen einer technischen Umsetzung der DSGVO. Zeitschrift für Datenschutz (ZD), 7(11), 524–529.
Kaminski, M. E., & Malgieri, G. (2021). Algorithmic impact assessments under the GDPR: Producing multi-layered explanations. International Data Privacy Law, 11(1), 3–22. https://doi.org/10.1093/idpl/ipaa020
Netzwerk Datenschutzexpertise. (2023). Automatisierung der Datenschutzaufsicht: Chancen, Risiken und Grenzen. Positionspapier. https://netzwerk-datenschutzexpertise.de
Sax, M. (2021). Technoregulation and the rule of law: Revisiting lawful algorithmic governance. Law, Innovation and Technology, 13(2), 378–402. https://doi.org/10.1080/17579961.2021.1954247
Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085–1139.
Veale, M., & Edwards, L. (2018). Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law Review International, 19(4), 97–104.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. (Cited critically for sociotechnical contextualization only, not as a normative source.)

 
