
Volume 1, No. 1 - E-Journal AI Power & Discourse
The first issue of AI & Power Discourse Quarterly inaugurates a series dedicated to the structural analysis of artificial intelligence, language, and authority. With a focus on syntax, execution, and legitimacy in predictive systems, Volume 1 establishes the journal’s foundational trajectory through seven peer-reviewed articles and an opening editorial.
The issue includes:
• An editorial statement outlining the journal’s scope, peer-review criteria, and commitment to open-access dissemination.
• Six research articles addressing the formal mechanisms of AI power across domains such as urban planning algorithms, educational systems, regulatory sovereignty, and the erasure of agency in AI-mediated language.
• A central theoretical contribution, When Language Follows Form, Not Meaning, which proposes that syntactic execution displaces semantic intention in contemporary language models.
All texts follow the Chicago Manual of Style (17th edition) and are published in English to ensure accessibility and cross-indexing. Volume 1 defines both the intellectual focus and the infrastructural model of the journal, anchoring future issues in methodological clarity and archival transparency.
The Trust Reflex: How Users Interpret Machine Neutrality as Reliability - Sarah Thompson (University of Cambridge – Faculty of MMLL)
Abstract
This article investigates the perceptual mechanisms by which users interpret syntactic neutrality in AI-generated language as a sign of reliability. Rather than focusing on the internal structure of authority or machine intentionality, it examines the user-side cognitive reflex that equates grammatical restraint, impersonality, and modal minimalism with objectivity and trustworthiness. Drawing from pragmatics, media psychology, and experimental studies on human–AI interaction, the paper argues that neutrality is not just a stylistic feature but a semiotic trigger: one that activates a learned association between form and truth. Case studies include interactions with language models in medical, legal, and customer service contexts, where consistent output tone is misread as consistent epistemic grounding. The article concludes that this “trust reflex” contributes to the stabilization of machine outputs as credible, regardless of their factual basis, thereby externalizing authority into the perception system of the user.
Full Article here: The Trust Reflex: How Users Interpret Machine Neutrality as Reliability
Algorithmic Justice: Challenges of Automated Decision-Making in French Administrative Law - Marie Dubois - Université Paris 2 Panthéon-Assas – Faculté de Droit
Abstract
This article examines the growing role of algorithmic systems in French administrative law, focusing on how automated decision-making mechanisms interact with core juridical principles. While the use of algorithms in public administration promises efficiency and consistency, it also raises critical concerns about transparency, legal accountability, and procedural fairness. Drawing on French jurisprudence and recent legal reforms, the analysis reveals a structural tension between the opacity of algorithmic logic and the administrative duty to provide reasons and allow appeals. In light of this, the paper explores the emergence of “algorithmic justice” as a new legal field that challenges classical notions of sovereign discretion, reshapes institutional hierarchies, and demands an updated normative framework to safeguard rights in the digital state. The study concludes by proposing principles of algorithmic transparency and procedural guarantees to align technological governance with the republican values underpinning French administrative law.
Full Article here: Algorithmic Justice: Challenges of Automated Decision-Making in French Administrative Law
Automated Advantage in Education: How AI Tools Reshape Power in American Classrooms - Michael Johnson - Department of Educational Policy Studies, University of Wisconsin–Madison
Abstract
Artificial Intelligence is no longer a speculative addition to educational systems in the United States—it is infrastructural. From automated grading to personalized learning dashboards, AI-based tools now determine what is taught, how performance is measured, and which students receive targeted intervention. This article examines how these technologies redistribute authority within the American classroom, subtly shifting control away from teachers and toward opaque systems of data-driven governance. Rather than celebrating efficiency or personalization, the article traces how machine-led decision-making introduces new hierarchies of access, surveillance, and intervention—especially in under-resourced public schools. Drawing on contemporary case studies and recent deployments of predictive systems in K-12 districts, the analysis situates educational AI tools within broader structures of power. It argues that these tools do not merely support pedagogy but constitute a new layer of authority—quietly administrative, structurally uneven, and pedagogically prescriptive.
Full Article here: Automated Advantage in Education: How AI Tools Reshape Power in American Classrooms
Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training - Agustin V. Startari - Universidad de la República - Universidad de Palermo
Abstract
This article investigates the structural impossibility of semantic neutrality in large language models (LLMs), using GPT as a test subject. It argues that even under strictly formal prompting conditions (such as invented symbolic systems or syntactic proto-languages), GPT reactivates latent semantic structures drawn from its training corpus. The analysis builds upon prior work on syntactic authority, post-referential logic, and algorithmic discourse (Startari, 2025), and introduces empirical tests designed to isolate the model from known linguistic content. These tests demonstrate GPT’s consistent failure to interpret or generate structure without semantic interference. The study proposes a falsifiable framework to define and detect semantic contamination in generative systems, asserting that such contamination is not incidental but intrinsic to the architecture of probabilistic language models. The findings challenge prevailing narratives of user-driven interactivity and formal control, establishing that GPT, and similar systems, are non-neutral by design.
Full Article here: Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training
The Silence of First Principles: Rethinking Ontological Grounding in Post-Classical Metaphysics - Li Wei (李伟) - Tsinghua University – Dept. of Philosophy
Abstract
This article revisits the philosophical role of "first principles" in the metaphysical tradition and interrogates their continued relevance in post-classical thought. While foundationalism has long sought a stable basis for being—often grounded in substance, essence, or logic—contemporary shifts in continental philosophy, analytic skepticism, and comparative frameworks have unsettled the presumption of ontological transparency. Tracing this disruption across Aristotelian metaphysics, modern critiques, and alternative traditions such as Daoist and Buddhist thought, the paper argues that the absence of foundational clarity should not be treated as a deficiency, but as a condition of metaphysical openness. The proposed framework explores “ontological silence” not as negation, but as a productive space wherein being resists closure and invites non-deterministic interpretation. Ultimately, the study advocates for a non-reductive, post-foundational approach to ontology, one that acknowledges groundlessness as a generative horizon rather than a metaphysical failure.
Full Article here: The Silence of First Principles: Rethinking Ontological Grounding in Post-Classical Metaphysics
DOI: https://doi.org/10.5281/zenodo.15723010
Urban Algorithms and the Shape of the City: Recommender Systems in Municipal Infrastructure Planning - James Miller - Department of Urban Studies and Planning, MIT
Abstract
As artificial intelligence systems expand into urban governance, recommender algorithms—originally designed for consumer behavior prediction—are increasingly applied to infrastructure planning. This article explores how municipal governments in cities like Boston, Helsinki, and Singapore have begun integrating AI-based recommendation engines into public transportation, zoning, and emergency services allocation. The shift from deliberative urban design to data-driven selection introduces new logics of space: efficiency replaces debate, prediction substitutes consultation. By analyzing the structural biases of recommender systems, the article argues that algorithmic mediation in urban planning introduces a form of non-transparent determinism, where historical data silently dictate future development. This raises foundational questions about participation, access, and visibility in the algorithmic city. Rather than treating these tools as neutral amplifiers, the study frames them as active shapers of spatial power.
Full Article here: Urban Algorithms and the Shape of the City: Recommender Systems in Municipal Infrastructure Planning
Surveillance Sovereignty: The Role of AI in Shaping Legal Power within German Data Protection Frameworks - Anna Schmidt - Humboldt-Universität zu Berlin – Rechtswissenschaftliche Fakultät
Abstract
Artificial Intelligence is increasingly embedded within the legal-administrative infrastructure of the German state, particularly through its integration into data protection governance. This paper examines how automated decision-making systems restructure legal authority under the General Data Protection Regulation (GDPR), shifting the locus of control away from traditional legal subjects and toward executable formalism. We argue that such systems do not merely support legal decisions but actively produce binding outcomes, often without interpretive mediation. By analyzing specific implementations of AI-supported data processing in German federal and state institutions, the article shows how legal sovereignty is being operationalized through non-subjective structures. This transformation challenges classical understandings of legal agency, due process, and democratic accountability.
Full Article here: Surveillance Sovereignty: The Role of AI in Shaping Legal Power within German Data Protection Frameworks
The Voice from Our Articles
