<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1d1 20130915//EN" "JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">SDQ</journal-id><journal-title-group><journal-title>Security &amp; Defence Quarterly</journal-title><abbrev-journal-title>SDQ</abbrev-journal-title></journal-title-group><issn pub-type="epub">2544-994X</issn><issn pub-type="ppub">2300-8741</issn><publisher><publisher-name>Akademia Sztuki Wojennej</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">SDQ-52-00448</article-id><article-id pub-id-type="doi">10.35467/sdq/213917</article-id><article-categories><subj-group subj-group-type="heading"><subject>RESEARCH PAPER</subject></subj-group></article-categories><title-group><article-title>Artificial intelligence as a moral agent: regulatory implications and a relational–contextual extension of Moor’s classification</article-title></title-group><contrib-group content-type="authors"><contrib contrib-type="author"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-9245-5894</contrib-id><name><surname>Czyszczoń</surname><given-names>Maciej</given-names></name><email>maciej.czyszczon@doktorant.up.krakow.pl</email><xref ref-type="aff" rid="aff1">1</xref></contrib><aff id="aff1"><label>1</label>Doctoral School, University of the National Education Commission, Podchorążych 2, 31-464 Kraków, Poland</aff></contrib-group><pub-date pub-type="epub"><day>31</day><month>12</month><year>2025</year></pub-date><volume>52</volume><issue>4</issue><fpage>1</fpage><lpage>11</lpage><history><date date-type="received"><day>30</day><month>09</month><year>2025</year></date><date date-type="rev-recd"><day>03</day><month>11</month><year>2025</year></date><date date-type="accepted"><day>03</day><month>11</month><year>2025</year></date></history><permissions><copyright-statement>© 2025 M. Czyszczoń published by War Studies University, Poland.</copyright-statement><copyright-year>2025</copyright-year><license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0"><license-p>This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>).</license-p></license></permissions><abstract><p>This paper reassesses the regulatory value of James Moor’s four-level typology of machine morality in the light of the European Artificial Intelligence Act (AI Act) and the forthcoming European Union (EU) liability directives. It asks whether Moor’s categories—ethical impact, implicit, explicit, and full moral agents—still capture the morally relevant properties of today’s generative, adaptive AI, and, if not, whether adding a relational–contextual dimension can better anticipate responsibility gaps. To address this question, we introduce a novel relational–contextual dimension and a three-factor Responsibility Index (RI<sub>3</sub>) that refines Moor’s typology by cross-classifying AI systems according to complexity, autonomy, and behavioural predictability for regulatory use. Adopting a strictly conceptual design, the study combines analytic philosophy with illustrative comparisons drawn from recent EU policy debates and high-profile incidents. 
It refines key terms, tests their coherence against statutory risk tiers, and distils the analysis into a three-factor matrix—complexity, autonomy, and predictability—that can be operationalised by lawmakers. The evaluation confirms that Moor’s typology remains a valuable baseline for distinguishing between passive and decision-making artefacts. Nevertheless, the analysis also shows that moral accountability is distributed within socio-technical networks. The proposed relational–contextual dimension, in conjunction with the regulatory matrix, aligns more closely with the AI Act’s risk logic and highlights scenarios in which moral agency is effectively delegated to the system. Moor’s framework should be retained but augmented: only by integrating relational criteria can legislators close emerging accountability gaps surrounding large-scale, autonomous AI. The matrix offers a pragmatic tool for aligning philosophical insight with concrete legal duties.</p></abstract><kwd-group><kwd>artificial morality</kwd><kwd>AI regulation</kwd><kwd>Moor’s typology</kwd><kwd>relational agency</kwd><kwd>security and defence</kwd></kwd-group></article-meta></front><body><sec id="S1" sec-type="intro"><title>Introduction</title><p>The paper begins with the classical concept of moral agency, which holds that only rational, autonomous, and intentional beings can bear ethical responsibility for their actions. In Kantian ethics, a moral agent is an individual capable of legislating for themselves, guided by the categorical imperative (<xref ref-type="bibr" rid="R19">Kant, 2002/1785</xref>). By contrast, contemporary theorists such as <xref ref-type="bibr" rid="R15">Floridi and Sanders (2004)</xref> de-emphasise the relevance of consciousness and free will for the attribution of responsibility, arguing instead that moral agency can be ascribed to information artefacts even in the absence of these classical mental properties. However, the rapid development of artificial intelligence (AI)—from autonomous vehicles to large language models—confronts us with a situation in which systems lacking human psychology shape our lives in real ways, causing harm or benefit. This raises the question of whether traditional criteria are sufficient to capture the moral significance of such artefacts. In response, the paper proposes a relational–contextual refinement of Moor’s typology together with a three-factor Responsibility Index (RI<sub>3</sub>) tailored to regulatory decision-making, with relevance to high-risk security domains (defence, border control, and critical infrastructure).</p><p>Two closely related categories are distinguished in the literature: moral agent and moral patient. An agent is “one who can do good or evil and to whom merit or blame can be attributed,” while a moral patient is the object of others’ ethical obligations (<xref ref-type="bibr" rid="R24">Moor, 2006</xref>, p. 19). Until recently, both labels were reserved for humans. Yet today’s AI systems, although lacking consciousness, can make decisions with significant consequences, such as filtering credit access, directing air traffic, and diagnosing diseases. As a result, what Moor called the “policy vacuum” is becoming increasingly apparent: our legal and ethical norms are not keeping pace with the technological agency of machines (<xref ref-type="bibr" rid="R23">Moor, 1985</xref>, p. 
266).</p><p>A systematic review of the field reveals that no classification simultaneously tracks an artefact’s complexity, degree of delegated autonomy, and behavioural predictability—the very variables on which the European Union (EU) risk-based regulation now relies. Existing taxonomies—from Moor’s four-level ladder of machine morality to Floridi and Sanders’ functionalist model—offer valuable heuristics for what makes an artefact ethically salient. However, they remain essentially one-dimensional, treating complexity, autonomy, and behavioural predictability as loosely related attributes, rather than as interacting variables that modulate accountability. This omission has practical consequences. Under the EU Artificial Intelligence Act, the same system may transition between limited-, high-, and unacceptable-risk tiers as its architecture evolves; however, no current framework anticipates the “responsibility gaps” that emerge when such transitions occur.</p><p>This deficiency is the specific research gap addressed here. The study therefore asks the following: How can a multi-factor classification close the responsibility gaps that emerge when adaptive AI surpasses the descriptive power of agent-centric scales? To answer, it introduces the RI<sub>3</sub> matrix, a three-factor extension of Moor’s ladder that cross-references complexity, autonomy, and predictability, thereby translating philosophical distinctions into auditable compliance triggers. The following section elaborates on Moor’s four-level typology that underpins the rest of this analysis.</p></sec><sec id="S2"><title>James Moor’s Typology</title><p>James Moor stresses that the condition for including an artefact in ethical discourse is not its possession of intentionality but rather its capacity to produce effects that human norms describe as good or bad. The first level of <xref ref-type="bibr" rid="R24">Moor’s (2006)</xref> typology is ethical impact agents: objects and systems that, regardless of their internal lack of sentience, evidently influence human and environmental well-being.</p><p>Technological development inevitably carries measurable axiological consequences. By providing precise time cues, a digital watch enhances its wearer’s punctuality in day-to-day interactions, even if it cannot by itself guarantee wider contractual reliability. Similarly, the Y2K bug, although lacking malicious intent, posed a significant risk of economic disruption—in value terms, a threat to the security and equitable distribution of goods. In both cases, moral evaluation applies not to the mental states of a watch or binary code but to the foreseeable consequences of their operation within a web of human relationships.</p><p>Thus, <xref ref-type="bibr" rid="R23">Moor’s (1985</xref>, p. 271) thesis that “every technology is in some sense moral” gives priority to consequences over the tool’s ontological status and explains the emergence of the previously noted policy vacuum. This approach reconfigures the classical, anthropocentric map of ethics: wherever there is potential for harm or benefit, the jurisdiction of normative assessment at least minimally begins, obligating designers and users to reflect on the values their creations embody in practice.</p><p>The second category in Moor’s typology is implicit ethical agents—systems in which value-sensitive limitations have been embedded at the design stage to prevent evident violations of human-protected values. 
As Moor notes, such artefacts do not understand morality but implement ethical assumptions through technical limitations, effectively shifting the focus of moral assessment from intention to architectural design.</p><p>A classic example is the onboard Terrain Awareness and Warning System (TAWS) or Airborne Collision Avoidance System (ACAS), which warns of unsafe flight paths and prompts corrective action irrespective of the pilot’s own assessment, protecting passenger lives. Similarly, an automated teller machine (ATM) that refuses to dispense more money than the account balance prevents unauthorised appropriation of funds. In both cases, an engineered principle of non-maleficence becomes integral to the machine’s operational logic. The user—sometimes against their intentions—is guided towards ethically correct action.</p><p>Modern ethics-by-design practices further develop this intuition by equipping systems with multi-layered safeguards, including sensor redundancy, anomaly detection algorithms, and emergency “kill switches.” The Big Red Button concept, studied by Google DeepMind, illustrates attempts to ensure human override of harmful AI decisions. However, as <xref ref-type="bibr" rid="R27">Russell (2019</xref>, pp. 160–161, 196–197) warns, such a mechanism must be resilient against circumvention by learning systems. The <xref ref-type="bibr" rid="R18">IEEE Standards Association (2021)</xref> and the EU’s Trustworthy AI guidelines also call for auditable safety protocols. For this reason, most current AI applications, from autonomous vehicles to content-filtering systems, are better viewed as implicit, rather than explicit, ethical agents: their moral behaviour is hard-coded as safety constraints, rather than derived from deliberative value reasoning.</p><p>A system that can reliably prevent norm-violating behaviours is no longer a passive artefact but an operative entity of delegated agency: designers transfer part of their normative authority to the machine, allowing it to act in their stead in real time (<xref ref-type="bibr" rid="R15">Floridi and Sanders, 2004</xref>; <xref ref-type="bibr" rid="R10">Coeckelbergh, 2015</xref>). Because the machine selects and enforces behavioural restrictions without further human intervention, observers begin to treat its outputs as intentional—although limited—acts. It is precisely this shift from causal effectiveness to normative authorship that justifies speaking of moral agency, even if the system lacks consciousness or moral understanding.</p><p><xref ref-type="bibr" rid="R24">Moor (2006)</xref> defines an explicit ethical agent, the third category of his typology, as a system capable of apparent moral reasoning: a machine that not only follows programmed safety boundaries but can identify value conflicts and select solutions based on ethical norms. At this level, epistemic difficulties arise. Ethical theories must be formalised into operational rules before an AI can make decisions as humans would. Attempts to modularise the canonical paradigms—deontology and utilitarianism—show that their principles are both heterogeneous and mutually competitive. Systems such as MoralDM (Moral Decision-Making) and two well-known prototype agents often referred to as “Jeremy” and “W.D.”—developed in machine ethics research to test hybrid rule- and learning-based approaches—address this conflict by contextually switching between calculating utility and applying absolute obligations. 
However, in practice, they require a fully specified “moral ontology” of the world, which has yet to be successfully developed (<xref ref-type="bibr" rid="R5">Cervantes et al., 2019</xref>, pp. 501–532). Critics further point out that any programmed hierarchy of rules inevitably reflects the beliefs of its designers, raising the risk of hidden bias and ethnocentrism (<xref ref-type="bibr" rid="R1">Allen et al., 2006</xref>).</p><p>Computational issues are equally complex. Resolving ethical dilemmas requires exploring a decision space that grows exponentially with the number of stakeholders and potential outcomes. In time-sensitive scenarios, such as an autonomous vehicle collision, the algorithm must reach a result within milliseconds, even though computing a “minimal harm” scenario may be nondeterministic polynomial-time (NP)-complete (<xref ref-type="bibr" rid="R16">Goodall, 2014</xref>, pp. 93–102). For NP-complete problems, no polynomial-time algorithm is known, and the worst-case search effort grows exponentially with problem size; real-time vehicles must therefore rely on heuristics that may sacrifice optimality. Hybrid architectures (e.g. GenEth and LIDA) attempt to circumvent this barrier by combining top-down rules with bottom-up learning. However, they require constant monitoring to ensure that learning does not undermine deontological constraints (<xref ref-type="bibr" rid="R6">Cervantes et al., 2020</xref>, pp. 117–125). As a result, Moor’s vision of explicit ethical agents largely remains a research programme, as the field still lacks a strong ethical formalism and efficient algorithms capable of ensuring consistency, speed, and cultural relevance in machine decision-making.</p><p>The highest level in Moor’s taxonomy is the full ethical agent—a being endowed with all the attributes traditionally ascribed to a mentally competent adult human: self-awareness, the ability to distinguish right from wrong, intentionality, and free will (<xref ref-type="bibr" rid="R24">Moor, 2006</xref>). Moor acknowledges that no existing AI system meets these strict criteria, and the concept remains, at least for now, a speculative construct serving to define the upper limit of the debate.</p></sec><sec id="S3"><title>Critical Perspective on Moor’s Typology</title><p>The debate on full ethical agents questions the necessity of consciousness and free will for full moral agency. Advocates of functionalism, like <xref ref-type="bibr" rid="R12">Dennett (1992)</xref>, argue that replicating the right cognitive processes is sufficient: if a system can consistently generate “drafts” of events, recognise values, and update its states based on feedback, it meets the functional criteria for moral agency, despite lacking a classical “self.”<sup><xref ref-type="fn" rid="fn1">1</xref></sup> From this perspective, Moor’s insistence on free will is a remnant of anthropocentric intuition, rather than a logical necessity.</p><p><xref ref-type="bibr" rid="R7">Chalmers (1996)</xref>, on the other hand, defends the opposing view: without phenomenal consciousness—the ability to experience qualitative mental states—true moral responsibility is impossible, as valuation and feeling form the irreducible core of ethical decision-making. If <italic>qualia</italic> are inaccessible to machines, full moral agents will remain in the realm of fiction.<sup><xref ref-type="fn" rid="fn2">2</xref></sup></p><p>An additional critical view is raised by <xref ref-type="bibr" rid="R4">Bryson (2010</xref>, pp. 
63–74), who warns that granting machines the status of autonomous individuals risks offsetting human responsibility: treating machines as agents may shift blame from designers to the supposed autonomy of the algorithm. <xref ref-type="bibr" rid="R3">Bryson (2009</xref>, pp. 5–12) compares AI to a car or a computer: tools may be highly complex, but that does not make them moral persons. AI systems should remain tools, not moral partners. As a result, Moor’s concept of the full ethical agent primarily functions as an experimental framework today, revealing how far contemporary technologies fall short of the ideal and forcing reflection on which human traits truly constitute the minimal threshold for moral agency, and which are merely cultural baggage embedded in our expectations.</p><p>Echoing Bryson’s warning against anthropomorphising machines, subsequent critiques of Moor’s typology focus mainly on the charge of anthropocentrism. <xref ref-type="bibr" rid="R15">Floridi and Sanders (2004</xref>, p. 349) argue that the four-tiered scale overly ties moral agency to the categories of consciousness and free will. In contrast, in real socio-technical networks, responsibility is distributed and may also apply to ‘mindless’ artefacts (hence, mindless morality).<sup><xref ref-type="fn" rid="fn3">3</xref></sup> Their concept of “levels of abstraction” enables a contextual analysis of AI agency as a function of the system–user–institution relationship, rather than being solely based on the machine’s internal properties. The dispute between Bryson and Floridi reveals a deeper tension between the ambition to expand the category of agency and the fear of prematurely “dehumanising” ethics. <xref ref-type="bibr" rid="R8">Coeckelbergh (2009</xref>, pp. 181–189) takes this debate even further by proposing a theory of virtual moral agency, where the key factor is the social perception of the robot, not its ontology; an artefact may thus become morally relevant even without possessing any mental states.</p><p>Critics, regardless of their ontological stance, point to three practical challenges. First, the responsibility gap: autonomous learning systems may act unpredictably, making it challenging to identify a responsible party, as shown in examples discussed by <xref ref-type="bibr" rid="R22">Matthias (2004</xref>, pp. 175–183) and <xref ref-type="bibr" rid="R2">Asaro (2007</xref>, pp. 18–24). Second, interpretability: research into the minimal level of transparency required for moral agents shows that without comprehensible explanations of decision rules, systems cannot be effectively controlled or certified (<xref ref-type="bibr" rid="R30">Vijayaraghavan and Badea, 2024</xref>, pp. 1–22). Third, value alignment: the more an algorithm adapts to data, the higher the risk that its criteria for good and evil will diverge from social norms, necessitating dynamic mechanisms of oversight and rule updates.</p><p>As a result, even if Moor’s typology provides a practical unifying framework, the contemporary debate has shifted emphasis from ontology to operational questions: Who bears responsibility for system failures? How can the processes of a system be made transparent? How can we ensure that machine learning does not erode fundamental ethical values? 
Without addressing these concerns, the concept of AI moral agency remains both theoretically debatable and practically unsafe.</p></sec><sec id="S4"><title>Illustrative Case Studies</title><p>The academic debate surrounding the notion of artificial moral agency has intensified in recent years, offering valuable refinement to Moor’s original typology. Among the most influential voices, <xref ref-type="bibr" rid="R17">Gunkel (2023)</xref> revisits the ontological assumptions behind machine morality. He contends that the growing entanglement of human and artificial actors necessitates a shift in how agency and responsibility are conceptualised. This perspective lends weight to the claim that a strictly agent-centric taxonomy overlooks morally relevant contexts in which accountability is co-constituted by both human stakeholders and adaptive artefacts.</p><p><xref ref-type="bibr" rid="R9">Coeckelbergh (2014</xref>, pp. 61–77) extends this relational turn by suggesting that moral status should not depend on internal attributes, such as consciousness and intentionality, but instead on the nature and structure of interactions with human agents. His relational conception of artificial agency aligns with the complexity–autonomy–predictability matrix (hereinafter referred to as the RI<sub>3</sub> matrix) proposed here, which frames machine morality as an emergent property of socio-technical systems, rather than a function of internal capacities alone.</p><p>This theoretical foundation can be extended to large language models deployed in high-stakes environments (e.g., medicine or finance), where probabilistic and context-sensitive outputs complicate attribution and can generate responsibility gaps (<xref ref-type="bibr" rid="R22">Matthias, 2004</xref>). These gaps underscore the need for regulatory mechanisms that can track accountability throughout the system’s entire lifecycle, an objective reflected in the layered obligations outlined in the Artificial Intelligence Act adopted by the European Parliament and Council of the <xref ref-type="bibr" rid="R14">European Union (2024)</xref> (<xref ref-type="bibr" rid="R14">Regulation (EU) 2024/1689</xref>; hereinafter referred to as the AI Act).</p><p>Two illustrative cases emphasise the urgency of achieving this objective. In October 2023, a Cruise autonomous vehicle seriously injured a pedestrian in San Francisco after its self-learning perception module reportedly failed to recognise the human figure under dim lighting. According to preliminary media reports, over-the-air software updates, deployed after initial regulatory clearance, had altered the vehicle’s behaviour, highlighting the structural inadequacy of traditional agency-based taxonomies when applied to self-adaptive systems (<xref ref-type="bibr" rid="R9">Coeckelbergh, 2014</xref>; <xref ref-type="bibr" rid="R17">Gunkel, 2023</xref>).</p><p>A second illustrative case concerns the clinical deployment of a large language model as a decision-support interface, in which hallucinated recommendations and undue deference to fluent outputs can undermine professional judgement unless robust validation and meaningful human oversight are maintained. 
Even when oversight is formally assigned, the perceived authority of language-based outputs may shape decision-making and contribute to responsibility gaps within the broader socio-technical network (<xref ref-type="bibr" rid="R22">Matthias, 2004</xref>).</p><p>These scholarly and empirical developments support the paper’s central thesis: Moor’s typology, while structurally sound, must be extended with relational parameters to remain normatively and analytically robust. By integrating moral responsibility as a distributed and dynamic phenomenon, one shaped by ongoing interactions, rather than static properties, the proposed RI<sub>3</sub> matrix provides a more accurate and operationalisable model for AI governance.</p></sec><sec id="S5"><title>Normative and Regulatory Implications</title><p>The normative and regulatory implications of the presented typology confirm that even implicit AI systems lacking self-awareness are now subject to strict legal and technical requirements. At the level of <italic>lex lata</italic>, the cornerstone is the AI Act, which is gradually introducing prohibitions, transparency obligations, and a risk-assessment regime, becoming fully applicable in 2026 (Digital Strategy). This act classifies many contemporary systems deployed in sensitive domains as “high-risk,” mandating algorithmic audits, documentation of the design process, and mechanisms for meaningful human control, thereby embedding Moor’s implicit ethical agents within a legal framework of accountability.</p><p>Based on these obligations, algorithm designers now bear an extensive <italic>ex ante</italic> duty of care. The AI Act couples risk management and human oversight with lifecycle monitoring, while the draft AI Liability Directive (AILD) and the revised Product Liability Directive (PLD) promise, respectively, fault-based and strict liability for defective code (<xref ref-type="bibr" rid="R14">European Parliament and Council of the European Union, 2024</xref>). Proposals for professional licensing and an “AI Hippocratic Oath” would expose individual engineers to malpractice sanctions (<xref ref-type="bibr" rid="R29">Sharma, 2024</xref>). Meanwhile, legal theorists advocate for the imposition of fiduciary duties of loyalty and care whenever information asymmetry arises (<xref ref-type="bibr" rid="R11">Custers et al., 2025</xref>). Since autonomy increases harm-forecasting uncertainty, scholars anticipate the use of complementary tools, including mandatory third-party audits (<xref ref-type="bibr" rid="R26">Remolina, 2025</xref>, pp. 51–70), compulsory insurance pools (<xref ref-type="bibr" rid="R28">Saul Ewing LLP, 2025</xref>), and burden-shifting rules when designers withhold evidence (<xref ref-type="bibr" rid="R20">Kennedys Law, 2024</xref>). Despite discussions of distributed responsibility, designers remain the primary focus; they must deliver transparent, explainable, and value-aligned systems or face escalating regulatory and financial exposure (<xref ref-type="bibr" rid="R25">Novelli et al., 2024</xref>). The following section demonstrates how these heightened obligations on designers transform both ethics-by-design standards and liability architecture.</p></sec><sec id="S6" sec-type="discussion"><title>Discussion</title><p>As part of this preventive infrastructure, the revised PLD 2024/2853 came into force on 8 December 2024. Member states must transpose the directive into national law by 9 December 2026. 
Unlike the AI Act, the PLD is rooted in <italic>ex post</italic> liability, maintaining the EU’s strict, no-fault model while adapting it to the realities of digital products. Its innovations include an expanded concept of defect, a presumption of causation for complex technologies, and enhanced disclosure requirements. In this respect, the PLD offers a harmonised remedy for victims of AI-related harm, particularly in cases where the AI Act’s <italic>ex ante</italic> controls prove insufficient.</p><p>Another regulatory element, the proposed AILD, was designed to harmonise national tort law by introducing a rebuttable presumption of fault and mandatory disclosure obligations when claimants encounter evidentiary asymmetry caused by the opacity of AI systems. While the European Parliament had targeted adoption for February 2026, the Commission’s updated work programme, released in February 2025, unexpectedly flagged the proposal for potential withdrawal due to a lack of political consensus. At the time of writing, the AILD remains in legislative limbo—formally active but procedurally suspended. If adopted, it would fill the intermediate space between prevention and compensation by refining fault-based liability for AI-specific harm. If withdrawn, member states will revert to their national tort doctrines, likely reintroducing the very kind of fragmentation that Moor characterised as a normative “policy vacuum.”</p><p>Together, these instruments outline a layered liability architecture. The AI Act imposes forward-looking obligations aimed at preventing harm; the PLD provides a harmonised <italic>ex post</italic> remedy for when harm occurs; and the AILD, if introduced, would tailor non-contractual fault-based liability to the epistemic challenges posed by AI systems. For regulators and system designers, understanding this interplay is essential for operationalising the paper’s proposed RI<sub>3</sub> matrix—complexity, autonomy, and predictability—as each factor corresponds to distinct legal triggers across the evolving EU framework. This point is especially critical for defence and security deployments, including autonomous weapon platforms, border control analytics, and cyber defence early warning, where high autonomy and low predictability trigger the strictest oversight and assurance regimes.</p><p>The findings of this study reaffirm the lasting but limited diagnostic value of James Moor’s four-level typology when applied to contemporary generative and self-learning AI systems. The comparative analysis confirms that the distinction between ethical impact, implicit, explicit, and full moral agents remains illuminating in legacy contexts, such as expert systems and rule-based robotics (<xref ref-type="bibr" rid="R24">Moor, 2006</xref>, pp. 18–21). However, it weakens in the face of anomalies characteristic of probabilistic, generative systems deployed in high-stakes decision-support, where outputs may exceed predefined boundaries and give rise to “responsibility gaps” that Moor’s schema was not designed to interpret (<xref ref-type="bibr" rid="R17">Gunkel, 2023</xref>; <xref ref-type="bibr" rid="R22">Matthias, 2004</xref>).</p><p>The analysis therefore led to a relational–contextual extension of Moor’s typology. This perspective, inspired by relational ethics (<xref ref-type="bibr" rid="R9">Coeckelbergh, 2014</xref>, pp. 61–77) and interactionist epistemology, shifts moral analysis away from the system’s internal architecture and towards the socially embedded practices in which that architecture is deployed. 
Case-based comparisons illustrate the hypothesis that AI systems embedded in dense human feedback loops, such as GPT-4 in therapeutic contexts, exhibit the most acute accountability ambiguities. These instances demonstrate that moral salience emerges not merely from system design but also from its situated use, trust dynamics, and institutional context.</p><p>To operationalise this insight, the paper introduced an RI<sub>3</sub> matrix encompassing complexity, autonomy, and predictability, each corresponding to distinct regulatory obligations outlined in the AI Act. Complexity aligns with systemic risk management for high-risk systems; autonomy is balanced with obligations for meaningful human oversight; and predictability is supported by interpretability and traceability requirements. This matrix provides a pragmatic vocabulary for translating philosophical distinctions into regulatory frameworks and appears compatible with upcoming liability instruments, such as the AILD and PLD.</p><p>Moreover, the discussion resonates with long-standing debates over moral agency and moral patiency. While <xref ref-type="bibr" rid="R15">Floridi and Sanders (2004</xref>, pp. 349–379) argue for the possibility of attributing moral agency to information artefacts, and <xref ref-type="bibr" rid="R4">Bryson (2010</xref>, pp. 63–74) advises against anthropomorphising technological systems, the relational–contextual approach reframes the debate. It posits agency not as an intrinsic, metaphysical property but as a role that emerges from socio-technical entanglements. This perspective clarifies why accountability may shift—from developers to end-users or institutional deployers—as AI systems migrate from isolated environments to embedded, real-world applications.</p><p>These findings contribute to the field in three key ways. First, they demonstrate that Moor’s typology, although still foundational, necessitates relational augmentation to accommodate the dynamics of modern AI. Second, they present a scalable regulatory framework that can be embedded within risk-based governance. Third, they reorient ontological debates towards actionable questions of relational responsibility, providing ethicists and lawmakers with a nuanced yet readily applicable tool. In this way, the study positions Moor’s legacy as a living conceptual resource—one that evolves in response to the changing topography of artificial moral agency.</p></sec><sec id="S7" sec-type="conclusion"><title>Conclusion and Future Directions</title><p>This paper critically reviews the regulatory utility of James Moor’s four-level typology of machine morality in the context of increasingly autonomous AI systems. While the typology remains conceptually robust, our analysis suggests that it insufficiently captures the relational and contextual dynamics at play in contemporary socio-technical systems. The typology’s explanatory power decreases when it is set against the anticipatory logic of the EU’s AI Act, as detailed earlier in the Normative and Regulatory Implications section.</p><p>To address this gap, an RI<sub>3</sub> matrix—complexity, autonomy, and predictability—is proposed as an operational bridge between philosophical classification and normative governance. This matrix not only complements Moor’s levels of machine morality but also enables regulators to more precisely identify responsibility gaps that emerge in environments of epistemic opacity and socio-technical entanglement, as the purely illustrative sketch below suggests. 
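</p><p>For illustration only, the core of the RI<sub>3</sub> idea can be rendered in a few lines of code. The sketch below is an assumption-laden toy, not a compliance tool: the low–medium–high levels, the ordinal scoring, and the duty labels are introduced here purely for demonstration and are not taken from the AI Act or from Moor’s texts.</p><preformat># Purely illustrative sketch of the RI3 matrix (assumed levels, scores and labels).
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}  # ordinal score for each factor

@dataclass
class RI3Profile:
    complexity: str       # scale and opacity of the system
    autonomy: str         # degree of delegated decision-making
    predictability: str   # note: LOW predictability raises, not lowers, concern

    def responsibility_index(self) -> int:
        """Sum of the three ordinal scores, with predictability inverted."""
        inverted = {1: 3, 2: 2, 3: 1}[LEVELS[self.predictability]]
        return LEVELS[self.complexity] + LEVELS[self.autonomy] + inverted

    def indicative_duties(self) -> list:
        """Map each factor to the kind of obligation discussed in the paper."""
        duties = []
        if self.complexity == "high":
            duties.append("systemic risk management and documentation")
        if self.autonomy == "high":
            duties.append("meaningful human oversight")
        if self.predictability == "low":
            duties.append("interpretability, logging and traceability")
        return duties

# Example: a generative decision-support system in a clinical setting.
profile = RI3Profile(complexity="high", autonomy="medium", predictability="low")
print(profile.responsibility_index())   # 3 + 2 + 3 = 8 on a 3-9 scale
print(profile.indicative_duties())
</preformat><p>Such a toy scoring exercise indicates how the matrix could, in principle, be made auditable. 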
By moving beyond an exclusively agent-centric approach, the framework accommodates both structural and relational modes of ethical risk. The same three-dimensional lens can also guide decision-makers in security-critical arenas, from autonomous weapon platforms and cyber-defence early-warning systems to biometric border-control gates, where high complexity, delegated autonomy, and limited behavioural predictability jointly demand the most stringent oversight and contingency planning (<xref ref-type="bibr" rid="R13">De Spiegeleire et al., 2017</xref>; <xref ref-type="bibr" rid="R31">Yampolskiy, 2018</xref>).</p><p>Nonetheless, several limitations restrict the scope of this inquiry. The selected case studies are geographically and jurisdictionally narrow; further cross-jurisdictional and longitudinal research is required to assess the generalisability of the proposed matrix across legal and cultural contexts. Moreover, while the matrix aligns qualitatively with emerging liability doctrines, its thresholds for actionable unpredictability or autonomy require empirical calibration in dialogue with computer scientists and risk evaluators. A final ontological caveat remains: the analysis assumes a relatively stable boundary between moral agency and moral patiency (<xref ref-type="bibr" rid="R9">Coeckelbergh, 2014</xref>, pp. 61–77; <xref ref-type="bibr" rid="R15">Floridi and Sanders, 2004</xref>, pp. 349–379). Should future AI systems evolve to possess self-modification or goal-generation capacities, the foundational assumptions of Moor’s typology may require radical revision.</p><p>Future research should proceed along three converging paths. Conceptually, scholars should elaborate the relational metrics of moral salience by incorporating insights from social robotics and interactionist epistemologies. Empirically, large-N studies of diverse AI deployments, including in mobility, healthcare, and generative media, should test whether the matrix reliably predicts accountability gaps. Normatively, interdisciplinary teams should translate the heuristic into auditable compliance indicators that correspond with the AI Act’s risk tiers and the forthcoming AILD. Advancing along these trajectories will both validate and refine the framework presented here, ensuring that ethical theory remains both philosophically rigorous and practically responsive to the accelerating pace of artificial agency.</p></sec></body><back><sec id="S8"><title>Funding</title><p>The research received no external funding.</p></sec><sec id="S9"><title>Disclosure Statement</title><p>No potential conflict of interest was reported by the author. 
The author read and agreed to the published version of the manuscript.</p></sec><sec id="S10"><title>Data Availability Statement</title><p>Not applicable.</p></sec><fn-group><fn id="fn1"><label>1</label><p>Functionalism holds that mental states are constituted by their causal roles.</p></fn><fn id="fn2"><label>2</label><p>Qualia denote the subjective, phenomenal aspects of experience.</p></fn><fn id="fn3"><label>3</label><p>Mindless morality refers to assigning moral relevance to systems that lack phenomenal consciousness.</p></fn></fn-group><ref-list><ref id="R1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Allen</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Wallach</surname>, <given-names>W.</given-names></string-name> and <string-name><surname>Smit</surname>, <given-names>I.</given-names></string-name></person-group> (<year>2006</year>) ‘<article-title>Why machine ethics?</article-title>’, <source>IEEE Intelligent Systems</source>, <volume>21</volume>(<issue>4</issue>), pp. <fpage>12</fpage>–<lpage>17</lpage>. doi: <pub-id pub-id-type="doi">10.1109/MIS.2006.62</pub-id>.</mixed-citation></ref><ref id="R2"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Asaro</surname>, <given-names>P.M.</given-names></string-name></person-group> (<year>2007</year>) ‘<article-title>Robots and responsibility from a legal perspective</article-title>’, in <source>Proceedings of the IEEE international conference on intelligent robots and systems</source>. <publisher-loc>San Diego, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>, pp. <fpage>18</fpage>–<lpage>24</lpage>.</mixed-citation></ref><ref id="R3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Bryson</surname>, <given-names>J.J.</given-names></string-name></person-group> (<year>2009</year>) ‘<article-title>Building persons is a choice</article-title>’, <source>Ethics and Information Technology</source>, <volume>11</volume>(<issue>1</issue>), pp. <fpage>5</fpage>–<lpage>12</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10676-009-9189-3</pub-id>.</mixed-citation></ref><ref id="R4"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Bryson</surname>, <given-names>J.J.</given-names></string-name></person-group> (<year>2010</year>) ‘<article-title>Robots should be slaves</article-title>’, in <person-group person-group-type="editor"><string-name><surname>Wilks</surname>, <given-names>Y.</given-names></string-name></person-group> (ed.) <source>Close engagements with artificial companions: Key social, psychological, ethical and design issues</source>. <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>John Benjamins</publisher-name>, pp. <fpage>63</fpage>–<lpage>74</lpage>. 
doi: <pub-id pub-id-type="doi">10.1075/nlp.8.11bry</pub-id>.</mixed-citation></ref><ref id="R5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cervantes</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>López</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Castro-Sánchez</surname>, <given-names>L.</given-names></string-name> and <string-name><surname>Ramos</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2019</year>) ‘<article-title>Artificial moral agents: A survey of the current status</article-title>’, <source>Science and Engineering Ethics</source>, <volume>26</volume>(<issue>2</issue>), pp. <fpage>501</fpage>–<lpage>532</lpage>, doi: <pub-id pub-id-type="doi">10.1007/s11948-019-00151-x</pub-id>.</mixed-citation></ref><ref id="R6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cervantes</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>López</surname>, <given-names>S.</given-names></string-name> and <string-name><surname>Cervantes</surname>, <given-names>J-A.</given-names></string-name></person-group> (<year>2020</year>) ‘<article-title>Toward ethical cognitive architectures for the development of artificial moral agents</article-title>’, <source>Cognitive Systems Research</source>, <volume>64</volume>, pp. <fpage>117</fpage>–<lpage>125</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cogsys.2020.08.010</pub-id>.</mixed-citation></ref><ref id="R7"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Chalmers</surname>, <given-names>D.J.</given-names></string-name></person-group> (<year>1996</year>) <source>The conscious mind: In search of a fundamental theory</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref><ref id="R8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Coeckelbergh</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2009</year>) ‘<article-title>Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents</article-title>’, <source>AI &amp; Society</source>, <volume>24</volume>(<issue>2</issue>), pp. <fpage>181</fpage>–<lpage>189</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00146-009-0208-3</pub-id>.</mixed-citation></ref><ref id="R9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Coeckelbergh</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2014</year>) ‘<article-title>The moral standing of machines…</article-title>’, <source>Philosophy &amp; Technology</source>, <volume>27</volume>(<issue>1</issue>), pp. <fpage>61</fpage>–<lpage>77</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s13347-013-0133-8</pub-id>.</mixed-citation></ref><ref id="R10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Coeckelbergh</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2015</year>) ‘<article-title>Artificial agents, good care, and modernity</article-title>’, <source>Theoretical Medicine and Bioethics</source>, <volume>36</volume>(<issue>4</issue>), pp. 
<fpage>265</fpage>–<lpage>277</lpage>, doi: <pub-id pub-id-type="doi">10.1007/s11017-015-9331-y</pub-id>.</mixed-citation></ref><ref id="R11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Custers</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Lahmann</surname>, <given-names>H.</given-names></string-name> and <string-name><surname>Scott</surname>, <given-names>B.I.</given-names></string-name></person-group> (<year>2025</year>) ‘<article-title>From liability gaps to liability overlaps: shared responsibilities and fiduciary duties in AI and other complex technologies</article-title>’, <source>AI &amp; Society</source>, <volume>40</volume>, pp. <fpage>4035</fpage>–<lpage>4050</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00146-024-02137-1</pub-id>.</mixed-citation></ref><ref id="R12"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Dennett</surname>, <given-names>D.C.</given-names></string-name></person-group> (<year>1992</year>) <source>Consciousness explained</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Penguin</publisher-name>.</mixed-citation></ref><ref id="R13"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>De Spiegeleire</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Maas</surname>, <given-names>M.</given-names></string-name> and <string-name><surname>Sweijs</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2017</year>). <source>Artificial intelligence and the future of defence: Strategic implications for small- and medium-sized force providers</source>. <publisher-loc>The Hague</publisher-loc>: <publisher-name>The Hague Centre for Strategic Studies</publisher-name>. Available at: <ext-link ext-link-type="uri" xlink:href="https://hcss.nl/report/artificial-intelligence-and-the-future-of-defense/">https://hcss.nl/report/artificial-intelligence-and-the-future-of-defense/</ext-link> (Accessed: 1 May 2025).</mixed-citation></ref><ref id="R14"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>European Parliament and Council of the European Union</collab></person-group> (<year>2024</year>) ‘<article-title>Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)</article-title>’, <source>Official Journal of the European Union</source>, L 2024, p. <fpage>1689</fpage>.</mixed-citation></ref><ref id="R15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Floridi</surname>, <given-names>L.</given-names></string-name> and <string-name><surname>Sanders</surname>, <given-names>J.W.</given-names></string-name></person-group> (<year>2004</year>) ‘<article-title>On the morality of artificial agents</article-title>’, <source>Minds and Machines</source>, <volume>14</volume>(<issue>3</issue>), pp. <fpage>349</fpage>–<lpage>379</lpage>. 
doi: <pub-id pub-id-type="doi">10.1023/B:MIND.0000035461.63578.9d</pub-id>.</mixed-citation></ref><ref id="R16"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Goodall</surname>, <given-names>N.J.</given-names></string-name></person-group> (<year>2014</year>) ‘<article-title>Machine ethics and automated vehicles</article-title>’, in <person-group person-group-type="editor"><string-name><surname>Meyer</surname>, <given-names>G.</given-names></string-name> and <string-name><surname>Beiker</surname>, <given-names>S.</given-names></string-name></person-group> (eds.) <source>Road vehicle automation</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>, pp. <fpage>93</fpage>–<lpage>102</lpage>. doi: <pub-id pub-id-type="doi">10.1007/978-3-319-05990-7_9</pub-id>.</mixed-citation></ref><ref id="R17"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Gunkel</surname>, <given-names>D.J.</given-names></string-name></person-group> (<year>2023</year>) <source>The machine question (revisited)</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</mixed-citation></ref><ref id="R18"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab>IEEE Standards Association</collab></person-group> (<year>2021</year>) <source>IEEE standard model process for addressing ethical concerns during system design (IEEE Std 7000-2021)</source>. <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>.</mixed-citation></ref><ref id="R19"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Kant</surname>, <given-names>I.</given-names></string-name></person-group> (<year>2002</year>, 1785) <source>Groundwork of the metaphysics of morals</source>. Translated by <person-group person-group-type="editor"><string-name><given-names>M.</given-names><surname>Gregor</surname></string-name></person-group>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref><ref id="R20"><mixed-citation publication-type="web"><person-group person-group-type="author"><collab>Kennedys Law</collab></person-group> (<year>2024</year>) <source>AI liability in the EU: Burden-shifting rules explained</source>. White paper. Available at: <ext-link ext-link-type="uri" xlink:href="https://kennedyslaw.com/thought-leadership/article/ai-liability-in-the-eu-burden-shifting-rules-explained/">https://kennedyslaw.com/thought-leadership/article/ai-liability-in-the-eu-burden-shifting-rules-explained/</ext-link> (Accessed: 26 May 2025).</mixed-citation></ref><ref id="R21"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Kurzweil</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2005</year>) <source>The singularity is near: When humans transcend biology</source>. 
<publisher-loc>New York</publisher-loc>: <publisher-name>Viking</publisher-name>.</mixed-citation></ref><ref id="R22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Matthias</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2004</year>) ‘<article-title>The responsibility gap: Ascribing responsibility for the actions of learning automata</article-title>’, <source>Ethics and Information Technology</source>, <volume>6</volume>(<issue>3</issue>), pp. <fpage>175</fpage>–<lpage>183</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10676-004-3422-1</pub-id>.</mixed-citation></ref><ref id="R23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Moor</surname>, <given-names>J.H.</given-names></string-name></person-group> (<year>1985</year>) ‘<article-title>What is computer ethics?</article-title>’, <source>Metaphilosophy</source>, <volume>16</volume>(<issue>4</issue>), pp. <fpage>266</fpage>–<lpage>275</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1467-9973.1985.tb00173.x</pub-id>.</mixed-citation></ref><ref id="R24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Moor</surname>, <given-names>J.H.</given-names></string-name></person-group> (<year>2006</year>) ‘<article-title>The nature, importance, and difficulty of machine ethics</article-title>’, <source>IEEE Intelligent Systems</source>, <volume>21</volume>(<issue>4</issue>), pp. <fpage>18</fpage>–<lpage>21</lpage>. doi: <pub-id pub-id-type="doi">10.1109/MIS.2006.80</pub-id>.</mixed-citation></ref><ref id="R25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Novelli</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Taddeo</surname>, <given-names>M.</given-names></string-name> and <string-name><surname>Floridi</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2024</year>) ‘<article-title>Accountability in artificial intelligence: what it is and how it works</article-title>’, <source>AI &amp; Society</source>, <volume>39</volume>, pp. <fpage>1871</fpage>–<lpage>1882</lpage>, doi: <pub-id pub-id-type="doi">10.1007/s00146-023-01635-y</pub-id>.</mixed-citation></ref><ref id="R26"><mixed-citation publication-type="web"><person-group person-group-type="author"><string-name><surname>Remolina</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2025</year>) ‘<article-title>AI Governance and Algorithmic Auditing in Financial Institutions: Lessons From Singapore</article-title>’, SSRN (Research Paper). Available at: <ext-link ext-link-type="uri" xlink:href="https://ssrn.com/abstract=5199968">https://ssrn.com/abstract=5199968</ext-link> [Accessed: 31 March 2025].</mixed-citation></ref><ref id="R27"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Russell</surname>, <given-names>S.J.</given-names></string-name></person-group> (<year>2019</year>) <source>Human compatible: Artificial intelligence and the problem of control</source>. 
<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Viking</publisher-name>.</mixed-citation></ref><ref id="R28"><mixed-citation publication-type="web"><person-group person-group-type="author"><collab>Saul Ewing LLP</collab></person-group> (<year>2025</year>) <source>The Use of AI in the Insurance Policy Lifecycle and Legal Implications.</source> White paper, 20 February 2025. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.saul.com/">https://www.saul.com/</ext-link> (Accessed: 26 May 2025).</mixed-citation></ref><ref id="R29"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Sharma</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2024</year>) <source>Toward an AI Hippocratic oath</source>. <publisher-loc>Tilburg</publisher-loc>: <publisher-name>TechReg Press</publisher-name>.</mixed-citation></ref><ref id="R30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Vijayaraghavan</surname>, <given-names>A.</given-names></string-name> and <string-name><surname>Badea</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2024</year>) ‘<article-title>Minimum levels of interpretability for artificial moral agents</article-title>’, <source>AI &amp; Ethics</source>, <volume>4</volume>, pp. <fpage>1</fpage>–<lpage>22</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s43681-024-00536-0</pub-id>.</mixed-citation></ref><ref id="R31"><mixed-citation publication-type="book"><person-group person-group-type="editor"><string-name><surname>Yampolskiy</surname>, <given-names>R.V.</given-names></string-name></person-group> (Ed.). (<year>2018</year>) <source>Artificial intelligence safety and security</source>. <publisher-name>Chapman and Hall/CRC</publisher-name>. doi: <pub-id pub-id-type="doi">10.1201/9781351251389</pub-id>.</mixed-citation></ref></ref-list></back></article>
