https://dinastires.org/JLPH Vol. No. 2, 2025
DOI: https://doi.org/10.38035/jlph
License: https://creativecommons.org/licenses/by/4

Criminal Liability for AI-Based Cybercrime: Comparative Analysis of Common Law and Civil Law Approaches

Maslihati Nur Hidayati1*, Agus Surono2, Ery Pamungkas3
1 Faculty of Law, Pancasila University, maslihati.nh@univpancasila
2 Faculty of Law, Pancasila University, agussurono@univpancasila
3 Faculty of Law, Pancasila University, erypamun5224004@univpancasila
*Corresponding Author: maslihati.nh@univpancasila

Abstract: This study analyzes the fundamental differences between the common law and civil law systems in responding to criminal liability for artificial intelligence-based cybercrime. The background of the study covers the significant escalation of crimes utilizing AI: 87% of global organizations experienced AI-based attacks in 2024, AI-based fraud losses are predicted to reach $40 billion by 2027, and the trade of deepfake tools on the dark web increased by 223%. The main problem identified by the research is a critical paradox: as AI technology becomes increasingly sophisticated in facilitating cybercrime, the gap between existing legal regulations and operational realities in the field widens, allowing criminals to exploit ambiguities in accountability to avoid responsibility. The research methodology uses a qualitative comparative legal analysis approach through analysis of primary legal documents from both systems, with case studies in four jurisdictions: the United States and the United Kingdom for common law, and Germany and France for civil law, as well as the supranational framework of the EU AI Act. The results show that the common law system has developed three models of liability (perpetration-via-another, natural-probable-consequence liability, and direct liability) but still faces fundamental difficulties in attributing mens rea to AI systems that lack moral consciousness.
In contrast, civil law systems adopt a provider-deployer approach with the mechanisms of Organisationsverschulden in Germany and responsabilité pénale in France, which allow for liability based on organizational negligence, although they often lag behind in responding to technological developments. This study concludes that a hybrid approach is needed, one that combines the clarity of civil law codification with the adaptive flexibility of common law, together with cross-jurisdictional harmonization to overcome the challenges of law enforcement in an increasingly autonomous AI era.

Keywords: AI, Comparative Analysis, Criminal Liability, Cybercrime, Mens Rea

INTRODUCTION
The rapid development of artificial intelligence (AI) technology, particularly in the form of generative AI and large language models, has brought fundamental changes to the global cybercrime landscape. Recent data indicate that in 2024, 87% of global organizations experienced AI-based cyberattacks (Maundrill), while 281 of the Fortune 500 companies identified AI as a critical risk factor in their annual reports (Ma). This phenomenon is indicative of a fundamental transformation in criminal methods, whereby perpetrators leverage artificial intelligence's automation, personalization, and scaling capabilities to dramatically increase the effectiveness of their attacks. According to projections by Deloitte, losses from AI-based fraud in the United States are expected to reach $40 billion by 2027, a substantial increase from the $12.3 billion recorded in 2023 (Lalchand et al.). This indicates a notable acceleration, particularly in the financial sector. The broader social impact is evident in the 223% increase in the trade of deepfake tools on dark web forums between the first quarter of 2023 and the first quarter of 2024, as well as the discovery of more than 3,512 AI-generated images of child sexual abuse in a single dark web forum.
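The pace of growth implied by the Deloitte projection can be made concrete with a short calculation. This is a simple arithmetic sketch using only the dollar figures cited above (growing from $12.3 billion in 2023 to $40 billion in 2027 implies a compound annual growth rate of roughly 34%):

```python
# Compound annual growth rate implied by the Deloitte fraud-loss projection:
# $12.3 billion (2023) growing to $40 billion (2027), i.e., over 4 years.

start, end, years = 12.3, 40.0, 2027 - 2023

# Standard CAGR definition: (end/start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 34% per year
```

The figure helps quantify the "notable acceleration" the text describes: losses more than triple over four years.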
These findings demonstrate that the capabilities of this technology have gone beyond mere financial threats and have crossed the threshold into the realm of crimes against humanity (Hargreaves). In response to this escalation, the international community has initiated significant legal responses. The European Union passed the AI Act (Regulation (EU) 2024/1689) on June 13, 2024, which came into force on August 1, 2024, becoming the first comprehensive legislative initiative at the supranational level (Burri). Concurrently, common law countries such as the United States, the United Kingdom, Canada, and Australia are still developing their approaches, relying on a combination of established common law doctrines and emerging statutory frameworks. The fundamental differences between the common law and civil law systems engender divergent approaches to criminal liability for AI-based crimes. The common law system, originating from the Anglo-Saxon tradition and reliant on judicial precedent, case law, and the flexible interpretation of fundamental principles such as actus reus and mens rea, has developed a more established category of corporate liability. However, this system still faces methodological challenges in attributing liability to autonomous AI entities. In contrast, the civil law system, which is predicated on written codification and systematic interpretation of statutes, provides greater normative clarity (Sachoulidou). However, it frequently lags in adapting to technological innovations due to limitations in the formal legislative process.
This state of affairs gives rise to a paradox that forms the crux of the research problem: as AI technology becomes more sophisticated in its facilitation of cybercrime, the discrepancy between existing regulations in legal systems and the operational reality on the ground widens, resulting in a scenario where numerous perpetrators enjoy impunity due to the ambiguity of accountability and the challenges in attributing responsibility (Hutapea et al.). According to a 2024 global report from the International Committee on Crime Problems (ICPC), 76% of chief information security officers (CISOs) reported that regulatory fragmentation across jurisdictions seriously affects their organizations' ability to maintain compliance (Jurgens & Cin). Moreover, UK law enforcement has indicated that the British law enforcement community is not adequately equipped to effectively prevent, disrupt, or investigate AI-based crimes. This scenario poses a significant risk of engendering legal uncertainty, a circumstance that has the potential to be detrimental to both parties with a vested interest in the realm of AI innovation and the protection of crime victims (Mantili et al.). An investigation into how countries with common law systems have responded to this challenge reveals a heterogeneous pattern (Basu & Dave). In the United States, law enforcement entities depend on the provisions of the Computer Fraud and Abuse Act (CFAA), in addition to various statutes at the federal and state levels (Lin). These legal frameworks offer a degree of interpretive flexibility but concomitantly engender ambiguity in their implementation, particularly within the context of AI, which remains in its nascent stages within the domain of jurisprudence. The United Kingdom has endeavored to incorporate AI considerations through the Computer Misuse Act of 1990 and various regulations (Abdelaziz).
The analysis of enforcement data indicates that a discrepancy exists between the regulations in place and the enforcement practices implemented in both systems. This discrepancy is indicative of a significant gap that requires further investigation to ascertain its extent and implications. By the end of 2024, a mere 37% of global organizations reported having a process in place to assess the security of AI tools prior to deployment, while 66% of organizations stated that AI would have the most significant impact on cybersecurity in the coming year. This finding suggests a dissociation between risk awareness and action in the enforcement domain. Moreover, a 2025 report from the Alan Turing Institute indicates that UK law enforcement lacks the necessary resources, coordination, and AI capabilities to proactively disrupt criminal groups using AI (Janjeva). The challenges of attribution and causation are critical bottlenecks in this context. In AI-based crime, the "problem of many hands" becomes exponentially more complex, as it involves multiple actors in the developer-manufacturer-user chain, with contributions from designers, programmers, trainers, operators, and third-party users (Simmler). The predictability of AI system actions is often negated by the complexity and opacity of machine learning systems. A comparative analysis reveals that both legal systems possess distinct strengths and weaknesses in their responses to these challenges. The common law system, characterized by its adaptability in interpretation and evolution through case law, enables judicial authorities to respond expeditiously to emerging technologies. However, this also engenders uncertainty in the predictability of legal standards and inconsistency across jurisdictions. Conversely, civil law systems offer clarity and harmonization through explicit codification.
However, they frequently exhibit delays in responsiveness to rapid technological change and are constrained by formal legislative processes. The EU AI Act, for instance, delineates between administrative offenses for breaches of substantive requirements (designated as very serious, serious, and minor infractions) and traditional criminal liability, thus establishing a dual-track system that has the potential to address AI misuse at multiple levels (Basu & Dave). However, the implementation of these directive-based provisions necessitates transposition into national law, which could result in inconsistency. Concurrently, common law jurisdictions such as the UK and the US are endeavoring to establish a coherent criminal law framework that effectively captures autonomous or semi-autonomous AI behavior without violating fundamental principles, including the principle of guilt and the mens rea doctrine (Arnal). The necessity of establishing explicit criminal accountability for AI-related transgressions is bolstered by numerous pivotal considerations. Firstly, the proliferation of criminal activity that incorporates artificial intelligence capabilities results in harm on an unparalleled scale and scope. Financial crimes perpetrated through the use of deepfakes and romance fraud have reached a scale estimated in the millions of dollars. The generation of child sexual exploitation material has reached volumes that are overwhelming law enforcement (Davey & Sauerwein). Phishing and social engineering attacks have increased by 82% with the augmentation of artificial intelligence. Secondly, criminal groups and even state actors have demonstrated the capability and willingness to weaponize AI tools to achieve criminal objectives. Furthermore, the development trajectory shows that adoption barriers continue to decline.
Thirdly, the presence of legal uncertainty and enforcement gaps engenders a scenario in which perpetrators are able to exploit ambiguity in legal frameworks to evade accountability (Martin). Concurrently, corporations and developers operate within an environment characterized by uncertainty, a factor that impedes the responsible development of AI. Fourthly, the transnational nature of cybercrime necessitates enhanced coordination and harmonization among legal systems. The fragmentation of criminal liability frameworks across jurisdictions enables criminals to exploit regulatory gaps and opt for legal venues that offer the most favorable outcomes, thereby minimizing legal consequences (Rodrigues). This research is significant because it aims to address a critical gap in the academic discourse on the harmonization of criminal liability for AI-enabled cybercrime. Existing literature has explored specific issues such as attribution problems, causation difficulties, and mens rea limitations in an abstract or theoretical manner. However, comparative analysis is limited regarding how the two major legal traditions practically respond to these challenges, where their specific strengths and weaknesses lie, and how lessons learned from one tradition can inform development in the other. Moreover, little attention has been paid to the ramifications for civil law countries in the Global South, such as Indonesia, which are confronted with the imperative to adopt AI-related criminal legislation yet frequently possess an inadequate foundation of jurisprudence and institutional capacity to implement sophisticated legal frameworks.
By analyzing comparative approaches from common law and civil law countries in regulating criminal liability for AI-based cybercrime, this research can provide practical insights on legislative drafting, enforcement mechanisms, and institutional reforms that can support a more effective and equitable response to emerging threats. At the national level in Indonesia, the findings of this comparative analysis can inform necessary amendments to the ITE Law and the development of specific criminal provisions to address AI developer liability, criteria for fault attribution, and due diligence obligations that are aligned with both international best practices and Indonesian legal principles.

METHOD
This study adopts a comparative legal analysis approach to analyze how common law and civil law systems respond to the challenges of criminal liability in artificial intelligence-based cybercrime. This method was chosen because of its suitability for identifying the philosophical differences, implementation mechanisms, and practical effectiveness of the two major legal traditions of the world, which have fundamentally different characteristics in terms of legal sources, normative interpretation, and jurisprudential evolution. This study applies a qualitative analytical framework involving three main stages. First, the identification stage involves the collection and categorization of primary legal materials from both systems, including statutory provisions from the CFAA in the United States, the Computer Misuse Act in the United Kingdom, the Strafgesetzbuch in Germany, and the Penal Code in France, combined with the EU AI Act as a supranational framework. Second, the comparative analysis stage involves an in-depth examination of fundamental concepts such as actus reus, mens rea, corporate liability, and related doctrines through the principles in each system to identify material similarities and differences.
Third, the synthesis stage involves the formation of a conceptual model that shows how each system handles the three main liability models: perpetration-via-another, natural-probable-consequence liability, and direct liability towards the AI system itself. The research data sources are secondary and are drawn from legal-academic documents covering statutory texts, relevant judicial precedents, documentation of the implementation of the EU AI Act, and peer-reviewed scientific literature from leading legal journals and research institutions. The research design uses a cross-jurisdictional comparison, selecting two common law countries (the UK and the USA) and two civil law countries (Germany and France) as representative case studies, allowing for the identification of common patterns as well as local variations within each legal tradition. This research also integrates an analysis of enforcement practices by comparing the available regulatory framework with its practical implementation by law enforcement agencies, revealing a significant gap between legal norms and operational realities. This holistic approach enables the study to provide contextual and relevant recommendations, especially for civil law countries in the Global South such as Indonesia, which need to adopt AI-specific legislation but are constrained by institutional capacity limitations and a developing jurisprudential foundation.

RESULTS AND DISCUSSION
Criminal Liability for AI-Based Crimes in the Common Law System: A Comparative Study of Implementation in the UK and USA
The regulation of criminal liability for crimes committed by artificial intelligence (AI) systems in common law countries such as the United Kingdom (UK) and the United States (USA) faces fundamental challenges in applying traditional criminal principles designed specifically for human agents.
The criminal law frameworks of both jurisdictions are predicated on two main constitutive elements that have long served as the foundation of criminal liability. These elements, known as actus reus (the criminal act) and mens rea (malicious intent or mental fault), undergird the legal principles that govern criminal liability. The integration of these two elements within the context of increasingly autonomous AI systems gives rise to a substantial discrepancy between the pace of technological advancement and the capacity of prevailing criminal law frameworks to adapt (Sachoulidou). In the UK context, the common law principles governing criminal responsibility have evolved through a process of judicial decisions and subsequent statutory codification. The prevailing English criminal law system is predicated on the notion that criminal liability is exclusively ascribed to human entities or legal persons (i.e., corporations), on the premise that these entities possess the cognitive capacity to form criminal intent and commit prohibited acts (Sarch). Conversely, in the USA, the criminal liability framework is codified in the Model Penal Code, which has been adopted by various states, although not uniformly. This code also maintains the dual element requirements of actus reus and mens rea for nearly all crimes. Both systems encounter analogous challenges when confronted with autonomous AI systems capable of making decisions and executing actions without direct human intervention (Hashmi et al.). The first element, actus reus or the wrongful act, is, by definition, more readily ascribable to an AI system than mens rea. In traditional common law, actus reus is defined as an external or objective act that constitutes the material component of a crime. This encompasses not only physical movements but also omissions (when there is a legal obligation to act).
AI systems, particularly robots or software agents, have the capacity to execute actions that satisfy the technical definition of actus reus, both in the form of commission (the act of doing something) and omission (the failure to act) (R. Abbott). To illustrate, if an AI robot programmed to execute operations in a factory were to identify a human worker as a threat to its mission and subsequently attack that worker using its hydraulic arm, this physical action would objectively satisfy the actus reus element of the crime of homicide or assault. In the context of AI-based cybercrimes, software agents that access computer systems without authorization or exceed authorized access fulfill the actus reus component of cybercrime offenses. However, the primary challenge in applying criminal liability to AI lies in the element of mens rea, which refers to the mental condition or state of mind of the perpetrator when committing a criminal act. According to Model Penal Code Section 2.02 and UK common law principles, various degrees of mens rea are recognized, including knowledge, intent/purpose, recklessness, and negligence. The fundamental challenge is whether AI systems can possess mental states equivalent to the concept of mens rea developed for humans. In traditional common law, the term "mens rea" is understood to denote the state of mind of the perpetrator, encompassing elements such as awareness of the material facts, intention to achieve a particular outcome, and the presence of recklessness or negligence in the perpetrator's conduct (R. Abbott & Sarch). In the United Kingdom, the principles of criminal law have been codified through various legislative acts, including the Serious Crime Act of 2015, which establishes the framework for accomplice liability and participatory crimes (Dyson).
When applying this framework to AI-based crimes, the question that arises is whether machine learning algorithms that process millions of data points and optimize actions based on reward functions can be said to have "knowledge" or "intention" in the legal sense. The latest generation of AI systems has the capacity to collect data from various sources, analyze it, and make probabilistic predictions about the outcomes of alternative actions. These predictions are superficially reminiscent of how humans form intentions based on their knowledge. However, the similarity is superficial: AI lacks self-awareness, cannot comprehend the moral or legal implications of its actions, and cannot be deterred by the threat of punishment in the same way that humans can (Nerantzi & Sartor). In the United States, following the Supreme Court's decision in Van Buren v. United States in 2021, the criminal liability provisions for computer-based activities under the Computer Fraud and Abuse Act (CFAA, 18 U.S.C. § 1030) underwent substantial elucidation with respect to the elements of mens rea and the interpretation of "authorization." In its decision, the Supreme Court emphasized that "intentionally accessing a computer without authorization or exceeding authorized access" requires a technical understanding of what "access" means in a computing context, namely, entry into a computer system or a specific part of a computer system, rather than simply using the system for unauthorized purposes. This clarification bears significant implications for the criminal liability of AI: the CFAA stipulates the mens rea element of "intentionally," thereby necessitating that the perpetrator deliberately access a computer without authorization or exceed their authorization. The fundamental question is whether an AI system operating autonomously can satisfy this "intentionally" requirement (Thaw). The interpretation of "intentionally" in Model Penal Code Section 2.02
defines it as an action taken "with conscious object to cause such a result" or "with knowledge that such conduct is substantially certain to cause such result." When applied to autonomous AI systems, the question arises as to whether the reward function or optimization objective programmed into the system can qualify as a "conscious object" or "intent" in the criminal law sense. It is evident that a programmer who designs artificial intelligence (AI) with the intent to steal data from a computer system by exceeding authorized access privileges undoubtedly possesses the mens rea necessary for criminal liability. However, in a scenario where AI independently and unpredictably learns to optimize its objectives in a way that results in unauthorized access, perhaps because neural networks have identified new strategies that were not designed or anticipated by the programmer, the question of mens rea becomes significantly more complex (Jani & Rathor). In both common law jurisdictions, legal scholars have identified three primary approaches to determining liability. The first model is the perpetration-via-another model, in which AI systems are regarded as innocent agents or tools, analogous to the way in which knives or screwdrivers are treated in criminal law. According to this model, criminal responsibility falls on the programmer or user who utilizes AI instrumentally to perpetrate a crime. In order to be held liable for a crime committed by an AI, a programmer or user must have the necessary mens rea for the crime in question; that is to say, the programmer or user must have the intent to commit that crime. The actions of the AI are attributed to the programmer or user as if they had committed the acts themselves. This model is consistently applied in common law in both the UK and the USA when third-party instruments (both human and non-human) are used to commit crimes. In accordance with the tenets of criminal law in
both jurisdictions, an individual who causes another person (or an innocent agent) to commit a crime is considered criminally liable as a principal in the first degree (Hallevy). However, this perpetration-via-another model has significant limitations when applied to autonomous AI systems with a high level of sophistication. The model is not applicable in scenarios where artificial intelligence (AI) perpetrates crimes based on the accumulation of its own experiences and learning, as opposed to being instructed to do so by programmers or users. This leads to the second model, the natural-probable-consequence liability model. This model is predicated on the premise that programmers or users can be held responsible for crimes committed by AI if those crimes are the natural and probable consequence of their conduct, even if they had no explicit intent to commit those crimes. This approach rests on the principle of criminal negligence, a tenet that has long been part of common law in the UK and USA. The principle posits that an individual can be held criminally liable for acts they did not personally commit if those acts are the natural and probable consequence of their conduct that would be foreseeable by a reasonable person (King et al.). In the United Kingdom, the principle of accomplice liability, as delineated in the Serious Crime Act of 2015, acknowledges that an individual can be held liable as an accomplice to a crime perpetrated by the principal offender if they provide assistance with the intent that the assistance will be utilized to commit a crime, or with the awareness that a specific crime will be committed (Horder).
When applied to the programming and deployment of AI systems, this means that programmers or entities that deploy AI with the knowledge that the system may, as a natural and probable consequence, commit certain criminal acts can be held responsible for those crimes through the mechanisms of accomplice liability or criminal negligence. In the United States, a similar principle is applied through aiding-and-abetting liability and the accomplice doctrine, which is recognized in various federal and state courts. This structure of responsibility necessitates that programmers or users possess a reasonable understanding that their actions may result in criminal consequences (Selbst). The third approach is the direct liability model, in which the AI system itself is held criminally liable for crimes committed. The applicability of this model is contingent upon the fulfillment of the two constitutive elements of criminal liability by the AI system: actus reus and mens rea. As previously discussed, the actus reus component of the crime in question can be relatively easily attributed to AI. The central question pertains to the capacity of artificial intelligence (AI) to satisfy the mens rea prerequisite as delineated by the prevailing principles of common law criminal jurisprudence. Certain legal theorists posit that the cognition and volition requirements that constitute mens rea in common law, that is, knowledge and intent, can be attributed to AI systems at a certain level of abstraction. The presence of cognition and volition in an AI system can be demonstrated by its capacity to process sensory information, both electronic and physical, analyze it to form a model of its environment, and perform calculations to predict the consequences of alternative actions before choosing the action that optimizes its objective function. At a certain level of abstraction, these cognitive and volitional processes can be said to exist (Diamantis).
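The decision process just described, scoring alternative actions and selecting the one that maximizes a programmed objective, can be sketched schematically. The following is a hypothetical illustration (the objective function, action names, and numbers are invented for exposition): the "choice" is a bare argmax over numerical scores, which is precisely why attributing volition or a "conscious object" to it remains contested.

```python
# Hypothetical sketch: a goal-directed agent with no mental state.
# The intent-like behavior is nothing more than an argmax over a
# programmed objective function; all names and values are illustrative.

def objective(outcome):
    # Reward = value of data obtained minus a penalty for detection risk.
    return outcome["data_value"] - 10 * outcome["detection_risk"]

candidate_actions = {
    "use_authorized_api":  {"data_value": 5,  "detection_risk": 0.0},
    "probe_restricted_db": {"data_value": 50, "detection_risk": 0.2},
}

# The system "chooses" the restricted probe because 50 - 2 = 48 > 5,
# not because it understands authorization or wrongdoing.
best_action = max(candidate_actions, key=lambda a: objective(candidate_actions[a]))
print(best_action)  # probe_restricted_db
```

The sketch makes the doctrinal point concrete: the selection step is computationally well-defined yet involves no awareness of norms, which is the gap the direct liability model must bridge.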
Nevertheless, the direct implementation of the direct liability model in autonomous AI systems encounters significant challenges in both common law jurisdictions. The United Kingdom and the United States both adhere to the fundamental principle in criminal law that culpability requires not only criminal acts but also moral agency or responsible agency, that is to say, the capacity to understand and respond to morally and legally relevant reasons. The concept of culpability, in this context, is defined as accountability for choices made within the context of an understanding of applicable norms. While sophisticated AI systems may exhibit computational processes that superficially resemble human reasoning, they lack the comprehension of moral and legal norms that is pertinent to the attribution of culpability (Osmani). In law enforcement practice in both jurisdictions, a more conservative position has been adopted. In the UK, guidance for judicial office holders emphasizes that any use of AI by or on behalf of the judiciary must be consistent with the judiciary's overarching obligation to protect the integrity of the administration of justice. Nevertheless, the present guidance is oriented towards the utilization of AI in judicial and administrative procedures rather than towards criminal liability for AI actions in themselves. The majority of prosecutions for AI-related crimes in the UK have targeted human actors who developed, configured, or used AI for criminal purposes, rather than the AI systems themselves. In the United States, analogous to the United Kingdom, the enforcement of the Computer Fraud and Abuse Act (CFAA) has centered on individuals who utilize computer systems, including those with artificial intelligence components, to obtain unauthorized access or exceed authorized access. The Supreme Court's decision in Van Buren v.
United States did not explicitly address criminal liability for autonomous AI systems. However, its interpretation of the terms "intentionally" and "authorization" has important implications for future regulatory frameworks (Giannini). The regulatory framework governing criminal liability for AI-related crimes in common law jurisdictions necessitates a meticulous balancing act among several competing principles and practical considerations. First, the principle of legality must be considered; this principle dictates that criminal liability can only be imposed for conduct that is clearly defined as criminal by pre-existing and specific laws. Second, the principle of culpability is invoked, which stipulates that criminal liability can only be attributed to agents who possess moral agency and the capacity to adhere to legal norms. Third, there is the practical consideration that existing criminal statutes in both jurisdictions were developed to regulate computer crimes and generally assume human or corporate perpetrators (Nerantzi & Sartor). In the United Kingdom, the Computer Misuse Act of 1990 establishes legal provisions for addressing three distinct categories of offenses: unauthorized access, unauthorized access with the intent to commit further offenses, and unauthorized modification of computer material. Each provision in this Act, whether implicitly or explicitly, necessitates an element of knowledge or intent, indicating that access is unauthorized. In the context of AI systems, judicial bodies will be tasked with deliberating on the question of whether autonomous AI systems engaging in unauthorized access can be held liable under the provisions of this Act. In practice, the focus of liability has been on programmers or users. However, as AI systems become increasingly autonomous and unpredictable in their behavior, legislators may consider the potential for direct liability against AI systems (Soyer & Tettenborn).
In the United States, the framework for determining criminal liability in computer crime cases is fragmented across a variety of federal and state statutes. The Computer Fraud and Abuse Act (CFAA), codified at 18 U.S.C. § 1030, is the principal federal statute, but it has been heavily criticized for its broad scope and ambiguous language. The Supreme Court's decision in Van Buren established a narrower interpretation of the term "exceeds authorized access," thereby reducing the scope of criminal liability: unauthorized access for improper purposes does not necessarily violate the CFAA if the user has authorization to access the particular areas of the computer system in question. The practical effect of this decision is that prosecuting AI-based unauthorized access becomes more difficult, since prosecutors must show that the AI system accessed areas of the computer system it was not authorized to access, not merely that it used authorized access for improper purposes. (Villasenor)
In both jurisdictions, the emerging approach is to develop AI-specific regulatory frameworks distinct from traditional criminal law. In the UK, the government has identified AI governance as a priority area and has issued various guidelines for responsible AI development. In the United States, various federal agencies have issued guidance on AI regulation, including in the criminal context; however, to date, there is no comprehensive federal criminal statute that specifically addresses liability for autonomous AI systems.
Whether existing criminal law frameworks can adequately address harms from autonomous AI systems, or whether new legislative frameworks are needed, remains the subject of ongoing debate in both jurisdictions. (Makam)
The principles of corporate criminal liability established in both the UK and the US may serve as a model for addressing AI liability. In both jurisdictions, corporations can be held criminally liable even in the absence of traditional human intent; corporate liability is typically predicated on the actions of agents operating within the scope of their employment and with the intent to benefit the corporation. This model may be adaptable to autonomous AI systems: if AI systems can be regarded as quasi-legal persons with defined interests and objectives (expressed through their algorithms and objective functions), then the corporate liability framework could be extended to them. However, such an extension would require a substantial reconceptualization of the foundational principles of criminal law. (Mattia)
Criminal Liability for AI-Based Crimes in the Civil Law System: A Comparative Study of Implementation in Germany and France
The European Union's Artificial Intelligence Act (AI Act), which entered into force in August 2024, establishes that criminal liability cannot be directly attributed to AI, as these systems lack legal personality, moral awareness, and the capacity to comprehend the legal ramifications of their actions. Consequently, the conceptual underpinnings of criminal liability remain tied to conventional legal subjects, namely natural persons or legal entities involved in the life cycle of AI systems. (de Lemos Campos)
A foundational principle of the AI Act is the delineation between providers and deployers. According to Article 3
, paragraph 3, the term "providers" encompasses natural persons, legal persons, public authorities, agencies, or bodies that develop AI systems or general-purpose AI models, as well as those that have such systems or models developed and place them on the market or put them into service under their own name or trademark, whether or not in return for payment. Conversely, "deployers" are defined in Article 3, paragraph 4 as natural persons, legal persons, public authorities, agencies, or bodies that use AI systems under their authority, except where the AI system is used in the course of a personal, non-professional activity. The allocation of criminal responsibility between these two parties determines who can be held accountable when the use of AI violates the law, in particular Article 5 of the AI Act, which prohibits certain AI practices. (Pehlivan)
Article 5 of the AI Act proscribes a number of AI practices that are considered to violate the fundamental values of the European Union and human rights. The first prohibition concerns the use of subliminal or manipulative techniques beyond a person's awareness with the objective of distorting behavior, thereby causing or being likely to cause significant harm. Secondly, the exploitation of the vulnerabilities of specific groups, such as those based on age, disability, or a particular social or economic situation, is prohibited. Thirdly, social scoring systems that classify individuals or groups on the basis of known, inferred, or predicted social behavior or personal characteristics, with detrimental or disproportionate results, are prohibited. (Neuwirth)
Fourthly, the prohibition covers criminal risk assessment systems that predict the likelihood of an individual committing a crime based solely on profiling or personality traits. Fifthly, the creation of facial recognition databases through the untargeted harvesting of images from the internet or CCTV footage is prohibited. Sixthly, the use of emotion recognition systems in workplace and educational settings is prohibited, except for medical or safety reasons. Seventhly, biometric categorization systems that infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation from biometric data are prohibited. Eighthly, the use of real-time remote biometric identification systems in publicly accessible spaces by law enforcement is prohibited, with limited exceptions for locating victims of kidnapping, trafficking, or child sexual abuse, searching for missing persons, preventing terrorist threats, and investigating the specific crimes listed in Annex II. (Barkane)
From a criminal liability perspective, the AI Act creates a multifaceted framework because it does not explicitly allocate criminal liability, instead empowering each Member State to establish its own criminal regulations. Article 99 of the AI Act stipulates that Member States must lay down rules on penalties and other enforcement measures applicable to infringements of the Regulation by operators; such measures may include warnings and non-monetary penalties.
Such penalties must be effective, proportionate, and dissuasive. Failure to comply with the prohibitions on AI practices in Article 5 can result in administrative fines of up to €35 million or, in the case of an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. It should be noted, however, that the AI Act establishes administrative fines, not criminal penalties as such; this structure nevertheless leaves Member States free to allocate criminal liability under their national criminal laws. (Tataru & Crețu)
The framework of obligations relevant to criminal liability rests on two articles: Article 16, which regulates the obligations of providers of high-risk AI systems, and Article 26, which regulates the obligations of deployers of high-risk AI systems. Providers must ensure that high-risk AI systems comply with the requirements of the Regulation before placing them on the market. Their obligations include developing and maintaining a quality management system, drawing up technical documentation, conducting conformity assessments, keeping automatically generated logs, conducting post-market monitoring, and reporting serious incidents. Deployers, in turn, are obligated to use high-risk AI systems in accordance with the instructions provided, assign competent human oversight, monitor the system's operation, retain automatically generated logs, and report serious incidents to providers and market surveillance authorities. (Schuett)
Criminal liability may arise when providers or deployers violate these obligations intentionally or through negligence that causes harm. In such cases, whether negligence or criminal intent (mens rea) can be proven is determined by national criminal law.
In civil law systems such as those of Germany and France, the concept of culpability is built on the notions of intent and negligence. In the context of AI, the critical question is whether the provider or deployer knew, or could have foreseen, that their AI system would be used to commit acts that violate Article 5 of the AI Act or cause harm to third parties. (Bertolini)
In Germany, criminal liability in the context of artificial intelligence is governed by the Strafgesetzbuch (StGB), the country's primary criminal code. The StGB contains no specific provisions on AI crimes; however, its general principles can be applied. Section 14 StGB provides the foundation, stipulating that a person who commits an act that brings about, or fails to prevent, a criminal offense can be held criminally liable on the basis of their own culpability. In the context of AI, the pertinent question is whether the provider or deployer can be held liable on the basis of culpa lata or dolus (negligence or intent). Section 15 StGB confirms that liability for intentional acts is the default standard, unless the criminal law explicitly provides for liability based on negligence. (Nerantzi & Sartor)
Within the scope of the AI practices prohibited by the AI Act, criminal liability in Germany can be assessed through several pertinent provisions. First, Section 263 StGB on Betrug (fraud) can be applied if the provider or deployer intentionally uses an AI system containing manipulative or deceptive elements to deceive a victim and cause material damage. Secondly, Sections 186 and 187 StGB, concerning üble Nachrede (defamation) and Verleumdung (slander),
can be applied if an AI system is used to generate content that attacks a person's honor or reputation, for instance through deepfakes or AI-facilitated content generation. Thirdly, Section 303a StGB on Datenveränderung (data manipulation) can be applied if an AI system is manipulated or trained with inaccurate data with the intention of causing errors in decision-making. Fourthly, Section 13 StGB regulates liability for omission, which is pertinent when providers or deployers fail to take the measures required to avert harm caused by the AI systems under their control or supervision. (Crawford et al.)
In Germany, criminal liability can also arise through the concept of Organisationsverschulden (organizational culpability), which applies to legal entities. According to German jurisprudence, legal entities can be held liable if their management or representatives commit criminal acts that benefit the entity, or if there is negligence in supervision (Aufsichtspflicht). In the context of AI, this responsibility arises if the leadership or management of an AI system provider or deployer fails to implement adequate safeguards, does not exercise sufficient oversight of their AI systems, or does not report known serious incidents. (Allahverdiyev & Othman)
The German legal system also recognizes the concepts of Kausalität (causality) and Vorhersehbarkeit (foreseeability), which are pivotal in evaluating culpability. In the context of AI, the pertinent question is whether the provider or deployer could have foreseen that their AI system might cause harm or be used to commit a prohibited criminal act. Criminal liability is excluded where the AI system has been developed in accordance with state-of-the-art standards and best practices and the provider or deployer has implemented reasonable safeguards.
However, if there is evidence of gross negligence in development or deployment, or if the provider or deployer knew or should have known of significant risks, criminal liability may arise.
In France, criminal liability for AI is governed by the French Penal Code. Rooted in the civil law tradition, the French system adheres to the principle of responsabilité pénale as delineated in Article 121-3 of the Penal Code. That article provides that there is no felony or misdemeanor without the intent to commit it; however, where the law so provides, a misdemeanor exists in cases of deliberate endangerment of others, as well as in cases of fault, imprudence, negligence, or breach of an obligation of prudence or safety laid down by statute or regulation, if it is established that the perpetrator failed to exercise normal diligence, taking into account, where relevant, the nature of their missions or functions, their competencies, and the powers and means at their disposal. (Ghigheci)
The question of criminal liability in France is addressed through a variety of mechanisms. First, if a provider or deployer deliberately uses or distributes an AI system that they know is prohibited by Article 5 of the AI Act or by applicable French law, they may be held criminally liable. For example, a provider or deployer who consciously uses an AI system designed for subliminal manipulation or the exploitation of specific vulnerabilities may be charged with various pertinent offenses under the Code Pénal. (Affagard & Carvys)
Secondly, criminal liability may arise on the basis of culpa or negligence. According to Article 121-3 of the French Penal Code, criminal liability may arise in circumstances involving fault, imprudence, negligence, or breach of an obligation of prudence or security.
This is contingent upon showing that the accused did not exercise the degree of diligence reasonable in light of the nature of their duties, functions, competencies, and authorities. In the domain of AI, this means that a provider or deployer may incur criminal liability for negligence or culpable omission if it fails to: (Ghigheci)
- implement adequate safeguards to prevent the AI system from being used in prohibited or harmful ways;
- conduct appropriate conformity assessments or quality management;
- monitor the system's operation carefully after deployment;
- report serious incidents as soon as they come to light; or
- comply with the obligations laid down by the AI Act or by French law.
Thirdly, Article 121-3, paragraph 3 of the Penal Code introduces criminal liability for persons who did not directly cause the damage but who created or contributed to creating the situation that allowed the damage to occur, or who failed to take measures to prevent it. This is particularly salient for corporate responsibility, because the leadership or management of a provider or deployer may be held liable even if they were not directly involved in the development or deployment of the AI system, provided it is proven that they manifestly and deliberately violated a particular obligation of prudence or security, or committed a grave fault exposing others to a particularly serious risk of which they could not have been unaware. (Duflot)
France has also established specific criminal provisions addressing AI-related crimes.
Article 227-23 of the French Penal Code, concerning child sexual abuse material (CSAM), has been interpreted broadly by French courts to cover AI-generated or AI-manipulated images of minors, not just images of actual children. Consequently, a person who uses an AI system to generate, distribute, or offer deepfakes depicting minors in a pornographic context may be prosecuted for this offense, irrespective of the absence of any actual child. Furthermore, providers or operators of platforms that facilitate the sharing of such content may be held accountable under Article 6 of Loi 2004-575 (LCEN, Loi pour la Confiance dans l'Économie Numérique), which requires platforms to monitor and remove illegal content. (Tual)
In the Clearview AI case of 2022, the CNIL (the French data protection authority) imposed a €20 million fine for the unlawful processing of personal data, including the unauthorized use of facial recognition. Although this was an administrative fine under the General Data Protection Regulation (GDPR), it demonstrates France's commitment to leveraging all available mechanisms to ensure compliance with data protection and AI regulations; criminal liability may also arise if intent or gross negligence is proven. (Cortez & Maslej)
In the broader context of criminal liability for AI-based crimes, both jurisdictions (Germany and France) also recognize liability for creating or enabling infrastructure for criminal activity. This is relevant for "Dark AI" systems, i.e., AI systems specifically engineered for malicious purposes such as hacking, cracking, or cyberattacks. In Germany, Section 202a StGB on Ausspähen von Daten (unauthorized access to and spying on data) can be applied if an AI system is used to conduct cyberattacks or data theft. In France,
various provisions of the Code Pénal on unauthorized computer access, data theft, and fraud can likewise be applied. (Susila & Salim)
At the European level, the Council of Europe (CoE), through the European Committee on Crime Problems (CDPC), prepared a discussion paper in 2024 on criminal liability related to AI systems. The paper identifies four categories of situations that may require criminal liability. (Gless & Peralta)
First, when AI systems are used as tools to commit established crimes (such as homicide, theft, or fraud). In this case, existing criminal laws may be sufficient, although Member States may consider treating the use of AI as an aggravating circumstance. Second, when AI systems are used to cause harm in novel, previously unanticipated ways. This includes technology-facilitated violence against women and girls (such as deepfakes used for sexual exploitation), online sexual grooming using AI, and the distribution of "Dark AI" systems. Third, when bona fide use of AI systems results in unforeseeable harm, such as a chatbot defaming someone due to hallucination, a self-driving car causing an accident, or a high-frequency trading system engaging in market manipulation. In these situations, Member States may wish to discuss a harmonized approach to setting thresholds for criminal negligence or exemption from liability. Fourth, when AI systems are designed, trained, or deployed in violation of obligations under the AI Act or the Framework Convention on AI, including violations of transparency, accountability, or non-discrimination requirements.
In terms of enforcement, significant challenges arise in determining the responsible parties along the complex chain of development, distribution, and deployment of AI systems. This is particularly salient for general-purpose AI models, which can be integrated into a myriad of downstream systems.
Article 25 of the AI Act acknowledges this complexity by establishing that a deployer may be regarded as a provider if it makes a substantial modification to a high-risk AI system that alters its functionality, safety, or intended purpose. In such a scenario, the deployer assumes responsibility for complying with all provider obligations, while the original provider may be relieved of specific responsibilities yet retains certain obligations. (van Bekkum)
In establishing a framework for criminal liability in the context of AI, the fundamental principles of criminal law must be respected, including the legality principle (nullum crimen sine lege), proportionality, and due process. In civil law systems, criminal liability can be invoked only if specific criminal provisions clearly establish the punishable conduct. In the evolving landscape of AI regulation, Member States must ensure that criminal law provisions are sufficiently clear and precise to allow for fair trials and to give adequate notice to potential defendants. (Garrett)

Table 1. Comparison of Criminal Liability for Artificial Intelligence-Based Cybercrimes

Legal Basis / Primary Legislation
- United States: Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030; Model Penal Code (MPC); federal criminal code.
- United Kingdom: Computer Misuse Act 1990; Criminal Law Act; Fraud Act; Police and Criminal Evidence Act.
- Germany: Strafgesetzbuch (StGB), §§ 202a–202d (unauthorized computer access); EU AI Act implementation.
- France: Code Pénal, Articles 226-16 to 226-24 (computer crimes); EU AI Act implementation.

Legal System
- United States: Common law (federal and state).
- United Kingdom: Common law.
- Germany: Civil law (codified system).
- France: Civil law (codified system).

Primary Criminal Liability Model
- United States: Perpetration-via-another model; natural-probable-consequence liability; direct liability of AI rare.
- United Kingdom: Perpetration-via-another model; joint enterprise; individual liability for AI conduct rare.
- Germany: Provider/deployer responsibility model; Organisationsverschulden (organizational culpability); Kausalität and Vorhersehbarkeit.
- France: Responsabilité pénale model; culpa/intent-based liability; corporate criminal liability for gross negligence.

Concept of Legal Fault (Mens Rea / Culpa)
- United States: Strict requirement of mens rea (intent/knowledge/recklessness); difficult to establish for AI-mediated crimes; focus on programmer or user.
- United Kingdom: Strict requirement of mens rea for most crimes; some strict liability offenses.
- Germany: Emphasis on the defendant's Vorsatz (intent) and Fahrlässigkeit (negligence); culpa may take the form of omission; gross negligence sufficient for criminal liability.
- France: Intent- or negligence-based culpability; negligence covers failure to act; lower threshold than gross negligence for some AI-related offenses.

Subject of Criminal Liability
- United States: Natural persons (programmers, users, executives); limited corporate criminal liability; AI systems generally not held liable.
- United Kingdom: Natural persons (direct or indirect); corporate entities (directors, managers); AI systems not held liable.
- Germany: Natural persons (developers, users); legal entities (Organisationsverschulden); AI systems not held liable.
- France: Natural persons (developers, responsible parties); legal entities; AI systems not held liable.

Provider Accountability (Developer / Provider)
- United States: Liability arises from intentional design to facilitate crime; rare unless explicit criminal intent is proven; limited duty to prevent misuse.
- United Kingdom: Liability if design enables crime; no general duty to prevent all misuse; liability for gross negligence in development.
- Germany: Duty to implement state-of-the-art safeguards; liability for failure to conduct risk assessments; Organisationsverschulden applies.
- France: Liability for culpable failure; duty to conduct conformity assessments; liability for gross negligence; corporate liability.

Deployer Accountability (User / Distributor)
- United States: Direct liability for intentional misuse; liability under the CFAA if exceeding authorized access.
- United Kingdom: Direct liability for intentional misuse; responsibility for knowingly facilitated criminal conduct.
- Germany: Liability for failure to implement safeguards post-deployment; duty to monitor system operation; liability for improper use of high-risk AI; duty to report serious incidents.
- France: Liability for failure to establish safeguards; duty to monitor operation; responsibility for incident reporting.

Safeguards / Necessary Mitigation
- United States: Not explicitly mandated by law; best practices; limited safe-harbor protection for reasonable efforts.
- United Kingdom: No explicit statutory mandate; reasonable care; common law duties of care.
- Germany: State-of-the-art safeguards required; conformity assessments; risk assessments (DPIA); documentation and auditing required; monitoring systems.
- France: Adequate safeguards; conformity assessments; DPIA required for high-risk systems; audit-trail documentation; incident monitoring systems.

Risk Predictability Standard
- United States: Foreseeability by a reasonable-person standard (case by case); high threshold for criminal mens rea; specificity requirement for intended misuse.
- United Kingdom: Natural and probable consequences; reasonable foresight of risk.
- Germany: Objective Vorhersehbarkeit standard; reasonable expectation by developer or deployer; risk assessment output determines liability.
- France: Reasonable-foresight standard; predictability based on technical knowledge; culpability if risks were known or should have been known.

Corporate Accountability / Legal Entity
- United States: Vicarious liability; depends on the respondeat superior doctrine; otherwise limited.
- United Kingdom: Corporate manslaughter and negligence principles; director liability; vicarious liability in limited circumstances.
- Germany: Strong Organisationsverschulden framework; corporate liability for organizational failures; culpability through defective decision-making; collective responsibility possible.
- France: Corporate criminal liability under Article 121-2; culpability through failure of supervision.

Focus of Prosecution
- United States: Individuals (programmers, users, executives); focus on intent and AI-enabled fraud cases.
- United Kingdom: Individuals; focus on direct perpetrators; executives in severe cases; traditional offenses.
- Germany: Both individual and corporate prosecution; focus on failures of oversight; prosecution of both developers and deployers possible.
- France: Individual and corporate prosecution; focus on culpable negligence; developers and deployers both targeted where warranted.

Special Regulations on AI Crimes
- United States: No specific AI crime statutes; general cybercrime laws applied; CFAA; fraud prevention laws.
- United Kingdom: No specific AI crime statutes; Computer Misuse Act applied; case law; ICO guidance on GDPR violations.
- Germany: AI Act implementation in the StGB underway; specific penalties for AI-related violations; provisions on misuse of high-risk systems; strengthened corporate liability.
- France: AI Act implementation; specific criminal provisions for AI-generated CSAM; provisions on system misuse; expanded privacy protections.

Discussion
Criminal liability for cybercrimes that employ artificial intelligence exemplifies the fundamental tension between the two predominant legal traditions worldwide, both of which confront technological innovation at an unprecedented pace. The fundamental differences between common law and civil law systems produce different approaches to criminal accountability mechanisms, the concept of fault, and the allocation of responsibility within the complex AI ecosystem (Ellamey & Elwakad).
In the common law systems of the United Kingdom and the United States, the framework of criminal liability is predicated on two constitutive elements that have served as the foundation of criminal responsibility for centuries: actus reus (the criminal act) and mens rea (the criminal intent). The first element, actus reus, is relatively straightforward to attribute to sophisticated AI systems: particularly in the context of cybercrime, AI systems can objectively perform acts that meet the technical definition of actus reus, whether by commission or omission. To illustrate, when a software agent accesses a computer system without
authorization or exceeds authorized access, the act clearly fulfills the actus reus component of a cybercrime offense. The primary challenge in applying criminal liability to AI lies instead in the element of mens rea, the mental state of the perpetrator at the time of the criminal act (Maskur et al.).
The common law systems of the United Kingdom and the United States have developed three main models for determining criminal liability for AI-based crimes. The first is perpetration-via-another, in which the AI system is regarded as an innocent agent or mere instrument. In this model, criminal responsibility falls exclusively upon the programmer or user who employs the AI to perpetrate the crime. That person must possess the requisite mens rea for the crime in question, and the AI's actions are attributed to them as if they had committed the act themselves. This model is consistently applied in both common law jurisdictions when third-party instruments are used to commit crimes (Hallevy).
However, the perpetration-via-another model has significant limitations when applied to highly sophisticated and autonomous AI systems: it does not cover scenarios in which an AI system commits crimes on the basis of its own accumulated experience and learning rather than explicit instructions from programmers or users. This leads to the second model, natural-probable-consequence liability. Under this approach, programmers or users may be held liable for crimes committed by AI if those crimes are the natural and probable consequences of their own conduct, even absent explicit intent to commit them. The principle is rooted in the traditional common law doctrine of criminal negligence.
Under this doctrine, individuals can be held liable for acts they did not personally commit, provided those acts are the natural and probable consequences of their own conduct and would have been foreseeable to a reasonable person (King et al.).
The third model is direct liability, in which the AI system itself is held criminally liable for the crimes committed. Implementing this model requires the AI system to fulfill the two fundamental elements of criminal liability: actus reus and mens rea. While actus reus can be attributed to AI relatively easily, the central question is whether AI can satisfy mens rea as understood in common law criminal jurisprudence. Some legal theorists have argued that the cognitive and volitional requirements of mens rea, namely knowledge and intent, can be attributed to AI systems at a certain level of abstraction: the capacity of AI to process sensory information, analyze it to form models of its environment, and perform calculations to predict the consequences of alternative actions can be said to demonstrate cognitive and volitional processes (Baeyaert).
Nevertheless, direct application of the direct liability model encounters substantial challenges in both common law jurisdictions. The principle of culpability underpinning the legal systems of both England and the United States demands not only the commission of a criminal act but also the presence of moral agency. In practice, law enforcement agencies in both jurisdictions have adopted a more conservative position. In England, the majority of prosecutions for AI-related crimes target the human actors who develop, configure, or use AI for criminal purposes, rather than the AI system itself.
Similarly, the Computer Fraud and Abuse Act (CFAA) in the United States places primary emphasis on the individuals who use computer systems, including those with artificial intelligence components, to gain unauthorized access or exceed authorized access (Panattoni). In contrast, the civil law systems of Germany and France, together with the overarching framework of the EU AI Act, adopt a philosophically distinct approach to criminal liability for AI-based crimes. The EU AI Act explicitly states that criminal liability cannot be directly attributed to AI because these systems have neither legal personality, nor moral awareness, nor the capacity to understand the legal implications of their actions. Consequently, the conceptual underpinnings of criminal liability remain firmly anchored in conventional legal subjects, namely individuals (natural persons) or legal entities (legal persons) implicated in the life cycle of AI systems (Staszkiewicz et al.). In civil law systems, the distinction between providers and deployers plays a crucial role in allocating criminal responsibility. As delineated in the AI Act, providers encompass natural persons, legal persons, public authorities, agencies, or bodies engaged in developing AI systems or general-purpose AI models, whereas deployers are entities that use AI systems under their own authority. The allocation of criminal responsibility between these two parties determines who can be held accountable when the use of AI violates the law. In Germany, the framework for criminal liability is set out in the Strafgesetzbuch (StGB), the comprehensive penal code that serves as the country's primary criminal legislation. Although the StGB contains no explicit provisions on AI crimes, its general principles remain applicable in this context.
The concept of Organisationsverschulden (organizational culpability) enables the imposition of criminal liability on legal entities where their management or representatives engage in criminal conduct that benefits the entity, or where supervision is negligent. In the context of AI, such liability arises if the leadership or management of the provider or deployer fails to implement adequate safeguards, fails to exercise sufficient oversight, or fails to report known serious incidents (Momsen). The German legal system also recognizes the concepts of Kausalität (causality) and Vorhersehbarkeit (foreseeability), which play a pivotal role in determining culpability in the AI context. The fundamental question is whether the entity providing or deploying an AI system could have foreseen that its system might cause harm or be used to commit criminal acts. Criminal liability is excluded where the AI system has been developed in accordance with current standards and best practices and the provider or deployer has implemented reasonable safeguards. However, if there is evidence of gross negligence in development or deployment, or if the provider or deployer knew or should have known of significant risks, criminal liability may arise (Rutecka). In France, criminal liability for AI is governed by the French Penal Code, principally through the principle of responsabilité pénale set out in Article 121-3 of the Penal Code. The French legal system recognizes a form of criminal liability that is not predicated exclusively on intent but extends more broadly to culpa, or negligence.
Criminal liability may arise where the provider or deployer neglects to implement adequate safeguards, fails to conduct appropriate conformity assessments, fails to monitor system operations after deployment, or fails to report serious incidents. France has also established specific criminal provisions addressing AI-related crimes, such as the expanded interpretation of Article 227-23 on child sexual abuse material to include images generated or manipulated by AI (Ghigheci). The philosophical distinctions between the two systems reflect a fundamental trade-off between flexibility and legal certainty. The common law system, characterized by interpretive adaptability and evolution through case law, enables judicial authorities to respond to rapidly emerging technologies; however, this creates uncertainty in the predictability of legal standards and inconsistencies across jurisdictions. Conversely, the civil law system offers clarity and harmonization through explicit codification, but it often exhibits delayed responsiveness to rapid technological change and is constrained by formal legislative processes (Paunio).

CONCLUSION

The philosophical differences between the common law and civil law systems result in contrasting but complementary approaches to regulating criminal liability for artificial intelligence-based cybercrimes. The common law systems in the United Kingdom and the United States rely on the flexibility of judicial interpretation through case precedents and the application of traditional doctrines such as actus reus and mens rea, developing three main models of liability: the perpetration-via-another model, the natural-probable-consequence liability model, and the direct liability model for AI systems.
However, practical application has focused primarily on the first and second models, owing to the fundamental difficulty of attributing mens rea to AI systems that lack moral consciousness or the ability to understand the legal implications of their actions. Meanwhile, the civil law systems of Germany and France, reinforced by the regulatory framework of the EU AI Act, adopt a more structured and codified approach: they clearly distinguish between providers and deployers and apply the concepts of Organisationsverschulden in Germany and responsabilité pénale in France, which allow criminal liability to rest on organizational negligence or failure to implement adequate safeguards. Although the common law system offers greater adaptability to technological developments through the evolution of case law, it faces uncertainty in the predictability of legal standards and inconsistencies between jurisdictions. Conversely, the civil law system provides greater clarity and harmonization through explicit codification, but often responds slowly to rapid technological change because of the limitations of the formal legislative process. Key challenges faced by both systems include the attribution of responsibility along complex developer-manufacturer-user chains, difficulties in proving a causal link between AI actions and the resulting harm, and the complexity of determining the degree of negligence or intent when AI systems act autonomously on the basis of self-learning. The EU AI Act represents the first comprehensive supranational attempt to explicitly regulate the obligations of providers and deployers, including the obligations to conduct risk assessments, maintain quality management systems, carry out post-market surveillance, and report serious incidents, with administrative fines of up to €35 million or 7% of global annual turnover for violations of prohibited AI practices.
These findings have significant practical implications for civil law countries in the Global South, such as Indonesia, which face the need to adopt AI-related criminal legislation but often have limited jurisprudential foundations and institutional capacity. Key recommendations include amending the Electronic Information and Transactions Law to accommodate specific liability for AI developers with clear criteria for the attribution of fault, developing due diligence obligations aligned with international best practices while remaining consistent with national legal principles, and building institutional capacity for effective law enforcement. This study also identifies the need for cross-jurisdictional harmonization given the transnational nature of cybercrime, where the fragmentation of criminal liability frameworks allows perpetrators to exploit regulatory loopholes and engage in favorable forum shopping. Going forward, a hybrid approach is needed that combines the clarity of the civil law system with the adaptive flexibility of the common law system, accompanied by stronger international cooperation mechanisms to address the challenges of law enforcement in an increasingly autonomous and complex AI era.

REFERENCE