Jurnal Ilmu Hukum Kyadiren, Vol. 7, January 2026. Copyright © Authors. JIHK is licensed under a Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. DOI: 10.46924/jihk

Artificial Intelligence and Criminal Liability: A Preliminary Study within the Indonesian Legal System

Novelina Mutiara Sariati Hutapea1*, Desy Kartika Caronina Sitepu2, Jenriswandi Damanik3, & Srikandi Karmeli Lusia Sianipar4
1,2,3,4 Universitas Simalungun, Indonesia

Correspondence: Novelina Mutiara Sariati Hutapea, Universitas Simalungun, Indonesia. Jl. Sisingamangaraja Barat, Bah Kapul, Kec. Siantar Sitalasari, Kota Pematang Siantar, Sumatera Utara 21142; e-mail: hutapea@yahoo.

How to cite: Hutapea, Novelina Mutiara Sariati, Sitepu, Desy Kartika Caronina, Damanik, Jenriswandi, & Sianipar, Srikandi Karmeli Lusia. Artificial Intelligence and Criminal Liability: A Preliminary Study within the Indonesian Legal System. Jurnal Ilmu Hukum Kyadiren 7, 688–704. https://doi.org/10.46924/jihk

Original Article

Abstract
The rapid development of Artificial Intelligence (AI) presents significant challenges for Indonesian criminal law, particularly in determining accountability for actions involving AI, whether as an auxiliary tool or as an indirect perpetrator. This study seeks to examine the current criminal law framework, identify deficiencies in the Criminal Code (KUHP), the Electronic Information and Transactions Law (ITE Law), and related regulations, and assess the applicability of alternative liability models, including vicarious liability and strict liability.
Employing a normative juridical approach, through the analysis of statutory provisions, legal doctrine, and international case studies, this research finds that national regulations remain predominantly reactive, fail to adequately anticipate the emergence of autonomous AI, and encounter technical evidentiary challenges arising from the "black box" phenomenon. The findings suggest that alternative liability models are better suited to the distinctive characteristics of AI. The study concludes that a responsive reformulation of criminal law norms is essential to ensure legal certainty, protect victims, and facilitate effective law enforcement in the context of AI.

Keywords: Artificial Intelligence, Criminal Law, Criminal Liability

Abstrak
Perkembangan Artificial Intelligence (AI) menimbulkan tantangan baru bagi hukum pidana Indonesia, khususnya dalam menentukan pertanggungjawaban atas tindakan yang melibatkan AI, baik sebagai alat bantu maupun pelaku tidak langsung. Penelitian ini bertujuan menganalisis kerangka hukum pidana yang berlaku, mengidentifikasi kelemahan dalam KUHP, UU ITE, dan regulasi terkait, serta mengkaji relevansi model pertanggungjawaban alternatif seperti vicarious liability dan strict liability. Menggunakan metode yuridis normatif dengan analisis peraturan perundang-undangan, doktrin hukum, dan studi kasus internasional, penelitian menemukan bahwa regulasi nasional masih bersifat reaktif, belum mengantisipasi AI otonom, dan menghadapi kendala teknis pembuktian akibat fenomena black box. Model pertanggungjawaban alternatif dinilai lebih adaptif terhadap karakteristik AI. Kesimpulannya, diperlukan reformulasi norma hukum pidana yang responsif untuk memastikan kepastian hukum, perlindungan korban, dan efektivitas penegakan hukum untuk AI.

Kata kunci: Kecerdasan Buatan, Hukum Pidana, Pertanggungjawaban Pidana
INTRODUCTION

Over the past two decades, advancements in information and communication technology have profoundly transformed the paradigms of human interaction, economic activity, and legal systems. Among the most transformative innovations is Artificial Intelligence (AI), which has evolved far beyond simple automation to become autonomous systems capable of complex decision-making. While this development offers substantial opportunities across various sectors, including law enforcement, it also poses fundamental challenges to criminal justice systems that were historically designed to regulate human behavior.

In Indonesia, AI has permeated numerous sectors, from manufacturing and banking to autonomous transportation, public services, and even the judiciary. However, alongside these benefits emerge significant risks, including deepfakes, data manipulation, adaptive cyberattacks, and accidents involving autonomous vehicles. These risks raise a critical legal question: when AI engages in unlawful conduct or causes harm, who should bear criminal responsibility? This question is further complicated by the fact that AI systems, driven by self-learning algorithms (machine learning), may exhibit behaviors unpredictable even to their creators.

The Indonesian criminal law system, as codified in the Criminal Code and various sector-specific laws such as the Electronic Information and Transactions Law, adheres to the principle of geen straf zonder schuld ("no punishment without fault"), which requires both mens rea (criminal intent) and actus reus (criminal act) on the part of a legally responsible subject. However, AI lacks consciousness, volition, and moral capacity, and therefore does not meet the criteria for a criminal law subject.
This creates a legal vacuum in which the existing framework does not recognize the possibility of AI functioning as a direct perpetrator of criminal acts. Several jurisdictions have attempted to address this issue. The European Union, for instance, through its European Parliament Resolution on Civil Law Rules on Robotics, has introduced the concept of "electronic personhood" for certain highly autonomous AI systems.1 Other jurisdictions, such as the United States and Japan, favor vicarious or strict liability models, holding operators, controllers, or manufacturers accountable. Nonetheless, there is no global consensus on the most appropriate framework, and academic debate remains ongoing.

In Indonesia, scholarly discussions on AI criminal liability remain relatively nascent, often focusing on conceptual and ethical considerations rather than on operational frameworks aligned with national criminal law principles. Previous research has primarily addressed the urgency of AI regulation, its potential recognition as a legal subject, and ethical risk assessments, without offering concrete models for legal reform. Moreover, the rapid adoption of AI in both commercial and public domains underscores the risks of delayed legal adaptation, which could lead to legal uncertainty, enforcement challenges, and societal harm.

Sulistio and Salsabilla emphasize the necessity of regulating AI as a legal entity within Indonesian criminal law, noting the increasing human-like capacities of AI to perform acts that may infringe legal interests.

1 The European Parliament, "European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics" (Strasbourg: The European Parliament, 2017), https://w.eu/doceo/document/TA-8-2017-0051_EN.
Employing a normative approach, their research advocates for the eventual recognition of AI as a legal subject to enable direct criminal prosecution.2 Continuing this discourse, Setyawan highlights the absence of AI-related provisions in the new Criminal Code (Law No. 1 of 2023). Through a normative and comparative legal analysis, he demonstrates that several jurisdictions have adopted AI as a legal subject, either directly or via vicarious liability, while Indonesia continues to hold accountable those who control or utilize AI.

Ikawati et al. examine the application of AI from a judicial perspective, particularly as a tool to assist judges and court officials in managing cases. While emphasizing efficiency and effectiveness, they underscore the risks of algorithmic bias, privacy violations, and data security breaches. Their work focuses more on ethical considerations and technology governance than on issues of direct criminal liability. Similarly, Nasman et al. adopt an ethical and regulatory approach, emphasizing four fundamental pillars of responsible AI use: transparency, accountability, fairness, and data security and privacy. Employing a normative-comparative methodology, they recommend strengthening regulatory frameworks to minimize the potential misuse of AI. Although their study does not center on criminal sanctions, it stresses the importance of developing an adaptive legal framework.

Hibatulloh directs his research toward law enforcement measures against AI-related offenses, using conceptual, legislative, and comparative approaches. He concludes that AI is not yet recognized as a legal subject under Indonesian law; criminal liability can only be attributed to individuals or legal entities formally acknowledged by law. Beryl, in a similar vein, advocates for regulations that explicitly address AI as a potential criminal actor.6 Sofian also addresses this issue, emphasizing the need for future criminal law reform (ius constituendum) to incorporate AI as a legal subject. He argues that several jurisdictions have already adopted such recognition and that Indonesia should consider following suit to address potential legal violations committed by autonomous AI systems.

In the context of judicial duties, Rondonuwu et al. analyze the use of AI based on existing legal instruments, such as Law No. 19 of 2016 and the Circular Letter of the Minister of Communication and Information Technology No. 9/2023. They conclude that, under current Indonesian law, AI is regarded as an electronic agent rather than a criminal law subject. Their study focuses on integrating AI into judicial work and the administrative arrangements necessary to facilitate this integration. Wahyudi BR provides a broader examination of AI-related crimes, including adaptive cyberattacks, data manipulation, and the misuse of deepfakes.

2 Faizin Sulistio and Aizahra Daffa Salsabilla, "Pertanggungjawaban Pada Tindak Pidana Yang Dilakukan Agen Otonom Artificial Intelegence," Unes Law Review 6: 5479–90, https://doi.org/10.31933/unesrev.
3 Vincentius Patria Setyawan, "Prospek Pengaturan Kecerdasan Buatan Sebagai Subjek Hukum Pidana Dan Model Pertanggungjawabannya," Sultan Adam: Jurnal Hukum Dan Sosial 3: 115–122, https://doi.org/10.71456/sultan.
4 Linda Ikawati, Sulaiman Sulaiman, and Muhammad Fahri Huseini, "Masa Depan Penegakan Hukum Indonesia: Sistem Peradilan Pidana Berbasis Kecerdasan Buatan (AI)," in Prosiding Seminar Nasional Ilmu Hukum (Pemalang: Asosiasi Peneliti dan Pengajar Ilmu Hukum Indonesia), 1–18, https://doi.org/10.62383/prosemnashuk.
5 Nasman Nasman, Pudji Astuti, and Dita Perwitasari, "Etika Dan Pertanggungjawaban Penggunaan Artificial Intelengence Di Indonesia," Jurnal Hukum Lex Generalis 5: 1–15, https://ojs.com/index.php/JHLG/article/view/622.
He observes that both the Electronic Information and Transactions Law and international frameworks such as the Budapest Convention do not explicitly address AI-based offenses. His recommendations include regulatory reform, capacity building for enforcement authorities, and global legal harmonization.9 Astiti contends that AI cannot be recognized as a legal subject because it lacks the capacity for will or control over its actions. Consequently, she argues that criminal liability should be borne by AI developers and users, in line with the doctrines of strict liability and vicarious liability, without extending legal subject status to AI.

Maharani et al. broaden the discussion to societal protection against AI-related harms, including copyright infringement, plagiarism, and the malicious use of AI. Using a normative legal approach, they call for comprehensive regulations that prioritize the protection of human values.11 Putri et al. stress the urgency of AI implementation in law enforcement while acknowledging barriers such as legal gaps and low public awareness. They view AI as a complementary tool that enhances efficiency but cannot fully replace human decision-making.12 Setiawan and Wijayanto provide a sector-specific analysis of AI in the automotive industry, particularly in autonomous vehicles. They recommend applying strict liability principles to drivers and product liability principles to manufacturers, while advocating regulatory reforms to address AI as a legal subject.

Previous scholarship has explored AI regulation, the possibility of recognizing AI as a legal subject, and the challenges of law enforcement. However, no prior research has comprehensively integrated an analysis of alternative criminal liability models, such as vicarious liability and strict liability, within the Indonesian criminal law framework for AI-related cases. Most studies remain focused on ethical, administrative, or conceptual dimensions without offering operational formulations applicable in practice. This study fills that gap by proposing a realistic model of criminal liability that aligns with national criminal law principles while accommodating the increasingly autonomous capabilities of AI.

6 Beryl Helga Fredella Hibatulloh, "Upaya Penegakan Hukum Terhadap AI (Artificial Intelligence) Sebagai Subjek Hukum Pidana Dalam Perspektif Kriminologi," Taruna Law: Journal of Law and Syariah 3: 87–98, https://doi.org/10.54298/tarunalaw.
7 Ahmad Sofian, "Konsepsi Subjek Hukum Dan Pertanggungjawaban Pidana Artificial Intellegence," Halu Oleo Law Review 9: 13–26, https://doi.org/10.33561/holrev.
8 Natalie Tresye Rondonuwu, Donna Okthalia Setiabudhi, and Carlo A Gerungan, "Pengaturan Penggunaan Kecerdasan Buatan Dalam Tugas Profesional Hakim Di Indonesia," Lex Privatum 15: 1–12, https://ejournal.id/index.php/lexprivatum/article/view/60761.
9 Wahyudi BR, "Tantangan Penegakan Hukum Terhadap Kejahatan Berbasis Teknologi AI," Innovative: Journal Of Social Science Research 5: 3436–3450, https://doi.org/10.31004/innovative.
10 Ni Made Yordha Ayu Astiti, "Strict Liability of Artificial Intelligence: Pertanggungjawaban Kepada Pengatur AI Ataukah AI Yang Diberikan Beban Pertanggungjawaban?," Jurnal Magister Hukum Udayana 12: 962–80, https://doi.org/10.24843/JMHU.
11 Bondan Ayu Maharani et al., "Perlindungan Hukum Masyarakat Dari Dampak Negatif Penggunaan AI," Media Hukum Indonesia 3: 666–73, https://ojs.id/index.php/MHI/article/view/1939.
12 Feby Milenia Yahya Krisna Putri et al., "Thinking the Future Potential of Artificial Intelligence in Law Enforcement," Perspektif Hukum 24: 269–294, https://doi.org/10.30649/ph.
Based on these considerations, this study aims to: (1) analyze the Indonesian criminal law framework concerning liability for actions involving Artificial Intelligence, whether as an auxiliary tool or as an indirect perpetrator; (2) identify weaknesses and gaps in existing provisions within the Criminal Code, the Electronic Information and Transactions Law, and related regulations in addressing AI developments; and (3) examine the relevance and potential applicability of alternative liability models, such as vicarious liability and strict liability, in the context of AI.

13 Ardi Dwi Setiawan and Indung Wijayanto, "Tinjauan Yuridis Pertanggungjawaban Pidana Dalam Tindak Pidana Yang Melibatkan Artificial Intelligence," Yustisi: Jurnal Hukum Dan Hukum Islam 12: 174–187, https://doi.org/10.32832/yustisi.

RESEARCH METHODOLOGY

This study employs a normative legal research approach, focusing on the analysis of positive legal norms, criminal law principles, and legal doctrines relevant to the issue of criminal liability for Artificial Intelligence (AI) in Indonesia. This approach was selected because the topic is closely linked to the existing legal vacuum and the need to formulate a criminal liability model consistent with national criminal law principles.

The research adopts a descriptive-analytical doctrinal method, incorporating three main approaches: (1) a statutory approach, examining the Criminal Code, the Electronic Information and Transactions Law, and AI-related regulations; (2) a conceptual approach, analyzing the doctrines of mens rea, actus reus, vicarious liability, and strict liability; and (3) a comparative approach, exploring models of AI criminal liability in jurisdictions such as the European Union, the United States, Japan, and Singapore.

The study relies exclusively on secondary data, comprising: primary legal materials (national laws and regulations, international treaties, supranational regulations, and court decisions pertaining to AI); secondary legal materials (academic literature, peer-reviewed journal articles, reports from international institutions, and conference proceedings); and tertiary legal materials (legal dictionaries, encyclopedias, and other legal reference works). Data collection was conducted through an extensive literature review of international and national academic databases, alongside the examination of legal documents, including statutory instruments, minutes of parliamentary proceedings, and government reports on AI developments.

Data were analyzed using a normative qualitative method through several stages: inventory and classification of legal materials; legal interpretation employing grammatical, systematic, and teleological approaches; comparative analysis across jurisdictions; and normative synthesis to formulate an AI criminal liability model applicable in Indonesia. The validity of the findings was ensured through source triangulation, peer review of literature, and legal justification, thereby guaranteeing that the analysis aligns with established legal principles.

RESEARCH RESULT AND DISCUSSION

Indonesian Criminal Law Framework on Liability for Acts Involving Artificial Intelligence

This study examines the Indonesian criminal law framework governing liability for acts involving Artificial Intelligence (AI), both when AI functions as an assistive tool and when it operates as an indirect actor. Based on an analysis of primary, secondary, and tertiary legal materials, it is evident that Indonesian criminal law does not specifically recognize AI as an entity capable of bearing criminal liability. The Criminal Code (Law No. 1 of 2023) and the Electronic Information and Transactions Law (UU ITE) contain general provisions applicable to unlawful acts involving technology;
however, they lack explicit mechanisms for assigning liability when the immediate perpetrator is an autonomous system.

Findings from this study indicate that the principle of geen straf zonder schuld ("no punishment without fault") remains a fundamental obstacle to positioning AI as a subject of criminal law. This is due to AI's inability to possess mens rea, the conscious intent required under conventional criminal law. The concept of fault, central to this principle, cannot be attributed to non-human entities that lack free will or the moral capacity to discern right from wrong. Strict adherence to this principle therefore renders it difficult to prosecute AI as a direct perpetrator of criminal acts.

The research further reveals that, in AI-related crimes, criminal liability is generally directed toward the human or legal entity that controls, develops, or benefits from the AI's actions. This approach aligns with the doctrines of vicarious liability and corporate criminal liability, whereby the party exercising control or deriving benefit is held liable. While this model is considered more realistic and consistent with Indonesia's prevailing legal framework, it nonetheless requires normative and procedural reinforcement.

In cases where AI serves solely as an instrumental tool, the existing legal framework is relatively adequate to prosecute human perpetrators who employ AI to commit offenses. However, when AI operates as an indirect perpetrator or functions autonomously beyond human control, the applicable legal norms remain vague and lack a definitive basis. This normative gap creates potential legal loopholes that could be exploited to evade criminal responsibility. The study also finds that Indonesia's positive law does not yet provide mechanisms for algorithm auditability or explainable AI.
The absence of such legal instruments poses significant challenges in establishing causality in AI-related offenses. Without the means to audit or transparently explain AI decision-making processes, law enforcement agencies face substantial difficulties in proving culpability and demonstrating the causal link between AI actions and their legal consequences. This underscores the urgency of comprehensive legal reforms to address the evolving nature of AI technology.

A comparative review of regulatory developments in the European Union, Japan, and Singapore shows that these jurisdictions have begun formulating specific legal frameworks for high-risk AI, including requirements for algorithm transparency and the application of strict liability in certain sectors, such as autonomous transportation.

This study finds that Indonesia's criminal law framework remains largely reactive to advancements in Artificial Intelligence (AI) technology. This reactive stance contributes to legal uncertainty, particularly in cases involving autonomous AI systems capable of producing criminal outcomes. Current regulations fail to address AI's distinctive characteristics, such as its capacity for independent learning and decision-making without direct human intervention, thereby creating a potential legal vacuum that undermines victim protection and the pursuit of justice.

The second finding underscores the centrality of the culpability principle in applying criminal law to AI. The absence of mens rea, or moral consciousness, in AI systems renders traditional fault-based doctrines inapplicable. This gap necessitates the adaptation of existing legal concepts or the development of new principles capable of encompassing non-human entities implicated in criminal acts.

Third, the research highlights that alternative liability models, such as vicarious liability, strict liability, and corporate criminal liability, are more practical than recognizing AI as a subject of criminal law.
These models shift responsibility to individuals or legal entities that control, develop, or benefit from AI. This approach is deemed more realistic, as it places accountability on actors with the legal and moral capacity to be held liable.

The fourth finding addresses the evidentiary challenges posed by the "black box" phenomenon in AI. The opacity of algorithms and AI decision-making processes impedes the establishment of causality in criminal proceedings. In the absence of mechanisms for algorithm auditability or explainable AI, law enforcement faces substantial difficulties in tracing the causal link between AI behavior and criminal acts, thereby undermining the effectiveness of enforcement measures.

Finally, the study identifies the lack of cross-jurisdictional cooperation in prosecuting AI-related crimes as a significant weakness. Given AI's capacity to operate across national borders, the absence of effective international collaboration limits legal protection for victims. This highlights the need for a transnational legal framework to anticipate and address AI-related criminality comprehensively.

These findings are consistent with the positions of Hibatulloh, Pagallo, Sofian, and Wahyudi BR, who argue that AI can act as a perpetrator, victim, or instrument of crime, and that accountability should rest with the party exercising control over the AI.14 They also align with Muladi and Arief, who emphasize fault as a prerequisite for criminal prosecution, thereby excluding the possibility of holding AI directly liable.15 In contrast to studies such as Walton et al., which focus primarily on the technical challenges of proof, this research identifies the absence of clear legal norms as an equally pressing obstacle.16
Moreover, while international scholarship has explored the concept of granting AI the status of an "electronic person," this study concludes that, in the Indonesian context, such recognition is premature given the current limitations in regulatory readiness and legal infrastructure.

The findings suggest that applying Indonesian criminal law to AI-related cases requires adapting legal doctrines while preserving foundational principles such as legality and fault. The vicarious liability model may serve as an interim solution, ensuring that those operating or benefiting from AI bear criminal responsibility. Furthermore, the application of strict liability in high-risk sectors, such as autonomous transportation and AI-driven healthcare, can enhance public protection by eliminating the need to prove fault. However, such implementation must be accompanied by robust technological oversight mechanisms, including mandatory algorithm audits.

From a law enforcement standpoint, technical barriers such as AI's "black box" phenomenon must be addressed through regulations requiring algorithmic transparency and explainability. This approach aligns with emerging international standards, in which algorithmic accountability constitutes a core element of high-risk AI governance.

Accordingly, this study concludes that: (1) AI-related provisions in Indonesian criminal law remain general in scope and are inadequate for addressing specific issues when AI operates as an indirect actor; (2) adaptation of criminal law doctrines is essential, particularly through the broader application of vicarious liability and strict liability, to close existing accountability gaps; (3) strengthening technical capacities and fostering international cooperation are critical to effective enforcement, given the transnational reach and algorithmic complexity of AI systems; and (4) the development of sector-specific AI regulations is an urgent priority, especially in domains that present elevated risks to public safety and data security.

Weaknesses and Gaps in the Criminal Code, the Electronic Information and Transactions Law, and Related Regulations Concerning the Development of Artificial Intelligence

This study seeks to identify weaknesses and gaps in the Criminal Code (KUHP), the Electronic Information and Transactions Law (UU ITE), and other related regulations with respect to the development of Artificial Intelligence (AI) in Indonesia. It further examines enforcement challenges, including technical barriers, jurisdictional constraints, and institutional capacity, to formulate recommendations for strengthening the regulatory framework in anticipation of future legal needs.

An analysis of the Criminal Code, the ITE Law, and several sectoral regulations reveals that Indonesia's current legal framework remains firmly rooted in a human-centric offender paradigm. The Criminal Code adheres to the principle of geen straf zonder schuld ("no punishment without fault"), which requires the perpetrator to possess moral awareness (mens rea).

14 Hibatulloh, "Upaya Penegakan Hukum Terhadap AI (Artificial Intelligence) Sebagai Subjek Hukum Pidana Dalam Perspektif Kriminologi"; Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts, Law, Governance and Technology Series (Dordrecht: Springer, 2013), https://doi.org/10.1007/978-94-007-6564-1; Sofian, "Konsepsi Subjek Hukum Dan Pertanggungjawaban Pidana Artificial Intellegence"; Wahyudi BR, "Tantangan Penegakan Hukum Terhadap Kejahatan Berbasis Teknologi AI."
15 Muladi Muladi and Barda Nawawi Arief, Teori-Teori Dan Kebijakan Pidana (Bandung: Alumni).
16 Douglas Walton, Giovanni Sartor, and Fabrizio Macagno, "An Argumentation Framework for Contested Cases of Statutory Interpretation," Artificial Intelligence and Law 24: 51–91, https://doi.org/10.1007/s10506-016-9179-0.
In the case of AI, the absence of such awareness precludes the direct attribution of criminal liability. Although the ITE Law addresses unlawful acts in the electronic domain, it does not explicitly account for the autonomous or adaptive characteristics of AI. Similarly, sector-specific regulations, such as those governing traffic, healthcare, and personal data protection, provide only limited provisions for AI-specific legal responsibilities.

The research identifies three prevailing patterns of liability: (1) user or operator liability, where individuals who operate AI systems negligently or unlawfully are held accountable; (2) manufacturer or developer liability, for design defects or negligent system maintenance; and (3) corporate liability, where ownership or beneficial use of AI invokes the doctrine of corporate criminal liability. However, these approaches face substantial evidentiary challenges due to the "black box" nature of AI algorithms, which obstructs traceability and transparency in decision-making processes.

Five principal findings emerge from this study. First, the Indonesian criminal law framework remains reactive to AI developments, failing to provide legal certainty in cases involving autonomous AI with criminal consequences. Second, the principle of fault presents a fundamental barrier in addressing non-human entities, necessitating doctrinal adaptation. Third, alternative liability models, such as vicarious liability, strict liability, and corporate criminal liability, are more pragmatic than treating AI as a direct perpetrator. Fourth, technical limitations in evidentiary processes, particularly the opacity of AI algorithms, significantly impede the establishment of causality in criminal proceedings. Fifth, the absence of a cross-jurisdictional law enforcement framework for AI-related crimes undermines victim protection.
These results are consistent with the findings of Astiti, Kan, and Setyawan, who argue that classical criminal law doctrines are inadequate for directly prosecuting AI entities given the absence of mens rea.17 They also align with Balasubramaniam et al., who stress the necessity of incorporating explainable AI principles and algorithm auditability into law enforcement mechanisms.18 However, unlike prior research that primarily addresses AI from data protection and ethical perspectives, this study emphasizes the criminal law dimension, highlighting normative gaps within the Criminal Code and the ITE Law and their intersection with international jurisdictional issues. Moreover, this research advances the existing discourse by integrating a multilayered liability framework, assigning responsibility concurrently to users, producers, and corporations, which remains underexplored in the Indonesian legal context. Such an approach is particularly relevant given the complex, multi-actor structure of the AI ecosystem and the numerous stakeholders involved in the technology's lifecycle.

Table 1. Challenges and Strategic Solutions for Addressing Criminal Liability of Artificial Intelligence (AI) in Indonesia

1. Technical: Algorithmic Transparency ("Black Box")
Challenge: AI systems operate using complex algorithms that are often opaque and difficult to audit, creating significant challenges in proving causality and establishing fault.
Strategic Solution: Mandate algorithm audits (algorithm auditability) and incorporate the principle of explainable AI into binding regulations.

2. Legal: Absence of Mens Rea
Challenge: AI lacks consciousness and intent, making it incompatible with the traditional principle of nulla poena sine culpa ("no punishment without fault").
Strategic Solution: Develop alternative liability frameworks, such as vicarious liability, strict liability, or corporate criminal liability.

3. Cross-Border Jurisdiction
Challenge: AI can be controlled remotely from foreign jurisdictions, with data and servers distributed across multiple countries, complicating enforcement.
Strategic Solution: Enhance international cooperation, establish extradition treaties, and implement mutual legal assistance agreements specifically addressing AI-related crimes.

4. Law Enforcement Capacity
Challenge: Limited technical expertise among law enforcement personnel in understanding and analyzing AI-generated digital evidence.
Strategic Solution: Provide intensive digital forensics training, foster collaboration with technology experts, and create a dedicated investigative unit for AI-related crimes.

5. Data Leakage & Privacy
Challenge: AI's ability to process vast amounts of personal data increases the risk of privacy breaches.
Strategic Solution: Enforce the Personal Data Protection Law rigorously and require AI developers to obtain data security certifications.

6. Potential Abuse of AI
Challenge: AI can be exploited for malicious purposes such as deepfakes, automated phishing, and other malicious activities.
Strategic Solution: Establish a legally binding list of prohibited AI applications and implement strict oversight mechanisms for high-risk AI systems.

17 Astiti, "Strict Liability of Artificial Intelligence: Pertanggungjawaban Kepada Pengatur AI Ataukah AI Yang Diberikan Beban Pertanggungjawaban?"; Celal Hakan Kan, "Criminal Liability of Artificial Intelligence from the Perspective of Criminal Law: An Evaluation in the Context of the General Theory of Crime and Fundamental Principles," International Journal of Eurasia Social Sciences 15, no.: 276–313, https://doi.org/10.35826/ijoess; Setyawan, "Prospek Pengaturan Kecerdasan Buatan Sebagai Subjek Hukum Pidana Dan Model Pertanggungjawabannya."
18 Nagadivya Balasubramaniam et al., "Transparency and Explainability of AI Systems: From Ethical Guidelines to Requirements," Information and Software Technology 159: 1–15, https://doi.org/10.1016/j.

The findings indicate that the primary weakness of Indonesia's regulatory framework lies not only in the absence of explicit provisions on Artificial Intelligence (AI) but also in the incompatibility between prevailing criminal law doctrines and the autonomous, adaptive nature of AI. The principle of geen straf zonder schuld ("no punishment without fault"), a cornerstone of criminal law, presupposes free will and moral awareness, attributes that AI inherently lacks. Consequently, liability should be assigned to those who control and derive benefit from AI, consistent with the principle of risk allocation in modern law.

The "black box" phenomenon in AI compounds these challenges by obscuring critical information about decision-making processes. In the absence of legally mandated algorithmic audit mechanisms and the adoption of explainable AI principles, law enforcement agencies will face substantial obstacles in establishing causality and fault. This is not solely a technical limitation; it also implicates fundamental legal rights, including a defendant's right to be informed of the charges and evidence against them, as well as a victim's right to justice.

The lack of cross-jurisdictional cooperation presents further practical difficulties. Given AI's capacity to operate globally, its servers and supporting infrastructure are often located outside the jurisdiction where criminal conduct occurs. Without a dedicated mutual legal assistance framework for AI-related offenses, enforcement efforts, particularly in cases of AI-enabled cybercrimes such as deepfake fraud, automated phishing, and digital market manipulation, will be significantly hindered.
This study confirms that the Criminal Code (KUHP), the Electronic Information and Transactions Law (ITE Law), and related regulations are insufficient to address the normative and technical challenges posed by AI. The current legal framework is reactive rather than anticipatory and fails to provide legal certainty in cases involving autonomous AI. Traditional doctrines requiring mens rea must be adapted through the incorporation of alternative, layered liability models, including vicarious liability, strict liability, and corporate criminal liability. Comprehensive regulatory reforms are urgently required, encompassing mandatory algorithm audits, the integration of explainable AI principles, provisions for cross-jurisdictional liability, and enhanced capacity-building in digital forensics for law enforcement personnel. Without these measures, regulatory gaps will persist, enabling offenders to evade accountability. Accordingly, the development of a robust, adaptive, and forward-looking regulatory framework for AI is imperative to ensure that Indonesia's legal system remains responsive in the era of autonomous technology.

Relevance and Potential Application of Alternative Liability Models, Such as Vicarious Liability and Strict Liability, in the Context of AI

This study examines the relevance and potential application of alternative liability models, particularly vicarious liability and strict liability, in the context of the use and operation of Artificial Intelligence (AI) in Indonesia. The inquiry is prompted by the absence of explicit statutory provisions on AI, with current legal practice relying on the interpretation of general criminal norms contained in the Criminal Code (KUHP), the Electronic Information and Transactions Law (ITE Law), the Personal Data Protection Law, and the Road Traffic and Transportation Law.
The objective is to provide a comprehensive analysis of how these alternative liability models can be integrated into the national legal framework to ensure legal certainty, safeguard public interests, and promote accountability among AI-related businesses and developers. The findings indicate that Indonesia's legal system currently lacks a dedicated instrument establishing criminal liability for damages or offenses caused by AI. Analysis of domestic legal documents and international case studies reveals the following:

Vicarious liability has potential applicability where AI functions within a legal relationship in which a developer, operator, or corporation acts as the "principal" and the AI system functions as the "agent." Under this model, criminal or civil liability is imposed on the party exercising control and authority over the AI, even when the unlawful act is performed autonomously by the AI system.

Strict liability is particularly relevant in contexts where AI is categorized as high-risk technology, such as autonomous vehicles, AI-based medical diagnostic systems, or biometric data processing algorithms. This model imposes liability without requiring proof of fault, thereby expediting dispute resolution and providing victims with greater legal certainty.

Existing legislation (the Criminal Code, the ITE Law, the Personal Data Protection Law, and the Road Traffic and Transportation Law) remains fragmented and requires adaptation to address AI-specific characteristics. For instance, while the ITE Law may be applied to prosecute AI-driven data breaches, it does not address the criminal liability of parties operating autonomous AI.
Comparative practices, such as the European Union's AI Act and Product Liability Directive, illustrate that combining strict liability for high-risk activities with vicarious liability for employment or contractual relationships can provide a balanced model that fosters both innovation and legal protection. The principal conclusion of this study is that the application of vicarious liability and strict liability offers a realistic and effective means of addressing the legal vacuum surrounding AI-related harm or criminal conduct in Indonesia. Vicarious liability serves to protect victims in situations where AI operates within an employment or contractual framework under clear instructions, while strict liability ensures maximum protection in high-risk AI applications that operate autonomously.

These findings are consistent with prior scholarship, including Astiti, Pagallo, and Putri et al., which advocates the use of vicarious liability to secure the accountability of AI controllers and strict liability for technologies with significant potential for harm. However, this study contributes a uniquely Indonesian perspective, emphasizing the reliance on general statutory instruments, such as the KUHP and the ITE Law, and the absence of dedicated AI legislation. Unlike European jurisdictions, which are moving toward harmonized AI regulations, Indonesia remains in the preliminary stages of policy development; thus, the recommendations in this study prioritize the adaptation of existing laws as an interim measure prior to the enactment of specialized AI legislation.

From a legal standpoint, the findings indicate that the adoption of vicarious liability and strict liability is not only normatively viable but also urgently needed to address the legal challenges posed by the distinctive characteristics of AI, namely decision-making autonomy, behavioral unpredictability, and the inherent difficulty of establishing "intent" or mens rea.
In the absence of these alternative liability models, victims of AI-related harm are likely to face substantial evidentiary obstacles, particularly in proving fault on the part of a human perpetrator. The application of strict liability to high-risk AI technologies would compel businesses and AI developers to comply with rigorous security standards and conduct regular audit procedures, as highlighted in this study's findings. Conversely, implementing vicarious liability would incentivize companies to closely monitor and control the use of AI by employees or third parties operating under contractual arrangements.

The results underscore the need for two strategic measures in Indonesia. First, existing laws, including the Electronic Information and Transactions Law (ITE Law), the Personal Data Protection Law (PDP Law), and the Road Traffic and Transportation Law (LLAJ Law), should be amended to incorporate explicit provisions on AI-related liability under both vicarious and strict liability frameworks. Second, a dedicated AI statute should be enacted, establishing clear security requirements, mandatory audit mechanisms, and accountability procedures, particularly for high-risk AI systems. Adopting this alternative liability framework would help Indonesia avoid a legal vacuum that undermines victim protection, while ensuring that AI innovation continues within a well-defined and equitable legal structure.

Astiti, "Strict Liability of Artificial Intelligence: Pertanggungjawaban Kepada Pengatur AI Ataukah AI Yang Diberikan Beban Pertanggungjawaban?"; Pagallo, The Laws of Robots: Crimes, Contracts, and Torts; Putri et al., "Thinking the Future Potential of Artificial Intelligence in Law Enforcement."
In the broader context of technological globalization, this step would also align Indonesia with jurisdictions that have already enacted comprehensive AI legislation.

CONCLUSION

This study seeks to analyze Indonesia's criminal law framework governing liability for actions involving Artificial Intelligence (AI), both when AI functions as an assistive tool and when it operates as an indirect perpetrator. It further aims to identify weaknesses and regulatory gaps in the Criminal Code, the Electronic Information and Transactions Law (ITE Law), and related legislation, as well as to assess the relevance of adopting alternative liability models, particularly vicarious liability and strict liability, in the context of AI.

The findings indicate that the current Indonesian criminal law framework remains centered on the concept of a human actor (natural person), rendering it ill-suited to address the autonomous and self-learning characteristics of AI. Neither the Criminal Code nor the ITE Law explicitly provides mechanisms for attributing fault in cases where AI acts without direct human instruction. This gap risks creating legal uncertainty in establishing key elements such as mens rea and causation. Further analysis confirms that alternative liability models offer potential solutions. Vicarious liability can be applied to assign criminal responsibility to parties exercising control over or deriving benefit from AI, while strict liability is particularly relevant for high-risk AI applications, ensuring that victims are not burdened with the challenge of proving fault. These findings offer practical guidance for policymakers and law
enforcement authorities in formulating legal norms that are responsive to technological developments. The principal limitation of this research lies in its normative–conceptual scope, as it does not incorporate empirical testing through concrete case studies within Indonesia. Consequently, future studies should integrate empirical methodologies, including case simulations and comparative analyses with jurisdictions that have already implemented AI-specific regulations. As a policy recommendation, revisions to the Criminal Code and the ITE Law, or the development of dedicated AI legislation incorporating a hybrid accountability framework, ensuring robust victim protection, and fostering responsible innovation, are urgently needed.

REFERENCES