Construction of Corporate Criminal Liability for Artificial Intelligence Algorithm Malpractice in the Health Sector
Keywords: Artificial Intelligence, Health Law, Algorithmic Malpractice, Strict Liability

Abstract
The integration of Artificial Intelligence (AI) into healthcare has shifted the paradigm of medical malpractice from mere human error to the potential systemic failure of algorithms. However, the current Indonesian criminal law framework still places responsibility on medical personnel, creating a normative vacuum with respect to technology corporations when fatal misdiagnosis occurs due to system bias. This study aims to construct an ideal model of corporate criminal liability for the phenomenon of algorithmic malpractice. Using normative juridical research methods with statutory, conceptual, and comparative approaches, this study examines the Health Law and the National Criminal Code. The results show that the conventional doctrine of subjective fault (mens rea) is difficult to apply to the characteristics of autonomous "black box" algorithms. Therefore, this study recommends the limited application of the doctrine of strict liability. Under this construction, technology providers are positioned as legal subjects criminally responsible for defects in algorithmic products that cause loss of life or injury to patients, without the need for prosecutors to prove intent on the part of directors. The study concludes that shifting the burden of responsibility from individual physicians to corporate developers is essential to ensure legal certainty, encourage high product safety standards, and provide substantive protection for patients in the digital age.