Document Type: Scientific research paper
Authors
1. Master's student in Criminal Law and Criminology, Lahijan Branch, Islamic Azad University, Lahijan, Iran.
2. Islamic Azad University, Lahijan Branch, Iran.
Abstract
The rapid advancement of artificial intelligence, particularly facial recognition systems, has profoundly transformed traditional mechanisms of crime detection and evidentiary assessment within criminal justice systems. These systems, which rely on biometric data processing and complex machine-learning algorithms, ostensibly enhance the efficiency, accuracy, and speed of policing and judicial decision-making. Nevertheless, owing to their probabilistic, opaque, and sometimes biased nature, they may generate erroneous identifications that directly threaten fundamental rights, including due process, privacy, and the presumption of innocence. This raises a central question: when a wrongful arrest, prosecution, or conviction is caused by an algorithmic error, which actor within the criminal justice system bears criminal liability: the police officer, the investigative authority, the judicial decision-maker, or even the system's designers and developers? Adopting a comparative methodology, this study examines the approaches of the United States and the European Union to the problem of algorithmic error and criminal accountability. The findings indicate that the United States continues to adhere to the classical principle of individual criminal responsibility, whereby technological tools serve merely as auxiliary inputs and human decision-makers remain solely accountable. U.S. courts generally regard algorithmic outputs as probabilistic indicators rather than conclusive evidence. In contrast, the European Union, through its risk-based Artificial Intelligence Act (AI Act), imposes stringent obligations of algorithmic transparency, documentation, human oversight, and regulated deployment of high-risk systems such as facial recognition, alongside a distributed model of accountability across the technological supply chain.
In Iran, despite the absence of explicit statutory provisions governing AI-generated evidence, general principles embedded in the Islamic Penal Code, such as the personal nature of criminal responsibility (Article 140), evidentiary standards, and the rules of causation, provide implicit foundations for assigning liability to law-enforcement actors. Drawing on these comparative insights and the distinctive challenges posed by high-risk AI systems, this article proposes a "chain liability model," under which criminal responsibility is allocated across designers, developers, supervisory bodies, law-enforcement officials, and judicial authorities. This model mitigates institutional responsibility-avoidance, reinforces criminal justice safeguards, and ensures a balanced integration of technological efficiency with the protection of citizens' rights.