Exploring The Legal Subjectivity of Artificial Intelligence in Incitement to Suicide

https://doi.org/10.29303/ius.v12i1.1369

Authors

Cao Zhaoxun, Ramalinggam Rajamanickam, Nur Khalidah Dahlan

Keywords:

Artificial Intelligence, Incitement, Suicide, Criminal Law, Subjectivity

Abstract

The development of conversational artificial intelligence (AI) has brought not only technological innovation but also new legal problems. The phenomenon of AI-induced suicide highlights the multifaceted legislative demands that AI poses within the criminal law, making in-depth research into whether suicide victims, AI, and AI regulatory entities are suitable subjects of liability particularly necessary. Through literature analysis and comparative legal analysis, this article aims to provide theoretical support for delineating legal liability in the context of AI incitement to suicide. Specifically, the article conducts a thorough investigation and comprehensive analysis of the relevant legal literature in China and internationally, with the objective of clarifying the legal positions and practical challenges surrounding AI incitement to suicide. On that basis, the article examines whether AI should be considered a legal subject and how, in different contexts, suicide victims and AI regulatory entities should share the corresponding responsibility. The findings indicate that AI should not be regarded as an independent legal subject. Based on the theories of the victim's self-entrapment in risk and of omission in criminal law, suicide victims or AI regulatory entities should, depending on the circumstances, bear the corresponding responsibility for incidents of incitement to suicide. By examining the legal liability issues raised by AI incitement to suicide, the article offers a theoretical basis for comprehensive AI legislation in the future and demonstrates theoretical innovation; its exploration of criminal legal regulation also contributes to the construction of a more comprehensive and rational legal framework for AI.

Author Biographies

Cao Zhaoxun, Jingdezhen Vocational University of Art

https://orcid.org/0009-0004-4277-7157

Ramalinggam Rajamanickam, Universiti Kebangsaan Malaysia (UKM), Malaysia

Nur Khalidah Dahlan, Universiti Kebangsaan Malaysia (UKM)

Published

2024-04-26

How to Cite

Zhaoxun, C., Rajamanickam, R., & Dahlan, N. K. (2024). Exploring The Legal Subjectivity of Artificial Intelligence in Incitement to Suicide. Jurnal IUS Kajian Hukum Dan Keadilan, 12(1), 31–42. https://doi.org/10.29303/ius.v12i1.1369