LITERATURE REVIEW ON THE DOUBLE-EDGED SWORD OF AI IN MENTAL HEALTH: A DEEP DIVE INTO CHATGPT'S CAPABILITIES AND LIMITATIONS

  • Paul Arjanto Faculty of Education, Universitas Pattimura, Indonesia
  • Feibry Feronika Wiwenly Senduk Faculty of Economics and Business, Universitas Negeri Manado, Indonesia
Keywords: ChatGPT, mental health, artificial intelligence, ethical considerations, emotional insight

Abstract

Background: This paper examines the growing role of artificial intelligence in mental health care, with a focus on OpenAI's ChatGPT. It investigates the shifting dynamics of mental health services, analyzing ChatGPT's role, benefits, drawbacks, and ethical complexities. Purpose: The objective is to assess ChatGPT's effectiveness as a mental health tool, highlighting its strengths, limitations, and ethical issues, and to understand how AI support can be balanced with the vital human element of mental health care. Methods: A comprehensive review of seven articles retrieved from the Scopus database, all published in 2023. Results: ChatGPT proves useful as an initial mental health support tool, offering immediate access to assistance. However, it falls short of the emotional depth that human health professionals provide. Key ethical concerns include data privacy and accountability. Conclusion: The study recommends a balanced approach, positioning ChatGPT as an adjunct to, rather than a replacement for, conventional mental health services. Effective use of ChatGPT in mental health care requires strict ethical guidelines and oversight measures to preserve the crucial human element in this field.

References

Aditama, M. H. R., Wantah, M. E., Laras, P. B., & Suhardita, K. (2023). Epistemic justice of ChatGPT in first aid for child–adolescent mental health issues. Journal of Public Health, 45(4), pp. e816-e817. doi: 10.1093/pubmed/fdad098

Aminah, S., Hidayah, N., & Ramli, M. (2023). Considering ChatGPT to be the first aid for young adults on mental health issues. Journal of Public Health, 45(3), pp. e615–e616. doi: 10.1093/pubmed/fdad065

Amram, B., Klempner, U., Shturman, S., & Greenbaum, D. (2023). Therapists or Replicants? Ethical, Legal, and Social Considerations for Using ChatGPT in Therapy. The American Journal of Bioethics, 23(5), pp. 40–42. doi: 10.1080/15265161.2023.2191022

Arjanto, P., Nahdiyah, U., & Utami, M. S. (2023). The intersect of metaverse, education and mental health: An in-depth analysis. Journal of Public Health, 46(1), fdad162. doi: 10.1093/pubmed/fdad162

Branum, C., & Schiavenato, M. (2023). Can ChatGPT Accurately Answer a PICOT Question? Assessing AI Response to a Clinical Question. Nurse Educator, 48(5), pp. 231–233. doi: 10.1097/NNE.0000000000001436

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. arXiv, pp. 1–75. doi: 10.48550/ARXIV.2005.14165

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), pp. 183-186. doi: 10.1126/science.aal4230

Chagas, B. A., Pagano, A. S., Prates, R. O., Praes, E. C., Ferreguetti, K., Vaz, H., Reis, Z. S. N., Ribeiro, L. B., Ribeiro, A. L. P., Pedroso, T. M., Beleigoli, A., Oliveira, C. R. A., & Marcolino, M. S. (2023). Evaluating User Experience With a Chatbot Designed as a Public Health Response to the COVID-19 Pandemic in Brazil: Mixed Methods Study. JMIR Human Factors, 10, pp. e43135. doi: 10.2196/43135

Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), pp. 311–313. doi: 10.1038/538311a

De Bellis, N. (2009). Bibliometrics and citation analysis: From the science citation index to cybermetrics. Lanham: Scarecrow Press.

Dell, C. (2023). Letter to the editor in response to Samuel Woodnutt, Chris Allen, Jasmine Snowden, Matt Flynn, Simon Hall, Paula Libberton, ChatGPT, Francesca Purvis paper titled: Could artificial intelligence write mental health nursing care plans? Journal of Psychiatric and Mental Health Nursing, jpm.12981. doi: 10.1111/jpm.12981

Elkhatat, A. M. (2023). Evaluating the authenticity of ChatGPT responses: A study on text-matching capabilities. International Journal for Educational Integrity, 19(1), pp. 15. doi: 10.1007/s40979-023-00137-0

Elyoseph, Z., Hadar-Shoval, D., Asraf, K., & Lvovsky, M. (2023). ChatGPT outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14, pp. 1199058. doi: 10.3389/fpsyg.2023.1199058

Ennab, F. (2023). Teaching clinical empathy skills in medical education: Can ChatGPT assist the educator? Medical Teacher, 45(12), pp. 1. doi: 10.1080/0142159X.2023.2247144

Farhat, F. (2023). ChatGPT as a Complementary Mental Health Resource: A Boon or a Bane. Annals of Biomedical Engineering, advance online publication, 21 July 2023. doi: 10.1007/s10439-023-03326-7

Frankish, K., & Ramsey, W. M. (Eds.). (2014). The Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press.

Haman, M., Školník, M., & Šubrt, T. (2023). Leveraging ChatGPT for Human Behavior Assessment: Potential Implications for Mental Health Care. Annals of Biomedical Engineering, 51(11), pp. 2362–2364. doi: 10.1007/s10439-023-03269-z

He, Y., Liang, K., Han, B., & Chi, X. (2023). A digital ally: The potential roles of ChatGPT in mental health services. Asian Journal of Psychiatry, 88, pp. 103726. doi: 10.1016/j.ajp.2023.103726

Huang, C., Chen, L., Huang, H., Cai, Q., Lin, R., Wu, X., Zhuang, Y., & Jiang, Z. (2023). Evaluate the accuracy of ChatGPT’s responses to diabetes questions and misconceptions. Journal of Translational Medicine, 21(1), pp. 502. doi: 10.1186/s12967-023-04354-6

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), pp. 15–25. doi: 10.1016/j.bushor.2018.08.004

Kettle, L., & Lee, Y.-C. (2023). User Experiences of Well-Being Chatbots. Human Factors: The Journal of the Human Factors and Ergonomics Society, 001872082311624. doi: 10.1177/00187208231162453

Koptyra, B., Ngo, A., Radliński, Ł., & Kocoń, J. (2023). CLARIN-Emo: Training Emotion Recognition Models Using Human Annotation and ChatGPT. In J. Mikyška, C. De Mulatier, M. Paszynski, V. V. Krzhizhanovskaya, J. J. Dongarra, & P. M. A. Sloot (Eds.), Computational Science – ICCS 2023 Springer Nature Switzerland, 14073, pp. 365–379. doi: 10.1007/978-3-031-35995-8_26

Lee, E. E., Torous, J., De Choudhury, M., Depp, C. A., Graham, S. A., Kim, H.-C., Paulus, M. P., Krystal, J. H., & Jeste, D. V. (2021). Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 6(9), pp. 856–864. doi: 10.1016/j.bpsc.2021.02.001

Malgaroli, M., Hull, T. D., Zech, J. M., & Althoff, T. (2023). Natural language processing for mental health interventions: A systematic review and research framework. Translational Psychiatry, 13(1), pp. 309. doi: 10.1038/s41398-023-02592-2

Minerva, F., & Giubilini, A. (2023). Is AI the Future of Mental Healthcare? Topoi, 42(3), pp. 809–817. doi: 10.1007/s11245-023-09932-3

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, pp. 501–507. doi: 10.1038/s42256-019-0114-4

Mohammad Amini, M., Jesus, M., Fanaei Sheikholeslami, D., Alves, P., Hassanzadeh Benam, A., & Hariri, F. (2023). Artificial Intelligence Ethics and Challenges in Healthcare Applications: A Comprehensive Review in the Context of the European GDPR Mandate. Machine Learning and Knowledge Extraction, 5(3), pp. 1023–1035. doi: 10.3390/make5030053

Monteith, S., Glenn, T., Geddes, J., Whybrow, P. C., Achtyes, E., & Bauer, M. (2022). Expectations for Artificial Intelligence (AI) in Psychiatry. Current Psychiatry Reports, 24(11), pp. 709–721. doi: 10.1007/s11920-022-01378-5

Picard, R. W. (2000). Affective computing (1st paperback ed.). Cambridge, MA: MIT Press.

Rajaei, A. (2023). Teaching in the Age of AI/ChatGPT in Mental-Health-Related Fields. The Family Journal, 10664807231209721. doi: 10.1177/10664807231209721

Schwab, K. (2017). The Fourth Industrial Revolution. Penguin.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), pp. 417–424. doi: 10.1017/S0140525X00005756

Singh, O. (2023). Artificial intelligence in the era of ChatGPT - Opportunities and challenges in mental health care. Indian Journal of Psychiatry, 65(3), pp. 297. doi: 10.4103/indianjpsychiatry.indianjpsychiatry_112_23

Solanki, P., Grundy, J., & Hussain, W. (2023). Operationalising ethics in artificial intelligence for healthcare: A framework for AI developers. AI and Ethics, 3(1), pp. 223–240. doi: 10.1007/s43681-022-00195-z

Taurah, S. P., Bhoyedhur, J., & Sungkur, R. K. (2020). Emotion-Based Adaptive Learning Systems. In S. Boumerdassi, É. Renault, & P. Mühlethaler (Eds.), Machine Learning for Networking, Springer International Publishing, 12081, pp. 273–286. doi: 10.1007/978-3-030-45778-5_18

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, LIX(236), pp. 433–460. doi: 10.1093/mind/LIX.236.433

Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape. The Canadian Journal of Psychiatry, 64(7), pp. 456–464. doi: 10.1177/0706743719828977

Wang, P. S., Angermeyer, M., Borges, G., Bruffaerts, R., Tat Chiu, W., DE Girolamo, G., Fayyad, J., Gureje, O., Haro, J. M., Huang, Y., Kessler, R. C., Kovess, V., Levinson, D., Nakane, Y., Oakley Brown, M. A., Ormel, J. H., Posada-Villa, J., Aguilar-Gaxiola, S., Alonso, J., … Ustün, T. B. (2007). Delay and failure in treatment seeking after first onset of mental disorders in the World Health Organization’s World Mental Health Survey Initiative. World Psychiatry: Official Journal of the World Psychiatric Association (WPA), 6(3), pp. 177–185.

Zhou, S., Zhao, J., & Zhang, L. (2022). Application of Artificial Intelligence on Psychological Interventions and Diagnosis: An Overview. Frontiers in Psychiatry, 13(March), pp. 1–7. doi: 10.3389/fpsyt.2022.811665

Published: 2024-04-05
Section: Articles