Bibliographic Details
| Title: | AI meets psychology: an exploratory study of large language models' competence in psychotherapy contexts |
| Authors: | Sian Tan, Kean, Cervin, Matti, Leman, Patrick, Nielsen, Kristopher, Vasantha Kumar, Prashanth, Medvedev, Oleg N. |
| Contributors: | Lund University, Faculty of Medicine, Department of Clinical Sciences, Lund, Section IV, Child and Adolescent Psychiatry, Innovations in pediatric mental health, Originator |
| Source: | Journal of Psychology and AI. 1(1):1-17 |
| Subject Terms: | Medical and Health Sciences, Clinical Medicine, Psychiatry, Natural Sciences, Computer and Information Sciences, Artificial Intelligence |
| Description: | The increasing prevalence of mental health problems, coupled with limited access to professional support, has prompted exploration of technological solutions. Large Language Models (LLMs) represent a potential tool to address these challenges, yet their capabilities in psychotherapeutic contexts remain unclear. This study examined the competencies of current LLMs in psychotherapy-related tasks, including alignment with evidence-informed clinical standards in case formulation, treatment planning, and implementation. Using an exploratory mixed-methods design, we presented three clinical cases (depression, anxiety, stress) and 12 therapy-related prompts to seven LLMs: ChatGPT-4o, ChatGPT-4, Claude 3.5 Sonnet, Claude 3 Opus, Meta Llama 3.1, Google Gemini 1.5 Pro, and Microsoft Copilot. Responses were evaluated by five experienced clinical psychologists using quantitative ratings and qualitative feedback. No single model consistently produced high-quality responses across all tasks, though different models showed distinct strengths. Models performed better on structured tasks, such as determining session length and discussing goal-setting, but struggled with integrative clinical reasoning and treatment implementation. Higher-rated responses demonstrated clinical humility, maintained therapeutic boundaries, and recognised therapy as collaborative. Current LLMs are more promising as supportive tools for clinicians than as standalone therapeutic applications. This paper highlights key areas of development needed to enhance LLMs' clinical reasoning abilities for effective use in mental health contexts. |
| Access URL: | https://doi.org/10.1080/29974100.2025.2545258 |
| Database: | SwePub |