Volume 60, Issue 4 e70088
RESEARCH REPORT

Stench of Errors or the Shine of Potential: The Challenge of (Ir)Responsible Use of ChatGPT in Speech-Language Pathology

Mytsyk Hanna (Corresponding Author)

Department of Applied Psychology and Speech Therapy, Berdyansk State Pedagogical University, Zaporizhzhia, Ukraine

Suchikova Yana

Department of Applied Psychology and Speech Therapy, Berdyansk State Pedagogical University, Zaporizhzhia, Ukraine
First published: 08 July 2025

ABSTRACT

Background

Integrating large language models (LLMs), such as ChatGPT, into speech-language pathology (SLP) presents promising opportunities and notable challenges. While these tools can support diagnostics, streamline documentation and assist in therapy planning, they also raise concerns related to misinformation, cultural insensitivity, overreliance and ethical ambiguity. Current discourse often centres on technological capabilities, overlooking how future speech-language pathologists (SLPs) are being prepared to use such tools responsibly.

Aims

This paper examines the pedagogical, ethical and professional implications of integrating LLMs into SLP. It emphasizes the need to cultivate professional responsibility, ethical awareness and critical engagement amongst student SLPs, ensuring that such technologies are applied thoughtfully, appropriately and in accordance with evidence-based and contextually relevant therapeutic standards.

Methods

The paper combines a review of recent interdisciplinary research with reflective insights from academic practice. It presents documented cases of student SLPs’ overreliance on ChatGPT, analyses common pitfalls through a structured table of examples and synthesizes perspectives from SLP, education, data ethics and linguistics.

Main Contribution

Reflective examples presented in the article illustrate challenges that arise when LLMs are used without sufficient oversight or a clear understanding of their limitations. Rather than questioning the value of LLMs, these cases emphasize the importance of ensuring that student SLPs are guided towards thoughtful, ethical and clinically sound use. To support this, the paper offers a set of pedagogical recommendations—including ethics integration, reflective assignments, case-based learning, peer critique and interdisciplinary collaboration—aimed at embedding critical engagement with tools such as ChatGPT into professional training.

Conclusions

LLMs are becoming an integral part of SLP. Their impact, however, will depend on how effectively student SLPs are trained to balance technological innovation with professional responsibility. Higher education institutions (HEIs) must take an active role in embedding responsible engagement with LLMs into pre-service training and SLP curricula. Through intentional and early preparation, the field can move beyond the risks associated with automation and towards a future shaped by reflective, informed and ethically grounded use of generative tools.

WHAT THIS PAPER ADDS

What is already known on this subject
  • Large language models (LLMs), including ChatGPT, are increasingly used in speech-language pathology (SLP) for tasks such as diagnostic support, therapy material generation and documentation. While prior research acknowledges both their utility and risks, limited attention has been paid to how student SLPs engage with these tools and how educational institutions prepare them for responsible use.
What this paper adds to existing knowledge
  • This paper identifies key challenges in how student SLPs interact with ChatGPT, including overreliance, lack of critical evaluation and ethical blind spots. It emphasizes the role of higher education in developing critical AI literacy aligned with clinical and ethical standards. The study offers specific, practice-oriented recommendations for embedding responsibility-focused engagement with LLMs into SLP curricula. These include ethics integration, reflective assignments, peer feedback and interdisciplinary dialogue.
What are the potential or actual clinical implications of this work?
  • Without structured guidance, future SLPs may misuse LLMs in ways that compromise diagnostic accuracy, cultural appropriateness or therapeutic quality. Embedding reflective, ethics-focused training into SLP curricula can reduce these risks and ensure that generative tools like ChatGPT support rather than undermine clinical decision-making and patient care.

Conflicts of Interest

The authors declare no conflict of interest. They have no financial, personal or professional relationships that could be perceived as influencing the research presented in this article.

Data Availability Statement

No datasets were generated during the current study.
