Journal Article DKFZ-2025-01692

Assessing ChatGPT's Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis.


2025
JMIR Publications, Toronto

JMIR Cancer 11, e69783 (2025) [DOI: 10.2196/69783]


Please use a persistent id in citations: doi:10.2196/69783

Abstract: Large language models (LLMs) such as ChatGPT (OpenAI) are increasingly discussed as potential tools for patient education in health care. In radiation oncology, where patients are often confronted with complex medical terminology and treatment plans, LLMs may support patient understanding and promote more active participation in care. However, the readability, accuracy, completeness, and overall acceptance of LLM-generated medical content remain underexplored.

This study aims to evaluate the potential of ChatGPT-4 as a supplementary tool for patient education in the context of lung cancer radiotherapy by assessing the readability, content quality, and perceived usefulness of artificial intelligence-generated responses from both clinician and patient perspectives.

A total of 8 frequently asked questions about radiotherapy for lung cancer were developed based on the clinical experience of a team of clinicians specialized in lung cancer treatment at a university hospital. The questions were submitted individually to ChatGPT-4o (version as of July 2024) using the prompt 'I am a lung cancer patient looking for answers to the following questions.' Responses were evaluated using three approaches: (1) a readability analysis applying the Modified Flesch Reading Ease (FRE) formula for German and the 4th Vienna Formula (WSTF); (2) a multicenter expert evaluation by 6 multidisciplinary clinicians (radiation oncologists, medical oncologists, and thoracic surgeons) specialized in lung cancer treatment, using a 5-point Likert scale to assess relevance, correctness, and completeness; and (3) a patient evaluation during the first follow-up appointment after radiotherapy, assessing comprehensibility, accuracy, relevance, trustworthiness, and willingness to use ChatGPT for future medical questions.

Readability analysis classified most responses as 'very difficult to read' (university level) or 'difficult to read' (upper secondary school), likely due to the use of medical language and long sentence structures. Clinician assessments yielded high scores for relevance (mean 4.5, SD 0.52) and correctness (mean 4.3, SD 0.65), whereas completeness received slightly lower ratings (mean 3.9, SD 0.59). A total of 30 patients rated the responses positively for clarity (mean 4.4, SD 0.61) and relevance (mean 4.3, SD 0.64), but lower for trustworthiness (mean 3.8, SD 0.68) and usability (mean 3.7, SD 0.73). No harmful misinformation was identified in the responses.

ChatGPT-4 shows promise as a supplementary tool for patient education in radiation oncology. While patients and clinicians appreciated the clarity and relevance of the information, limitations in completeness, trust, and readability highlight the need for clinician oversight and further optimization of LLM-generated content. Future developments should focus on improving accessibility, integrating real-time readability adaptation, and establishing standardized evaluation frameworks to ensure safe and effective clinical use.
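The readability scores mentioned in the abstract can be reproduced from sentence, word, and syllable counts. Below is a minimal Python sketch, assuming the 'Modified FRE formula for German' refers to Amstad's adaptation (FRE_de = 180 - ASL - 58.5 x ASW) and the '4th Vienna Formula' to the vierte Wiener Sachtextformel (WSTF4 = 0.2744 x MS + 0.2656 x SL - 1.693); the exact formula variants, tooling, and syllable-counting rules used by the authors are not stated in this record, and the syllable heuristic here is only a rough approximation.

import re

def count_syllables_de(word: str) -> int:
    """Crude German syllable estimate: count groups of vowels (incl. umlauts)."""
    groups = re.findall(r"[aeiouyäöü]+", word.lower())
    return max(1, len(groups))

def readability_de(text: str) -> dict:
    """Approximate German FRE (Amstad) and 4th Wiener Sachtextformel scores."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-zÄÖÜäöüß]+", text)
    syllables = [count_syllables_de(w) for w in words]

    asl = len(words) / max(1, len(sentences))              # average sentence length (words)
    asw = sum(syllables) / max(1, len(words))               # average syllables per word
    ms = 100 * sum(s >= 3 for s in syllables) / max(1, len(words))  # % words with 3+ syllables

    fre_de = 180 - asl - 58.5 * asw                          # Amstad's German Flesch Reading Ease
    wstf4 = 0.2744 * ms + 0.2656 * asl - 1.693               # 4th Wiener Sachtextformel
    return {"FRE_de": round(fre_de, 1), "WSTF4": round(wstf4, 1)}

if __name__ == "__main__":
    sample = ("Die Strahlentherapie ist eine lokale Behandlung, bei der "
              "hochenergetische Strahlen Tumorzellen gezielt zerstören.")
    print(readability_de(sample))

On the Amstad scale, higher values indicate easier text (scores below roughly 30 are conventionally read as 'very difficult'/university level), while the WSTF result maps approximately onto German school grades, so higher values again indicate upper-secondary or university-level difficulty; the study's exact cutoffs are not given in this record.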

Keyword(s): ChatGPT ; LLM ; NSCLC ; artificial intelligence ; large language model ; lung cancer ; non-small cell lung cancer ; patient education ; radiation oncology ; radiotherapy


Contributing Institute(s):
  1. DKTK Koordinierungsstelle München (MU01)
Research Program(s):
  1. 899 - ohne Topic (POF4-899)

Appears in the scientific report 2025
Database coverage:
Medline ; Creative Commons Attribution CC BY (No Version) ; DOAJ ; Article Processing Charges ; Clarivate Analytics Master Journal List ; DOAJ Seal ; Emerging Sources Citation Index ; Fees ; IF < 5 ; JCR ; SCOPUS ; Web of Science Core Collection

The record appears in these collections:
Document types > Articles > Journal Article
Public records
Publications database

 Record created 2025-08-14, last modified 2025-08-17


