Review

Exploring the Role of ChatGPT in Oncology: Providing Information and Support for Cancer Patients

1 Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
2 Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
3 Radiology Department, IRCCS Istituto Nazionale dei Tumori, Via Giacomo Venezian, 1, 20133 Milan, Italy
* Author to whom correspondence should be addressed.
BioMedInformatics 2024, 4(2), 877-888; https://doi.org/10.3390/biomedinformatics4020049
Submission received: 15 January 2024 / Revised: 19 March 2024 / Accepted: 21 March 2024 / Published: 25 March 2024

Abstract
Introduction: Oncological patients face numerous challenges throughout their cancer journey while navigating complex medical information. The advent of AI-based conversational models like ChatGPT (OpenAI, San Francisco) represents an innovation in oncological patient management. Methods: We conducted a comprehensive review of the literature on the use of ChatGPT in providing tailored information and support to patients with various types of cancer, including head and neck, liver, prostate, breast, lung, pancreas, colon, and cervical cancer. Results and Discussion: Our findings indicate that, in most instances, ChatGPT responses were accurate, dependable, and aligned with the expertise of oncology professionals, especially for certain subtypes of cancers like head and neck and prostate cancers. Furthermore, the system demonstrated a remarkable ability to comprehend patients’ emotional responses and offer proactive solutions and advice. Nevertheless, these models have also shown notable limitations and cannot serve as a substitute for the role of a physician under any circumstances. Conclusions: Conversational models like ChatGPT can significantly enhance the overall well-being and empowerment of oncological patients. Both patients and healthcare providers must become well-versed in the advantages and limitations of these emerging technologies.

Graphical Abstract

1. Introduction

Oncological patients face numerous challenges throughout their cancer journey, ranging from emotional distress and treatment-related side effects to navigating complex medical information.
Gone are the days when patients depended only on their doctors for medical advice. With a simple internet search, patients may educate themselves on symptoms, diseases, and treatment options, becoming more informed and proactive in decisions regarding their health [1].
The Internet has profoundly transformed how patients navigate medical information, reshaping the dynamics of patient empowerment and communication with the doctor [2]. However, while access to medical knowledge can be useful under certain circumstances, not all online sources are reliable, and patients may encounter incorrect or misleading information, resulting in confusion or incorrect self-diagnosis [3]. For this reason, misinformation or harmful information about cancer continues to be a significant concern in the online communication environment [4].
Furthermore, the ever-increasing health budget limits and heightened workloads among healthcare professionals have exacerbated the decline in doctor–patient relationships, adversely affecting healthcare accessibility and prognosis [5].
Providing cancer patients with additional tools for a better understanding of their diagnosis and treatment options and adequate emotional support is critical to ensure informed decision-making and a good outcome.
In this scenario, the advent of large language models (LLMs) like ChatGPT (OpenAI, San Francisco) and others may represent a cutting-edge innovation in oncological patient management, meeting patients’ individualized needs and concerns [6].
The use of artificial intelligence in healthcare is not new, having already demonstrated surprising results in the high-performance analysis of biomedical data through machine learning and deep learning models [7]. However, despite the great prospects, some issues related to reliability, privacy, and patient confidentiality still need to be addressed when integrating these tools into healthcare routines [6,8,9,10]. This narrative review explores the potential advantages, limitations, and challenges associated with conversational models in supporting cancer patients. Our discussion includes aspects such as the accessibility of the models and the reliability of the information provided, as well as their role in patient empowerment and informed decision-making. We focus on the widely recognized large language model ChatGPT (developed by OpenAI, San Francisco) due to the substantial body of literature available on this topic [6].

Large Language Models

Large language models (LLMs) are sophisticated artificial intelligence systems designed to generate human-like text. They are trained on vast amounts of data and can understand and produce natural language across various tasks, such as translation, summarization, and conversation. Users provide a list of keywords or inquiries, and LLMs generate content about those topics. The user interface generally follows a conversational structure, which cycles between user questions or inputs and system responses or outputs. This design considers previous interactions to emulate human speech effectively [6].
In November 2022, OpenAI launched ChatGPT, a chatbot built on its GPT series of models (e.g., GPT-3.5 and GPT-4), which generate human-like text for use in chatbot conversations using natural language processing (NLP) technology. Other notable LLMs include Google’s PaLM, Bard, and Gemini; Meta’s open-source Llama and Llama-2 models; and Anthropic’s Claude models.
LLMs are AI-driven, deep neural network-based models with a remarkable ability to achieve general-purpose human language generation and understanding [6,7,8,9]. LLMs acquire these skills by learning statistical relationships from text documents in computationally intensive self-supervised and semi-supervised training processes. Generative pre-trained transformer (GPT) language models are built on a transformer architecture, which enables them to process large amounts of text data while producing coherent text outputs by learning the relationships between input and output sequences [10].
The GPT language model has been trained on large datasets of text sourced from websites, books, and online publications. After receiving human feedback and corrections, ChatGPT was further trained to produce more logical and contextually relevant answers [11]; this procedure is known as reinforcement learning from human feedback or reinforcement learning from human preference (RLHF/RLHP). Users can type any prompt, and ChatGPT will answer based on the patterns learned from its training data.
Previous research demonstrated that it could produce high-quality and coherent text outputs, react to user questions with unexpectedly intelligent-sounding messages, and perform exceptionally well in question-answer tasks [12]. In the medical area, GPT-4, OpenAI’s more capable successor to the model underlying the original ChatGPT, recently surpassed the passing score on all steps of the US medical licensing exam [13].

2. Methods

An extensive literature search was performed on PubMed to find relevant publications on the current role and future potential of ChatGPT in cancer patients. We used the following search string: “(cancer OR oncology OR oncological) AND (patients) AND ChatGPT”. Furthermore, we carefully examined the references of the included articles to identify further studies worthy of mention.
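For readers who script their literature searches, a Boolean query like the one above can be assembled programmatically before being submitted to PubMed (e.g., through NCBI’s E-utilities, which requires network access and is not shown here). A minimal sketch, in which `build_query` is a hypothetical helper of our own, not part of any PubMed API:

```python
# Hypothetical helper: assembles a PubMed-style Boolean query.
# Lists are OR-ed inside parentheses; bare terms are AND-ed as-is.
def build_query(*groups):
    parts = []
    for group in groups:
        if isinstance(group, (list, tuple)):
            parts.append("(" + " OR ".join(group) + ")")
        else:
            parts.append(group)  # single bare term, no parentheses
    return " AND ".join(parts)

query = build_query(
    ["cancer", "oncology", "oncological"],
    ["patients"],
    "ChatGPT",
)
print(query)
# (cancer OR oncology OR oncological) AND (patients) AND ChatGPT
```

The same helper generalizes to additional synonym groups (e.g., adding model names) without changing the query structure.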
Our results are presented through a narrative summary and organized as follows: potential benefits, applications in different types of cancer, limitations, and challenges.

3. Results and Discussion

The potential advantages and limitations of ChatGPT and similar LLMs are presented in Table 1.

3.1. Potential Advantages of ChatGPT and Other LLMs

3.1.1. Accessibility and Inclusivity

LLMs represent a potential game changer for individuals with limited access to healthcare resources, particularly in low-income countries. Although widespread access to medical information has contributed to raising the average level of health literacy and well-being expectations, health services remain very unequal and insufficient in many parts of the world [14,15].
As of the time this article was written, the basic version of ChatGPT was free of charge for the public. Given that financial difficulties have been linked to poor health outcomes [16], large language models can contribute to limiting the effects of socioeconomic inequalities in cancer treatment by giving everyone fast access to reliable medical information regardless of their location or socioeconomic background [17,18,19,20].
ChatGPT can support underprivileged communities in many ways: first, ChatGPT communicates in multiple languages, breaking down language barriers that often hinder access to healthcare information. It can remotely deliver essential health information in areas with limited access to healthcare facilities and guide essential self-care practices, including managing chronic conditions and first aid measures. In developing countries, it can help individuals understand healthcare processes, such as insurance enrollment, appointment scheduling, and medication management, improving overall access to care. Overall, ChatGPT has the potential to democratize access to healthcare information and support, ultimately improving health outcomes for underprivileged communities [17,18,19,20].
ChatGPT is designed not to generate offensive or harmful responses, one of the safeguards put in place by the developers to prevent misuse. ChatGPT can, therefore, offer a non-judgmental platform for seeking information on sensitive topics as well, such as sexual health, mental health, and substance abuse, reducing the stigma and cultural barriers that often deter people from seeking help.

3.1.2. Information Provision and Informed Decision-Making

ChatGPT has been trained on a large amount of data, including medical literature [7]. Even though, as discussed below, the input (training) data constitute a limiting factor for deep learning models’ accuracy and trustworthiness, these tools represent a valuable supplement for medical information retrieval and clinical decision-making, both for patients and healthcare practitioners. ChatGPT’s conversational style yields more comprehensible responses than primary official sources like guidelines and scientific articles, especially for individuals without expertise in the medical field. Additionally, it streamlines the information search process by presenting only relevant content tailored to the user’s query, thus enhancing efficiency and saving time.
LLMs are already active in different areas of clinical practice and can generate differential diagnosis lists for typical clinical scenarios with good diagnostic accuracy [12]. In oncology, by integrating this knowledge into coherent responses, ChatGPT can answer questions related to different types of cancer, including treatment options, potential side effects, and beneficial lifestyle modifications. These models can support patients asking for information about additional examinations, diagnosis, treatment plans, and prognosis, enabling them to make more informed decisions. In breast cancer imaging, for example, it performs reasonably well in recommending the next imaging steps for patients requiring breast cancer screening or an assessment of breast pain [18]. Whether ChatGPT and other LLMs can adequately guide patients, who are non-experts in medicine, along the correct diagnostic path remains a contentious issue in ongoing discussions.
Additionally, ChatGPT can assist in clarifying medical terminology and lexicon, ensuring that patients better comprehend the information provided in medical documents and radiological reports [19].

3.1.3. Emotional Support and Patient Empowerment

A cancer diagnosis can often lead to emotional distress, anxiety, and depression for patients and their caregivers [20,21]. Furthermore, because of time constraints, clinician–patient communication may frequently be neglected, with dramatic consequences for the clinical history and life of cancer patients.
There are good reasons to think that ChatGPT could help bridge this gap.
Research into ChatGPT’s ability to provide responses attuned to human emotions, such as kindness and empathy, has produced impressive results [22]. Generative AI may give the impression of understanding human emotions, generating responses and assistance suited to its users. In a recent study by Elyoseph et al. [23], ChatGPT outperformed humans in assessing emotional awareness. It demonstrated the ability to improve intrapersonal and interpersonal understanding, increasing patients’ awareness of their own and their family members’ emotions. This may provide patients with comfort and help them feel less “alone” [24].
Using natural language processing capabilities, ChatGPT can engage in compassionate conversations, acknowledging patients’ emotions and providing emotional support [25]. It can also suggest coping strategies, stress management techniques, and even provide referrals to mental health professionals, when necessary.
It helps to build a framework around each individual question posed by patients and caregivers, thus increasing provider efficiency and allowing patients to become more aware of their care. As a result, by providing patients with an additional source of information, this paradigm has the potential to boost patient participation and compliance, promoting patient-centered treatment and effective shared decision-making [11].

3.1.4. Supportive Care

Beyond medical treatment, oncological patients often require support in other aspects of their lives, such as managing relationships and making lifestyle changes to preserve their health.
By offering suggestions for healthy lifestyle modifications, including exercise routines and dietary recommendations, ChatGPT can empower patients to take an active role in their overall well-being [17].
This model has showcased significant potential in aiding home care for orthopaedic patients, suggesting that this tool can play a pivotal role in improving public health policy by providing consistent and trustworthy guidance, especially in settings where access to health services is limited [26].

3.1.5. For Healthcare Practitioners and Medical Students

Tools like ChatGPT could be helpful not only for patients but also for healthcare practitioners [27]. Conversational models can generate user-friendly explanations of medical jargon, treatment alternatives, and potential adverse effects, thereby improving patient literacy and decision-making. Of course, ChatGPT is better suited to text activities like generating summaries, treatment plans, and follow-up recommendations, which doctors may subsequently check. Furthermore, it can facilitate contact with patients from varied linguistic backgrounds by offering real-time language translation services during consultations.
Another potentially beneficial use of tools like ChatGPT is training medical students and residents by simulating patient scenarios, answering medical queries, and providing learning resources [28,29].
Advanced AI-based models could save time for oncologists by handling routine administrative tasks like scheduling appointments, sending reminders, and managing documentation. They can help with patient case documentation by creating summaries of consultations, treatment plans, and follow-up recommendations, streamlining the process of keeping complete and accurate patient records. ChatGPT can generate clinic letters with good overall correctness and humanness ratings, at a reading level roughly similar to current real-world human-generated letters, and it has been effectively used to create patient clinic letters [30].
However, much caution should likely be exercised when considering specific tasks such as information retrieval about the latest research, treatment guidelines, clinical trials related to particular types of cancer, drug interactions, side effects, and dosage information for various cancer medications. Recent studies have shown a lack of consistency when dealing with providing a threshold for decision-making or distinguishing which guidelines to follow in a specific setting [11].

3.2. Appraisal of Literature on Different Types of Cancer

So far, the literature evaluating ChatGPT’s use in clinical practice is still limited [8], and only a few studies have evaluated its potential for education and advice along the oncological clinical pathway [11].

3.2.1. Head and Neck

Kuşcu et al. explored the accuracy and reliability of ChatGPT’s responses to questions related to head and neck cancer [31]. A dataset of questions was selected from commonly asked queries from reputable institutions and societies, including the American Head & Neck Society (AHNS), the National Cancer Institute, and the Medline Plus Medical Encyclopedia. These questions underwent an extensive screening process by three authors to determine their suitability for inclusion in the study, focusing primarily on patient-oriented questions to evaluate the effectiveness of the AI model in providing useful information for patients. The study revealed that the majority of ChatGPT responses were accurate, with 86.4% receiving a “complete/correct” rating on the rating scale. Significantly, none of the responses were rated “completely inaccurate/irrelevant”. Furthermore, the model showed high reproducibility across all topics and performed consistently without significant differences between them.
The authors also underlined a substantial limitation of ChatGPT: the version evaluated had a knowledge cutoff of September 2021, potentially impacting response precision due to the exclusion of data from the previous two years. Moreover, the reliability of ChatGPT depends on the quality of its training data, and the undisclosed nature of its sources raises questions about whether the training was based on the most reputable and accurate medical literature. Finally, the latest version of ChatGPT, which demonstrated better performance than the publicly available version, is accessible only through a paid subscription, potentially restricting public access to more accurate knowledge [31].
A critical opinion regarding the current potential of ChatGPT in answering patient questions comes from the study by Wei et al., who compared the performance of ChatGPT and the Google browser in addressing common questions related to head and neck cancers [32]. A collection of 49 questions about head and neck cancers was chosen from a series of “People Also Ask” (PAA) question prompts using SearchResponse.io. The study found that, on average, Google sources outperformed ChatGPT responses. Both sources were assessed to be of similar readability difficulty, most likely at the college level. While ChatGPT responses were comparable in complexity to those from Google, they were rated as lower quality due to a drop in reliability and accuracy when answering questions.
According to Wei’s assessment, particularly for questions about head and neck cancer, Google sources emerged as the primary option for patient educational resources [32].

3.2.2. Prostate Cancer

Zhu et al. developed a questionnaire aligned with patient education guidelines and their clinical expertise, covering screening, prevention, treatment options, and postoperative complications related to prostate cancer [17].
The questions covered a spectrum from basic to advanced knowledge about prostate cancer. Their investigation involved five large language models: ChatGPT (Free and Plus versions), YouChat, NeevaAI, Perplexity, and Chatsonic. Assessments revealed that LLMs excelled in addressing most questions. For instance, they effectively clarified the significance of different PSA levels and emphasized that PSA alone is not a conclusive diagnostic test and that further examinations are recommended. LLMs also demonstrated effectiveness in detailed comparisons of treatment options, presenting pros and cons, and offering informative references to aid patients in making well-informed decisions. Most importantly, in most cases, the models consistently emphasized consulting a doctor.
The accuracy of responses from most LLMs exceeded 90%, with exceptions noted for NeevaAI and Chatsonic. Basic information questions with definite answers generally achieved high accuracy, but accuracy dipped for queries tied to specific scenarios or requiring summarization and analysis. ChatGPT exhibited the highest accuracy rate among the LLMs assessed, with the free version slightly outperforming the paid version.
Zhu et al. raised a question in their study regarding whether online LLMs would surpass ChatGPT. Notably, AI models relying on search engines like NeevaAI often presented literature content without effective summarization and explanation, resulting in poor readability. This observation suggested that model training might be more crucial than real-time Internet connectivity [17].

3.2.3. Hepatocarcinoma

Individuals with cirrhosis and hepatocellular carcinoma (HCC), as well as their caregivers, often have unmet needs and insufficient knowledge regarding the management and prevention of complications associated with the disease. It should not be disregarded that a portion of these patients have a troublesome history behind them and lack a sufficient socioeconomic support network. Previous research has demonstrated inadequate health literacy among cirrhosis and HCC patients and the favorable impacts of focused education [33].
An interesting experience comes from the work of Yeo et al., who evaluated ChatGPT’s performance in answering the most frequently asked questions regarding the management and care of patients with cirrhosis and HCC. Conversational model responses were independently scored by two transplant hepatologists and a third reviewer [11].
The study by Yeo et al. found that ChatGPT provided comprehensive or correct but inadequate answers about cirrhosis in approximately three-quarters of the responses analyzed, with better results in categories such as “basic knowledge”, “treatment”, “lifestyle”, and “other”. No answer related to cirrhosis was classified as completely incorrect. Regarding HCC, the model excelled in providing detailed information on the knowledge base and potential side effects of various HCC treatments, as well as the scientific evidence behind lifestyle interventions. However, there were areas where the model did not respond correctly or provided outdated information, especially in diagnosis, where most information was classified as a mix of correct and incorrect or outdated data. For example, while ChatGPT correctly emphasized using abdominal ultrasound as a primary screening tool, it neglected to mention MRI and computed tomography scans for HCC surveillance in patients with ascites. However, ChatGPT accurately identified cirrhosis as an indication for HCC surveillance [11].
Overall, the results were deemed satisfactory, even though only 47.3% of cirrhosis answers and 41.1% of HCC answers were classified as comprehensive, and the system had significant shortcomings in delivering answers about oncological diagnosis. Furthermore, the system could not specify limitations on treatment choice or treatment duration, most likely because it cannot access clinical information regarding local procedures and recommendations. This confirms the potential significance of ChatGPT and related models in providing universal access to basic medical knowledge, while simultaneously emphasizing the importance of medical consultation during the most essential stages of the diagnostic process.
Yeo et al. also evaluated ChatGPT’s responses to questions about coping with psychological stress following an HCC diagnosis. The model acknowledged the patient’s probable emotional response to the diagnosis and provided clear and actionable starting points for individuals newly diagnosed with HCC. It offered motivational responses, encouraging proactive steps in managing the diagnosis and treatment strategies [11].

3.2.4. Breast Cancer

Over the last two decades, there has been an increase in scientific research and public interest in the two most serious problems linked with breast implants. Significant progress has been made in understanding the rare T-cell lymphoma associated with textured implants.
Liu et al. investigated the suitability of ChatGPT for educating patients on breast implant-associated anaplastic large cell lymphoma (BIA-ALCL) and breast implant illness (BII). They compared the quality of responses and references offered by ChatGPT to the Google Bard service. The data demonstrated that ChatGPT outperformed Google in providing high-quality responses to frequently asked queries about BIA-ALCL and BII [34].

3.2.5. Lung Cancer

Rahsepar et al. studied the accuracy of responses provided by ChatGPT-3.5, Google Bard, Bing, and Google search engines to non-expert questions about lung cancer prevention, screening, and vocabulary in radiology reports [35]. Out of 120 questions, ChatGPT-3.5 answered 70.8% correctly and 17.5% incorrectly. Google Bard did not respond to 23 queries, and of the 97 questions it did, 62 were correct, 11 had some errors, and 24 were incorrect. Out of 120 questions, Bing gave 61.7% correct, 10.8% mostly correct, and 27.5% incorrect answers. The Google search engine answered 120 questions with 55% correct, 22.5% mostly correct, and 22.5% incorrect.
The authors concluded that ChatGPT-3.5 was more likely to give correct or partially correct responses than Google Bard.
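As a sanity check, the proportions reported above can be recomputed from the raw counts given in the text; the sketch below uses only those counts and shows how the choice of denominator (questions Google Bard actually answered versus all 120 questions asked) changes its apparent accuracy:

```python
# Recompute the accuracy proportions reported by Rahsepar et al. [35]
# from the raw counts stated in the text (120 questions in total).
def pct(count, total):
    """Percentage rounded to one decimal place."""
    return round(100 * count / total, 1)

TOTAL = 120

# Google Bard declined 23 questions; of the 97 it answered, 62 were correct.
bard_answered = TOTAL - 23            # 97
bard_correct = 62
print(pct(bard_correct, bard_answered))  # 63.9 (accuracy over answered questions)
print(pct(bard_correct, TOTAL))          # 51.7 (accuracy over all 120 questions)

# ChatGPT-3.5: 70.8% correct of 120 corresponds to about 85 questions.
chatgpt_correct = round(0.708 * TOTAL)
print(chatgpt_correct)                   # 85
```

The roughly 12-point gap between Bard’s two figures illustrates why unanswered questions must be counted consistently when comparing models.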

3.2.6. Colon Cancer

Regarding colon cancer, ChatGPT was twice asked 38 questions on prevention, diagnosis, and management, and three experts rated the appropriateness of the answers. Twenty-seven of ChatGPT’s answers were rated “appropriate” by all three experts; overall, at least two of the three experts rated 86.8% of the answers as appropriate [36]. Moreover, the ChatGPT responses were largely concordant with the recommendations of the American Society of Colon and Rectal Surgeons.

3.2.7. Pancreatic Cancer

Another study investigated the responses provided by ChatGPT to 30 questions about pancreatic cancer and its pre-surgical, surgical, and post-surgical phases [37]. The response quality was then assessed by 20 surgical oncology experts and rated as ‘poor’, ‘fair’, ‘good’, ‘very good’, or ‘excellent’. For most responses (n = 24/30, 80%), the most frequently assigned grade was ‘very good’ or ‘excellent’; in total, 60% of the experts considered ChatGPT a reliable source of information, and only 10% thought that its answers could not be compared to those of skilled surgeons. Additionally, 90% of experts believed that ChatGPT will become the go-to source for online patient information, either completely replacing traditional search engines or at least co-existing with them.

3.2.8. Cervical Cancer

In a study by Hermann et al., when ChatGPT was challenged with questions concerning cervical cancer prevention, management, survivorship, and quality of life, its answers were rated as correct and comprehensive only in 34/64 (53.1%) questions, with the worst performance in the treatment category [38].

3.2.9. Radiotherapy

Although the authors did not use ChatGPT, the study by Chow et al. provides an instructive example of the efficiency of comparable conversational models. Their research focused on developing an AI-driven instructional chatbot for interactive learning in radiotherapy, using the IBM Watson Assistant platform [39].
The major purpose of the chatbot was to facilitate the communication of radiotherapy knowledge to people of varied comprehension levels. The chatbot was designed to be user-friendly and to deliver simple explanations in response to user questions regarding radiation. According to their results, most physicians rated the RT Bot’s material positively, with 95% of users believing the information to be sufficiently complete.

4. Limitations and Perspectives

Healthcare professionals must be aware of the limitations of LLMs to ensure responsible and safe use.
Although ChatGPT is free and can benefit underprivileged communities who have difficulty accessing healthcare institutions, it is important to address constraints that remain in many parts of the world, such as limited Internet connection and low digital literacy.
Conversational models can be essential tools for physicians in providing general information and context, but they should not be relied on for medical advice. ChatGPT does not offer references (or, if it does, they are not necessarily correct) [40]. Furthermore, it is limited to information available up to its knowledge cutoff date. It does not have real-time updates, so it might not be aware of the latest medical breakthroughs, treatments, guidelines, or changes to regulations and laws. Since the model is trained on a diverse range of Internet texts, which may include biased or outdated information, it could produce biased responses or recommendations that do not reflect the most current and evidence-based medical practices. Different sources could reach different conclusions, and uncritical reliance on the model overlooks the current limitations in data accuracy, the evolving nature of medical knowledge, and the need for expert oversight.
From the patient’s perspective, one of the potentially most harmful outcomes of the inappropriate use of ChatGPT is its ability to provide confidently stated yet incorrect answers [41]; it is also susceptible to what are termed “hallucinations”, wherein information is fabricated rather than grounded in facts [42]. On the open web, the average user can often discern reliable sources, such as those affiliated with reputable healthcare institutions or scientific organizations. Conversely, identifying erroneous information presented by ChatGPT can pose greater challenges due to its formal and plausible language, coupled with the inability to trace its sources. Future research could investigate how models like ChatGPT may inadvertently mislead not only individuals lacking medical training but also doctors who are not experts in the field in question.
General-purpose LLMs might not guarantee the accuracy and precision required for medical inquiries, which could lead to incorrect advice or information [43]. ChatGPT does not have access to personal health information about individuals. LLMs are not able to consider an individual’s complete medical history, conduct physical examinations, or order diagnostic tests, which are essential aspects of providing advice for accurate and personalized medicine [9,44]. Any attempt to provide personalized medical advice would, therefore, be speculative and lead to inaccurate or potentially harmful recommendations.
Even though the quick availability of information helps to reduce anxiety, using LLM conversations without expert evaluation increases the risk of inaccuracy. For example, underestimating a patient’s condition could negatively influence patient care, as erroneous reporting of results or misinterpretation of treatment guidelines can affect patients’ morbidity and mortality. Patients may develop a sense of comfort and trust with ChatGPT over time, contributing to enhanced emotional well-being. However, this sense of comfort should not lead to an underestimation of the clinical state, causing the patient to make poor decisions. There is a real risk of oversimplifying complex medical situations, leading patients to believe the tool is a substitute for competent medical advice. Such a perception could undermine the crucial doctor–patient relationship founded on trust, expertise, and personalized care.
Therefore, while ChatGPT can support patient education, healthcare providers need to guide patients in using this tool as a complement to, rather than a substitute for, medical consultation. LLMs should be used carefully under the supervision of a qualified professional, such as an oncologist or psycho-oncologist, to prevent patients from forming incorrect beliefs about their illness.
Finally, providing medical advice involves legal and ethical considerations, and responses from a language model like ChatGPT may not comply with medical regulations and standards or may not be appropriate to the patient’s cultural background.
In conclusion, AI in healthcare must be strictly regulated and overseen to reduce these risks [6,30]. Further research is needed to compare the performance of different AI systems and to evaluate the usefulness of AI-generated responses for cancer patients in real-world clinical settings. Seeking advice from experienced healthcare professionals who can assess individual clinical histories, conduct physical examinations, and interpret diagnostic tests remains critical for accurate and safe medical care. Furthermore, it is vital to assess the quality and style of the inputs delivered to chatbots across different settings, languages, and resource capacities. Implementing such a significant technological advancement requires caution and proactive risk management to ensure patient safety and quality of care.

5. Conclusions

The emergence of AI-driven conversational technology, exemplified by ChatGPT, has created new opportunities to support cancer patients throughout their journey. LLMs can significantly improve patients’ well-being and empowerment by offering accurate information, guidance in treatment decisions, and emotional support. Evidence shows that these models can satisfactorily answer many questions about the symptoms, pathophysiology, treatment options, and prognosis of various types of cancer. However, these models have limitations, the main concern being their potential to produce inaccurate or unreliable information in a plausible form, especially when dealing with complex medical conditions or nuanced treatment options. Additionally, ChatGPT may not interpret context accurately or understand the subtle nuances of patient questions, leading to responses that are not fully applicable or helpful. Provided its limitations are recognized, integrating ChatGPT into the healthcare ecosystem promises to deliver personalized, accessible, and empathetic support to cancer patients.

Author Contributions

Conceptualization, M.C. (Michaela Cellina) and M.C. (Maurizio Cè); methodology, V.C. and A.B.; investigation, M.C. (Maurizio Cè), M.C. (Michaela Cellina) and G.O.; resources, V.C., A.B., P.F.F. and G.I.; writing—original draft preparation, M.C. (Maurizio Cè), M.C. (Michaela Cellina), V.C., A.B., P.F.F. and G.I.; writing—review and editing, G.O., M.C. (Maurizio Cè) and G.I.; supervision, P.F.F.; project administration, M.C. (Maurizio Cè) and M.C. (Michaela Cellina). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bujnowska-Fedak, M.M.; Waligóra, J.; Mastalerz-Migas, A. The Internet as a Source of Health Information and Services. In Advancements and Innovations in Health Sciences; Pokorski, M., Ed.; Advances in Experimental Medicine and Biology; Springer International Publishing: Cham, Switzerland, 2019; Volume 1211, pp. 1–16.
  2. Johnson, S.B.; King, A.J.; Warner, E.L.; Aneja, S.; Kann, B.H.; Bylund, C.L. Using ChatGPT to Evaluate Cancer Myths and Misconceptions: Artificial Intelligence and Cancer Information. JNCI Cancer Spectr. 2023, 7, pkad015.
  3. Yeung, A.W.K.; Tosevska, A.; Klager, E.; Eibensteiner, F.; Tsagkaris, C.; Parvanov, E.D.; Nawaz, F.A.; Völkl-Kernstock, S.; Schaden, E.; Kletecka-Pulker, M.; et al. Medical and Health-Related Misinformation on Social Media: Bibliometric Study of the Scientific Literature. J. Med. Internet Res. 2022, 24, e28152.
  4. Cancer Misinformation and Harmful Information on Facebook and Other Social Media: A Brief Report. Available online: https://pubmed.ncbi.nlm.nih.gov/34291289/ (accessed on 21 February 2024).
  5. Schäfer, W.L.A.; Van Den Berg, M.J.; Groenewegen, P.P. The Association between the Workload of General Practitioners and Patient Experiences with Care: Results of a Cross-Sectional Study in 33 Countries. Hum. Resour. Health 2020, 18, 76.
  6. Dave, T.; Athaluri, S.A.; Singh, S. ChatGPT in Medicine: An Overview of Its Applications, Advantages, Limitations, Future Prospects, and Ethical Considerations. Front. Artif. Intell. 2023, 6, 1169595.
  7. Joshi, G.; Jain, A.; Araveeti, S.R.; Adhikari, S.; Garg, H.; Bhandari, M. FDA-Approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: An Updated Landscape. Electronics 2024, 13, 498.
  8. Li, Y.; Gao, W.; Luan, Z.; Zhou, Z.; Li, J. The Impact of Chat Generative Pre-Trained Transformer (ChatGPT) on Oncology: Application, Expectations, and Future Prospects. Cureus 2023, 15, e48670.
  9. Wang, C.; Liu, S.; Yang, H.; Guo, J.; Wu, Y.; Liu, J. Ethical Considerations of Using ChatGPT in Health Care. J. Med. Internet Res. 2023, 25, e48009.
  10. Chatterjee, J.; Dethlefs, N. This New Conversational AI Model Can Be Your Friend, Philosopher, and Guide … and Even Your Worst Enemy. Patterns 2023, 4, 100676.
  11. Hirosawa, T.; Harada, Y.; Yokose, M.; Sakamoto, T.; Kawamura, R.; Shimizu, T. Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study. Int. J. Environ. Res. Public Health 2023, 20, 3378.
  12. Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J.; et al. Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models. PLoS Digit. Health 2023, 2, e0000198.
  13. DeWalt, D.A.; Berkman, N.D.; Sheridan, S.; Lohr, K.N.; Pignone, M.P. Literacy and Health Outcomes: A Systematic Review of the Literature. J. Gen. Intern. Med. 2004, 19, 1228–1239.
  14. Budhathoki, S.S.; Pokharel, P.K.; Good, S.; Limbu, S.; Bhattachan, M.; Osborne, R.H. The Potential of Health Literacy to Address the Health Related UN Sustainable Development Goal 3 (SDG3) in Nepal: A Rapid Review. BMC Health Serv. Res. 2017, 17, 237.
  15. Perrone, F.; Jommi, C.; Di Maio, M.; Gimigliano, A.; Gridelli, C.; Pignata, S.; Ciardiello, F.; Nuzzo, F.; De Matteis, A.; Del Mastro, L.; et al. The Association of Financial Difficulties with Clinical Outcomes in Cancer Patients: Secondary Analysis of 16 Academic Prospective Clinical Trials Conducted in Italy. Ann. Oncol. 2016, 27, 2224–2229.
  16. Zhu, L.; Mou, W.; Chen, R. Can the ChatGPT and Other Large Language Models with Internet-Connected Database Solve the Questions and Concerns of Patient with Prostate Cancer and Help Democratize Medical Knowledge? J. Transl. Med. 2023, 21, 269.
  17. Rao, A.; Kim, J.; Kamineni, M.; Pang, M.; Lie, W.; Succi, M.D. Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making. MedRxiv 2023, preprint.
  18. Campbell, D.J.; Estephan, L.E. ChatGPT for Patient Education: An Evolving Investigation. J. Clin. Sleep Med. 2023, 19, 2135–2136.
  19. Ryan, H.; Schofield, P.; Cockburn, J.; Butow, P.; Tattersall, M.; Turner, J.; Girgis, A.; Bandaranayake, D.; Bowman, D. How to Recognize and Manage Psychological Distress in Cancer Patients. Eur. J. Cancer Care 2005, 14, 7–15.
  20. Dekker, J.; Graves, K.D.; Badger, T.A.; Diefenbach, M.A. Management of Distress in Patients with Cancer—Are We Doing the Right Thing? Ann. Behav. Med. 2020, 54, 978–984.
  21. Gordijn, B.; Have, H.T. ChatGPT: Evolution or Revolution? Med. Health Care Philos. 2023, 26, 1–2.
  22. Elyoseph, Z.; Hadar-Shoval, D.; Asraf, K.; Lvovsky, M. ChatGPT Outperforms Humans in Emotional Awareness Evaluations. Front. Psychol. 2023, 14, 1199058.
  23. Dolan, N.C.; Ferreira, M.R.; Davis, T.C.; Fitzgibbon, M.L.; Rademaker, A.; Liu, D.; Schmitt, B.P.; Gorby, N.; Wolf, M.; Bennett, C.L. Colorectal Cancer Screening Knowledge, Attitudes, and Beliefs Among Veterans: Does Literacy Make a Difference? J. Clin. Oncol. 2004, 22, 2617–2622.
  24. Zheng, Y.; Wu, Y.; Feng, B.; Wang, L.; Kang, K.; Zhao, A. Enhancing Diabetes Self-Management and Education: A Critical Analysis of ChatGPT’s Role. Ann. Biomed. Eng. 2023, 52, 741–744.
  25. Yeo, Y.H.; Samaan, J.S.; Ng, W.H.; Ting, P.-S.; Trivedi, H.; Vipani, A.; Ayoub, W.; Yang, J.D.; Liran, O.; Spiegel, B.; et al. Assessing the Performance of ChatGPT in Answering Questions Regarding Cirrhosis and Hepatocellular Carcinoma. Clin. Mol. Hepatol. 2023, 29, 721–732.
  26. Yapar, D.; Demir Avcı, Y.; Tokur Sonuvar, E.; Faruk Eğerci, Ö.; Yapar, A. ChatGPT’s Potential to Support Home Care for Patients in the Early Period after Orthopedic Interventions and Enhance Public Health. Jt. Dis. Relat. Surg. 2024, 35, 169–176.
  27. Borkowski, A.A. Applications of ChatGPT and Large Language Models in Medicine and Health Care: Benefits and Pitfalls. Fed. Pract. 2023, 40, 170.
  28. Tsang, R. Practical Applications of ChatGPT in Undergraduate Medical Education. J. Med. Educ. Curric. Dev. 2023, 10, 238212052311784.
  29. Khan, R.A.; Jawaid, M.; Khan, A.R.; Sajjad, M. ChatGPT—Reshaping Medical Education and Clinical Management. Pak. J. Med. Sci. 2023, 39.
  30. Kitamura, F.C. ChatGPT Is Shaping the Future of Medical Writing But Still Requires Human Judgment. Radiology 2023, 307, e230171.
  31. Kuşcu, O.; Pamuk, A.E.; Sütay Süslü, N.; Hosal, S. Is ChatGPT Accurate and Reliable in Answering Questions Regarding Head and Neck Cancer? Front. Oncol. 2023, 13, 1256459.
  32. Wei, K.; Fritz, C.; Rajasekaran, K. Answering Head and Neck Cancer Questions: An Assessment of ChatGPT Responses. Am. J. Otolaryngol. 2024, 45, 104085.
  33. Shaw, J.; Patidar, K.R.; Reuter, B.; Hajezifar, N.; Dharel, N.; Wade, J.B.; Bajaj, J.S. Focused Education Increases Hepatocellular Cancer Screening in Patients with Cirrhosis Regardless of Functional Health Literacy. Dig. Dis. Sci. 2021, 66, 2603–2609.
  34. Liu, H.Y.; Alessandri Bonetti, M.; De Lorenzi, F.; Gimbel, M.L.; Nguyen, V.T.; Egro, F.M. Consulting the Digital Doctor: Google Versus ChatGPT as Sources of Information on Breast Implant-Associated Anaplastic Large Cell Lymphoma and Breast Implant Illness. Aesthetic Plast. Surg. 2023, 48, 590–607.
  35. Rahsepar, A.A.; Tavakoli, N.; Kim, G.H.J.; Hassani, C.; Abtin, F.; Bedayat, A. How AI Responds to Common Lung Cancer Questions: ChatGPT versus Google Bard. Radiology 2023, 307, e230922.
  36. Emile, S.H.; Horesh, N.; Freund, M.; Pellino, G.; Oliveira, L.; Wignakumar, A.; Wexner, S.D. How Appropriate Are Answers of Online Chat-Based Artificial Intelligence (ChatGPT) to Common Questions on Colon Cancer? Surgery 2023, 174, 1273–1275.
  37. Moazzam, Z.; Cloyd, J.; Lima, H.A.; Pawlik, T.M. Quality of ChatGPT Responses to Questions Related to Pancreatic Cancer and Its Surgical Care. Ann. Surg. Oncol. 2023, 30, 6284–6286.
  38. Hermann, C.E.; Patel, J.M.; Boyd, L.; Growdon, W.B.; Aviki, E.; Stasenko, M. Let’s Chat about Cervical Cancer: Assessing the Accuracy of ChatGPT Responses to Cervical Cancer Questions. Gynecol. Oncol. 2023, 179, 164–168.
  39. Chow, J.C.L.; Wong, V.; Sanders, L.; Li, K. Developing an AI-Assisted Educational Chatbot for Radiotherapy Using the IBM Watson Assistant Platform. Healthcare 2023, 11, 2417.
  40. Stokel-Walker, C. AI Bot ChatGPT Writes Smart Essays—Should Professors Worry? Nature, 9 December 2022; d41586-022-04397-7.
  41. Hopkins, A.M.; Logan, J.M.; Kichenadasse, G.; Sorich, M.J. Artificial Intelligence Chatbots Will Revolutionize How Cancer Patients Access Information: ChatGPT Represents a Paradigm-Shift. JNCI Cancer Spectr. 2023, 7, pkad010.
  42. Nedbal, C.; Naik, N.; Castellani, D.; Gauhar, V.; Geraghty, R.; Somani, B.K. ChatGPT in Urology Practice: Revolutionizing Efficiency and Patient Care with Generative Artificial Intelligence. Curr. Opin. Urol. 2024, 34, 98–104.
  43. Dahmen, J.; Kayaalp, M.E.; Ollivier, M.; Pareek, A.; Hirschmann, M.T.; Karlsson, J.; Winkler, P.W. Artificial Intelligence Bot ChatGPT in Medical Research: The Potential Game Changer as a Double-Edged Sword. Knee Surg. Sports Traumatol. Arthrosc. 2023, 31, 1187–1189.
  44. Whiles, B.B.; Bird, V.G.; Canales, B.K.; DiBianco, J.M.; Terry, R.S. Caution! AI Bot Has Entered the Patient Chat: ChatGPT Has Limitations in Providing Accurate Urologic Healthcare Advice. Urology 2023, 180, 278–284.
Table 1. Advantages and limitations of ChatGPT and similar LLMs for patients and doctors.
| Advantages | Limitations |
| --- | --- |
| Accessibility and inclusiveness (remote access to essential health information for disadvantaged communities, inability to provide harmful or offensive responses, and elimination of stigma surrounding sensitive topics) | Limited Internet access and low digital literacy; some languages are not available. |
| Timely and accurate access to medical information (good for general purposes and basic medical knowledge) | Limited consideration of the patient’s medical history; these models work only on the inputs provided and cannot ask questions (moderate accuracy for complex cases). |
| Patient-friendly explanations of medical terms, treatment options, and potential side effects to improve patient understanding and informed decision-making | Oversimplification; patients may consider the tool a substitute for medical consultation, replacing the doctor–patient relationship. |
| Emotional support and patient empowerment | Underestimation of disease severity. |
| Ethical and legal support | May not comply with local medical regulations and standards or suit the patient’s cultural background. |
| Handling routine repetitive tasks, such as writing medical reports | Reduced supervision if clinicians routinely rely on these tools. |
| Retrieval of medical literature | Does not provide references (or, when it does, they are not always real); limited to information available before its knowledge cutoff date, with no real-time updates; may include biased or outdated information; limited consideration of the specific clinical setting. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cè, M.; Chiarpenello, V.; Bubba, A.; Felisaz, P.F.; Oliva, G.; Irmici, G.; Cellina, M. Exploring the Role of ChatGPT in Oncology: Providing Information and Support for Cancer Patients. BioMedInformatics 2024, 4, 877-888. https://doi.org/10.3390/biomedinformatics4020049

