Abstract
Purpose of Review
To explore the intersection of chatbots and HIV prevention and care. Current applications of chatbots in HIV services, the challenges faced, recent advancements, and future research directions are presented and discussed.
Recent Findings
Chatbots facilitate sensitive discussions about HIV, thereby promoting prevention and care strategies. Trustworthiness and accuracy of information were identified as primary factors influencing user engagement with chatbots. Additionally, the integration into chatbots of AI-driven models that process and generate human-like text presents both breakthroughs and challenges in terms of privacy, bias, resources, and ethical issues.
Summary
Chatbots in HIV prevention and care show potential; however, significant work remains in addressing associated ethical and practical concerns. The integration of large language models into chatbots is a promising future direction for their effective deployment in HIV services. Encouraging future research, collaboration among stakeholders, and bold innovative thinking will be pivotal in harnessing the full potential of chatbot interventions.
Introduction
Chatbots, also known as conversational agents, have gained significant attention in various fields, including education, commerce, entertainment, and healthcare [1]. These systems, which communicate with users using natural language, mimic human behaviour and are often connected to messaging services, web pages, and mobile applications [2]. They can be classified into two types: rule-based and AI-based [3••, 4•]. Rule-based chatbots use a state machine and pre-defined question-and-answer structures to control conversations. AI-based chatbots, on the other hand, utilize AI techniques for natural language understanding (NLU) and natural language generation (NLG), allowing them to understand natural language, maintain different conversation contexts, and generate fluid and coherent responses [2, 5]. Building on this body of knowledge, the present review synthesizes recent developments, advancements, and challenges in applying chatbots and large language models (LLMs) to HIV prevention and care, and offers a perspective on future directions at this rapidly evolving intersection of technology and healthcare.
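The rule-based design described above can be illustrated with a minimal sketch: a state machine whose states each carry a fixed prompt and a set of keyword-matched transitions. All states, prompts, and keywords below are hypothetical examples for illustration, not taken from any cited system.

```python
# Minimal illustrative sketch of a rule-based chatbot: a state machine with
# pre-defined prompts and keyword-matched transitions between states.
# States, prompts, and keywords are hypothetical, not from any cited system.

RULES = {
    "start": {
        "prompt": "Hi! Would you like information about HIV testing or PrEP?",
        "transitions": {"testing": "testing_info", "prep": "prep_info"},
    },
    "testing_info": {
        "prompt": "Self-test kits are available. Would you like to find a clinic?",
        "transitions": {"yes": "clinic_finder", "no": "start"},
    },
    "prep_info": {
        "prompt": "PrEP is a daily pill that helps prevent HIV. A provider can advise whether it suits you.",
        "transitions": {},
    },
    "clinic_finder": {
        "prompt": "Please share your area and I will list nearby clinics.",
        "transitions": {},
    },
}

def respond(state: str, user_text: str) -> tuple[str, str]:
    """Match the user's message against the current state's keywords and
    return (next_state, next_prompt); with no match, repeat the current prompt."""
    text = user_text.lower()
    for keyword, next_state in RULES[state]["transitions"].items():
        if keyword in text:
            return next_state, RULES[next_state]["prompt"]
    return state, RULES[state]["prompt"]  # unrecognized input: re-prompt

state, prompt = respond("start", "Tell me about PrEP please")
```

Even this four-state sketch hints at why pre-defined dialogue flows become unwieldy: every additional topic or anticipated user response requires hand-authored states and transitions, a limitation discussed further below.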
Approach
For this review, Google Scholar, Semantic Scholar, PubMed, and arXiv were carefully searched using keyword combination phrases such as “chatbots and HIV prevention”, “large language models and HIV prevention”, and “AI and HIV Care” to amass studies relevant to the intersection of chatbots and HIV prevention and care. Ideas were also developed through conversations with peers working in the field. The principal criterion for the inclusion of studies was their relevance to the topic, with a primary focus on English literature published within the past 5 years to ensure the recency and pertinence of the findings. The review aimed to provide a qualitative synthesis of recent literature on the topic. Unlike systematic reviews, this review adopted a more interpretive, discursive approach, allowing for the incorporation of a broader range of materials and the elucidation of trends, themes, and emerging insights in the field of study. It provided the opportunity to explore the evolving narrative of chatbots in HIV prevention and care, surfacing the multifaceted ways in which conversational agents have been leveraged to address the challenges inherent in HIV prevention efforts and the provision of care.
HIV Prevention and Care
The use of conversational agents (chatbots) in HIV prevention is still in its infancy. Nonetheless, there has been significant investment in their development and use, including in healthcare. As evident in Table 1, chatbots have been posited as a promising tool to provide personalized health information and offer emotional support in diverse and resource-constrained contexts [6••, 7, 8•]. Notably, a systematic review of chatbot interventions in Africa found that 4 (33%) of the 12 papers meeting inclusion criteria focused on the use of chatbots in HIV prevention and care. Beyond the finding of how few papers exist on the topic in Africa, other insights from the review include infrastructure challenges (mobile phones, data costs, and electricity), the barrier of indigenous language use, a focus on user experience and design, and regulatory, ethical, and data security issues [6••]. A broader review of chatbots in healthcare shows that the chatbot evidence base consists primarily of descriptive, quasi-experimental, and qualitative studies. Most studies are aimed at treatment rather than prevention, with monitoring and health care system support being prevalent. Chatbots were typically delivered by SMS or app and almost always used text rather than audio, video, virtual reality, or other modalities [3••]. A third review, of mHealth tools to promote HIV pre-exposure prophylaxis (PrEP) uptake and adherence, found that chatbots, to date primarily rule-based rather than AI-driven, were among the most common mHealth interventions encouraging HIV prevention, alongside gamification, medication logs, testing location maps, and notification reminders [8•].
Key Populations
Several chatbots, such as the pioneering “Amanda Selfie” in Brazil, have been developed with a focus on key populations. Amanda Selfie, a transgender chatbot used to stimulate PrEP interest among adolescents, can facilitate sensitive discussions on sex, STIs, and PrEP, and even identify individuals at higher risk for HIV [9]. Using the theoretical framework of Davis [10], qualitative work with men who have sex with men (MSM) in Malaysia found that the perceived usefulness of chatbots hinged on whether the information provided by the chatbot was accurate and whether it could provide emotional support. Ease of use was influenced by considerations of cost, convenience of access, and software bugs [11]. Similar results were obtained from focus group discussions with cisgender female sex workers in South Africa, who were found to be open to chatbots and other mHealth tools for HIV care. Additional barriers to accessing and using these tools emerge in low-income groups such as these women, who noted inconsistent phone ownership and threats to privacy arising from shared phone ownership [12]. Finally, the Nolwazi bot, an isiZulu-speaking conversational agent for HIV self-testing support, showed high acceptability among users, particularly men; around 80% of those engaging with the chatbot preferred the experience over talking with a human counsellor, and 77% reported that they felt they were talking with a real person [13, 14]. A key benefit noted across many of these studies is the widely reported perception that chatbots are less “judgmental” than people, making users more likely to share sensitive information with them [7]. Engaging directly with chatbots can also improve engagement in HIV testing and prevention for high-risk groups who may face barriers to receiving care at traditional primary healthcare facilities, by providing access to curated information, increasing anonymity, and reducing stigma [15].
HIV Prevention and Care Workforce
It is not only users who express interest in the application of chatbots to HIV prevention and care. Recent group interviews with HIV research assessment staff, intervention coaches, and community advisory board members uncovered interest in the use of chatbots to improve capacity, consistency, convenience, and quality of services, for example, by automatically conducting check-ins, follow-ups, referral linkages, and scheduling for their clients [16]. The integration of chatbots into a diverse range of health settings allows for personalized and accessible support, including the provision of appropriate resources and treatment recommendations based on patient responses, thereby reducing staff burden. However, the current body of research is not representative of diverse geographies, cultures, and age groups, with most studies conducted in high-income countries and focusing on the use of text-based chatbots [3••, 6••]. This lack of diversity limits the generalizability of the findings.
Current Limitations
The use of LLM or AI chatbots in HIV prevention and care raises safety, ethical, and practical concerns, particularly when it comes to providing medical advice. Rule-based chatbots are much more controlled in terms of content responses, but programming permutations in dialogue flows can become unwieldy when attempting to anticipate all possible user responses and when covering diverse topics and actions. For AI chatbots, privacy rights, confidentiality, the potential to provide biased and fabricated responses (i.e. to “hallucinate”), data protection, patient engagement, costs, and staffing requirements are among the current issues that need to be addressed before chatbots become more useful and widely implemented in the field of HIV prevention [12, 15]. This is even more relevant in the aftermath of the COVID-19 pandemic, which accelerated the adoption of telehealth, mHealth, and chatbots in public health and healthcare [12, 15, 17]. Governance frameworks and principles, such as RESET, a set of 10 AI chatbot healthcare ethics principles (Fig. 1) encompassing the topics noted above, are helpful in assisting careful consideration of the multiple challenges of ethically deploying a healthcare AI chatbot, but such advice remains fragmented and in some cases contradictory [18]. Alongside these guidelines, calls have been made to establish standard comparative criteria to evaluate and validate chatbot deployments, and safeguarding frameworks for user protection in public health domains [7].
Current chatbots that leverage LLMs to provide a more conversational tone require significant computational resources and energy for training, raising concerns about their environmental impact [19]. There are also concerns that the speed at which these models have been adopted has left many open questions about their limitations and about the potential for emergent capabilities that are unknown and misaligned with the development team’s goals. Technical intricacies such as prompt brittleness, the inability to perform simple reverse logic (e.g. a model that can answer “Who is famous person B’s mother?” may be unable to answer “A is the mother of which famous person?”), and outdated knowledge impact the effective utilization of these models and highlight the need for further exploration and guidelines for their use [20]. Moreover, neural models trained on large datasets can replicate and amplify negative, stereotypical, and derogatory associations in the data. Worse, inappropriate responses from conversational AI systems in emergency situations can have severe consequences, including potentially life-threatening outcomes. The integration of LLMs into chatbots is also hampered by current context window limitations, which effectively give chatbots very short memories and make continuity over multi-turn interactions (i.e. more than one conversation session) difficult. This may only be a short-term challenge, as multiple efforts are underway to address it.
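As a concrete illustration of the context-window constraint, one common mitigation is to trim the conversation history to a fixed token budget before each model call, dropping the oldest turns first. The sketch below is a hypothetical example: it uses word count as a crude stand-in for tokens, whereas a real deployment would use the model's own tokenizer, and the message structure is assumed, not taken from any cited system.

```python
# Illustrative sketch of keeping chat history within a fixed context window
# by dropping the oldest turns first while preserving the system message.
# Word count is a crude stand-in for a real tokenizer.

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the most recent messages whose combined approximate token count
    fits within `budget`, always preserving the leading system message."""
    system, turns = messages[0], messages[1:]
    kept, used = [], len(system["content"].split())
    for msg in reversed(turns):  # walk newest to oldest
        cost = len(msg["content"].split())
        if used + cost > budget:
            break  # oldest remaining turns are dropped
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

history = [
    {"role": "system", "content": "You are an HIV prevention counselling assistant."},
    {"role": "user", "content": "What is PrEP?"},
    {"role": "assistant", "content": "PrEP is a daily pill that prevents HIV infection."},
    {"role": "user", "content": "Where can I get tested near me?"},
]
trimmed = trim_history(history, budget=20)
```

The trade-off this exposes is exactly the "short memory" problem noted above: whatever falls outside the budget is simply forgotten, which is why longer context windows and retrieval-based memory are active areas of work.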
From the user’s perspective, trustworthiness and accuracy of information are both important factors in whether people abandon consultations with diagnostic chatbots [7]. This concern is particularly noteworthy for current LLM chatbots, which sometimes “hallucinate” by favouring sentence coherence over factual accuracy. Co-design approaches that incorporate insights from users and health professionals have been proposed to manage some of the complexity inherent in designing chatbots. Despite these concerns, the potential of AI to transform healthcare is undeniable. As regulatory bodies work to implement mechanisms to address AI as a medical device, and as trust, clinical safety, and reliability are prioritized, the future of AI in healthcare looks promising.
Recent Advancements and Future Directions
The advent of artificial intelligence (AI) has brought about significant transformations in various sectors, including HIV research. Sundar Pichai, CEO of Google, posits that AI will have a more profound impact on humanity than fire, electricity, and the internet [21]. The transformative potential of AI, particularly generative models like GPT, in healthcare is increasingly being recognized. LLMs, such as ChatGPT and Claude, have shown a remarkable ability to comprehend and generate human-like text, leveraging medical data and knowledge to transform clinical decision support, patient communication, and data management in healthcare [22]. In medical education, LLMs have also shown promise, aiding learning and problem-solving and demonstrating proficiency in medical knowledge without specialized training [22, 23]. Generative AI language models are also beginning to be applied in clinical decision support (CDS). A recent CDS study analyzed decision support messages generated by ChatGPT (not the newer GPT-4; 36 suggestions) and by humans (29 suggestions) for 7 alerts; of the 20 suggestions ranked highest by an expert panel, 9 were generated by ChatGPT [24]. An HIV prevention chatbot that can aid prevention messaging, encourage self-testing, and support personalized treatment strategies would be transformational in a low-resource setting. Such chatbots would have the potential to reach many individuals, especially those who may face barriers to accessing traditional healthcare services.
Future work should rigorously address the optimization of chatbot design and functionality to ensure maximal utility, safety, and user-friendliness (Table 2). This encompasses not only the aesthetic and interactive aspects but also the underlying LLMs that will certainly underpin the next generation of chatbots. Fine-tuning models to allow for better understanding and response to user inputs, particularly in a medical context, is crucial. Furthermore, the inclusion of features that enhance accessibility, such as voice recognition and multilingual support, can significantly broaden the reach and applicability of these interventions. The advancement of AI-driven chatbots in HIV prevention and care necessitates concerted collaborative efforts between healthcare providers, researchers, developers, and end-users. Through collaborative efforts, a thorough understanding of the needs, preferences, and barriers faced by the target populations can be garnered. Moreover, this collaboration can facilitate the translation of research findings into actionable interventions, thus bridging the gap between theory and practice. The potential of chatbots to transform HIV prevention and care in low-resource settings hinges on the scalability and sustainability of these interventions. Exploring innovative funding models, establishing partnerships with local health departments, and leveraging existing healthcare infrastructures are potential strategies to ensure the long-term viability and scalability of chatbot interventions.
While integration of chatbots into existing health systems and in support of healthcare providers is called for, this moment also demands careful consideration of whether we want the future of HIV care to simply be the current system with chatbots in support of it, or whether the future could and should look radically different as LLMs move towards artificial general intelligence (AGI). Healthcare is a bureaucratic fortress brimming with vested interests and a lucrative revenue model. While regulation and oversight are clearly important to protect people from harm, it may be that these instruments become used by powerful stakeholders to steer away from the disruption of existing hierarchies. Chatbots harnessing LLMs hold the potential to dismantle longstanding barriers to healthcare, ushering in an era of unprecedented accessibility and personalized support, particularly in resource-limited settings. A balanced and careful path forward is imperative, one that navigates ethical and safety concerns while keeping the transformative potential alive. Ongoing dialogue should move beyond the simplistic narrative of chatbots merely complementing human providers and venture into the truly disruptive nature of this innovation, exploring how chatbots could redefine healthcare paradigms, democratize access, and potentially dismantle existing power structures that often act as gatekeepers to quality care. Realizing this future requires bold and innovative thinking alongside pilot testing, implementation science, and large-scale randomized controlled trials (RCTs) that test the full capabilities of HIV chatbot interventions, especially ones that incorporate the technical advances made in the last 18 months.
References
Taj I, Zaman N. Towards industrial revolution 5.0 and explainable artificial intelligence: challenges and opportunities. Int J Comput Digit Syst. 2022;12(1):295–320.
Kusal S, Patil S, Choudrie J, Kotecha K, Mishra S, Abraham A. AI-based conversational agents: a scoping review from technologies to future directions. IEEE Access. 2022;PP:1–1.
Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng YL, et al. Conversational agents in health care: scoping review and conceptual analysis. J Med Internet Res. 2020;22(8):e17158. This scoping review and conceptual analysis is a very useful summary of the field until the beginning of 2020. The limitation is that it predates the arrival of LLMs and the rise of LLMs and more conversational AI.
Adamopoulou E, Moussiades L. Chatbots: history, technology, and applications. Mach Learn Appl. 2020;2:100006. Everything you could ever want to know about chatbots.
Kapočiūtė-Dzikienė J, Balodis K, Skadiņš R. Intent detection problem solving via automatic DNN hyperparameter optimization. Appl Sci. 2020;10(21):7426.
Phiri M, Munoriyarwa A. Health chatbots in Africa: scoping review. J Med Internet Res. 2023;25:e35573. Recent review covering the use of chatbots in Africa with a significant focus on HIV and AIDS.
Wilson L, Marasoiu M. The development and use of chatbots in public health: scoping review. JMIR Hum Factors. 2022;9(4): e35882.
LaBelle M, Strong C, Tseng YC. mHealth strategies to promote uptake and adherence to PrEP: a systematic review. In: Rau PLP, editor. Cross-cultural design applications in health, learning, communication, and creativity. Cham: Springer International Publishing; 2020. p. 99–113. (Lecture Notes in Computer Science). Although not just about chatbots gives a useful overview of the broader field of mHealth approaches to HIV prevention.
Massa P, de Souza Ferraz DA, Magno L, Silva AP, Greco M, Dourado I, et al. A transgender chatbot (Amanda Selfie) to create pre-exposure prophylaxis demand among adolescents in Brazil: assessment of acceptability, functionality, usability, and results. J Med Internet Res. 2023;25:e41881.
Davis F. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;(Sept.):319–39.
Peng ML, Wickersham JA, Altice FL, Shrestha R, Azwa I, Zhou X, et al. Formative evaluation of the acceptance of HIV prevention artificial intelligence chatbots by men who have sex with men in Malaysia: focus group study. JMIR Form Res. 2022;6(10): e42055.
You WX, Comins CA, Jarrett BA, Young K, Guddera V, Phetlhu DR, et al. Facilitators and barriers to incorporating digital technologies into HIV care among cisgender female sex workers living with HIV in South Africa. mHealth. 2020;6(0). [cited 2023 Sep 29]. Available from: https://mhealth.amegroups.org/article/view/34297.
van Heerden A, Ntinga X, Vilakazi K. The potential of conversational agents to provide rapid HIV counseling and testing services. In: 2017 International Conference on the Frontiers and Advances in Data Science (FADS). 2017. p. 80–5. [cited 2023 Sep 29]. Available from: https://ieeexplore.ieee.org/document/8253198.
Ntinga X, Musiello F, Keter AK, Barnabas R, van Heerden A. The feasibility and acceptability of an mHealth conversational agent designed to support HIV self-testing in South Africa: cross-sectional study. J Med Internet Res. 2022;24(12): e39816.
Garett R, Young SD. Potential application of conversational agents in HIV testing uptake among high-risk populations. J Public Health Oxf Engl. 2022;45(1):189–92.
Comulada WS, Rezai R, Sumstine S, Flores DD, Kerin T, Ocasio MA, et al. A necessary conversation to develop chatbots for HIV studies: qualitative findings from research staff, community advisory board members, and study participants. AIDS Care. 2023;0(0):1–9.
Young SD, Schneider J. Clinical care, research, and telehealth services in the era of social distancing to mitigate COVID-19. AIDS Behav. 2020;24(7):2000–2.
Sundareswaran V, Sarkar A. Chatbots RESET a framework for governing responsible use of conversational AI in healthcare. New Delhi: World Economic Forum; 2020. Available from: http://203.90.70.117/pds/.
Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021. p. 610–23. Available from: https://dl.acm.org/doi/10.1145/3442188.3445922.
Kaddour J, Harris J, Mozes M, Bradley H, Raileanu R, McHardy R. Challenges and applications of large language models. 2023. arXiv preprint: arXiv:2307.10169.
Rajan A. Amol Rajan Interviews Sundar Pichai. [cited 2023 Sep 30]. Available from: https://www.sky.com/watch/title/programme/69c4ce8b-b540-492d-b7c3-8866bc0b6915/amol-rajan-interviews-sundar-pichai-69c4ce8b-b540-492d-b7c3-8866bc0b6915/episodes/season-0/episode-1.
Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, et al. Large language models encode clinical knowledge. Nature. 2023;620(7972):172–80.
Safranek CW, Sidamon-Eristoff AE, Gilson A, Chartash D. The role of large language models in medical education: applications and implications. JMIR Med Educ. 2023;14(9): e50945.
Liu S, Wright AP, Patterson BL, Wanderer JP, Turer RW, Nelson SD, et al. Assessing the value of ChatGPT for clinical decision support optimization. MedRxiv Prepr Serv Health Sci. 2023;2023.02.21.23286254.
Funding
Open access funding provided by Human Sciences Research Council. Comulada’s time was partially supported by the Center for HIV Identification, Prevention, and Treatment Services funded by the National Institute of Mental Health (CHIPTS; P30MH058107).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of Interest
The authors declare no competing interests. GPT-4-0613 was used to brainstorm, provide bibliographic summaries of certain papers, and critique the final drafts in an interactive manner.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
van Heerden, A., Bosman, S., Swendeman, D. et al. Chatbots for HIV Prevention and Care: a Narrative Review. Curr HIV/AIDS Rep 20, 481–486 (2023). https://doi.org/10.1007/s11904-023-00681-x