Nabla, Paris, France
Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore
Global Health & Digital Innovation Foundation, London, United Kingdom
UCL Global Business School for Health, London, United Kingdom
The integration of artificial intelligence (AI), and specifically large language models (LLMs), into healthcare continues to accelerate, necessitating thoughtful evaluation and oversight to ensure safe, ethical, and effective deployment. This editorial summarizes key perspectives from a recent panel conversation among AI experts on central issues in implementing LLMs for clinical applications. Key topics include: the potential of explainable AI to foster transparency and trust; the challenge of aligning AI with healthcare protocols that vary globally; the importance of evaluation through translational and governance frameworks tailored to healthcare contexts; scepticism about overly expansive use of LLMs as conversational interfaces; and the need to validate LLMs judiciously, in proportion to risk. The discussion highlights explainability, evaluation, and careful deliberation with healthcare professionals as pivotal to realizing the benefits of broader AI adoption in medicine while proactively addressing its risks.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The statements, opinions, and data contained in the journal are solely those of the individual authors and contributors and not of the publisher and the editor(s). The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.