Artificial intelligence (AI) integration in clinical practice has intensified in recent years, evolving from systems that analyse and interpret existing data to generative AI systems capable of creating new information and offering new possibilities for patient communication. However, the public’s perception of AI-generated health information remains largely unexplored. This study aimed to assess public trust in AI-generated health information, identify the factors that influence that trust, and evaluate the accuracy of AI-produced content. A mixed-methods approach was employed, involving a survey distributed via social media to individuals who had recently accessed health information. Results revealed that while the public was aware of AI systems’ capabilities, their trust in AI-generated content was moderate. Key concerns included the accuracy of the information, potential biases in AI algorithms, and ethical issues related to privacy. Results showed that transparency, endorsement by healthcare professionals, and clear evidence of accuracy are critical to building trust in AI-generated health information. Addressing these concerns is essential for the successful integration of AI into patient communication and for establishing AI as a reliable and ethical tool in healthcare.
Medical Writing. 2025;34(2):70–73. https://doi.org/10.56012/itdm3913