Artificial Intelligence in Science Communication

When prompted to describe some of its key functions, ChatGPT (GPT-3.5) listed, amongst others, “information retrieval”, “content generation” and “idea generation”. It is therefore easy to understand why AI tools such as ChatGPT are attractive within any communications environment. A world where a simple prompt could return a distilled list of summarised articles, a fully written scientific abstract, or key strategic inputs has the potential to transform, streamline and accelerate scientific communication. However, with the speed at which AI tools are improving, concerns have arisen around ethics, privacy and liability, and there are questions over whether the pharmaceutical sector can keep pace with these changes and efficiently harness the power these tools offer.
 
How is AI currently used in scientific communications?
 
AI tools such as ChatGPT are already being used in scientific communications at a basic level. They can generate outlines and rough text, perform basic literature searches, suggest ideas and concepts, and act as a surrogate user test of readability and design. It is important that communicators acknowledge that AI still has limits and verify every output with a degree of scepticism. Nevertheless, with the speed at which AI tools are improving and growing, the possibilities for their use seem endless.
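As a concrete illustration of the “rough text” use case, the sketch below shows how a first-draft outline might be requested programmatically through the OpenAI Python library. It is a minimal example under stated assumptions (the model name and prompts are illustrative choices, not a recommended workflow), and any output would still require the human editing and verification described above.

  # A minimal sketch (assumptions: the openai Python package >= 1.0 is
  # installed and the OPENAI_API_KEY environment variable is set).
  from openai import OpenAI

  client = OpenAI()  # picks up the API key from the environment

  # Request a rough outline only; a medical writer edits and fact-checks
  # everything before it goes anywhere near a deliverable.
  response = client.chat.completions.create(
      model="gpt-3.5-turbo",  # illustrative choice of model
      messages=[
          {"role": "system",
           "content": "You draft rough outlines for medical writers."},
          {"role": "user",
           "content": "Outline a 500-word lay summary of a phase III "
                      "hypertension trial. Headings and bullets only."},
      ],
      temperature=0.2,  # keep the draft conservative
  )

  print(response.choices[0].message.content)  # rough text for human review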
 
How could AI tools be utilised within scientific communications?
 
When a group of medical writers was asked how they would like to use AI in the workplace in future, they listed various areas and tasks where generative AI could supplement their work. These included:

  • Generation of accurate reference lists and reference packs e.g. an AI tool with the ability to read content generated by medical writers and then substantiate it with accurate peer-reviewed literature.
  • Generation of medical diagrams e.g. an AI tool to generate copyright-free medical diagrams that are editable, to produce a cohesive style across the final product.
  • Draft content writing e.g. an AI tool with the ability to read clinical study reports and generate a first-draft outline that aligns with CONSORT standards (i.e. standards for reporting clinical trial data). The role of the medical writer would be to edit and verify statements within the report and align it with the client's key strategic objectives.
  • Hypothesis and insight generation e.g. an AI tool to answer key business questions and suggest strategic approaches and ideas.

What are the limitations of AI tools within scientific communications?
 
Whilst the creation of these tools may sound achievable in the current landscape, there are many hurdles to overcome before the role of AI within medical communications can be solidified. For example, whilst ChatGPT can summarise papers and perform literature searches, it is not consistently correct and can produce convincing answers that are wrong or opaque. The freely accessible GPT-3.5 model is trained only on data up to September 2021, and will therefore overlook the most recent data by default. This creates clear problems within communications, especially in volatile and fast-moving landscapes such as drug development.
 
Beyond potential inaccuracy, privacy and copyright questions are yet to be addressed. For example, how can we ensure that confidential and sensitive information (e.g. patient information from clinical trials) is adequately protected before feeding it into AI platforms? Without comprehensive worldwide regulation, the use of AI within the pharmaceutical sector will be limited. Furthermore, if you use an AI tool to generate scientific content, should it be listed as an author? The World Association of Medical Editors states that chatbots, such as ChatGPT, cannot be listed as authors, nor can they hold copyright. However, it does advise that authors should be transparent when chatbots are used and provide information about how they were used. How will this be monitored and enforced? Does this create a paradox of then requiring the creation of AI tools to detect the use of AI tools?
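Returning to the privacy question above, one partial mitigation sometimes discussed is de-identifying text before it ever reaches an external AI platform. The sketch below is a simplified illustration only (the patterns are assumptions; unstructured identifiers such as names slip straight through, and validated anonymisation tooling with proper governance would be needed in practice):

  import re

  # Illustrative redaction pass over structured identifiers. This is a
  # sketch, not a validated de-identification method: free-text names
  # and indirect identifiers are not caught by simple patterns.
  PATTERNS = {
      "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
      "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
      "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
  }

  def redact(text: str) -> str:
      """Replace each matched identifier with a labelled placeholder."""
      for label, pattern in PATTERNS.items():
          text = pattern.sub(f"[{label}]", text)
      return text

  note = "Patient seen 04/07/2023, NHS 943 476 5919, contact j.s@nhs.net"
  print(redact(note))
  # -> "Patient seen [DATE], NHS [NHS_NUMBER], contact [EMAIL]"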
 
To conclude, whilst the wide utilisation of AI tools may seem superficially plausible, closer inspection reveals various issues limiting their integration into science communication. Although we may be concerned that one day AI will steal our jobs, it is clear that human input is still vital to maintaining integrity, accuracy and confidentiality. It is perhaps more accurate to say that it is not AI that will take over our jobs, but rather the people most comfortable with AI tools who will be at the forefront of science communication.
 
“First drafts can be generated, knowledge can be summarised, but human judgement and review will be required for final trusted outputs… only the human mind can imagine, dream, find nuances and solve the unseen” – Vas Narasimhan



Published: 13 Dec 2023
By Hannah Roughley

About the author

Hannah Roughley

Hannah gained a first-class degree in Pharmacology from the University of Leeds, including a 12-month placement in the pharmacology department at the Novartis Institute for Tropical Diseases. She then went on to complete a Master’s degree focused on cancer biology at UCL. Since leaving academia, Hannah has worked as a Medical Writer at a London-based healthcare communications agency. She has a continued interest in scientific communication, particularly the dissemination of complex health concepts to lay audiences.
