You’ve heard the stories about the “hallucinations” produced by Generative AI engines. Often they are responses to a prompt that are presented as fact and delivered so authoritatively that they seem correct, even when they are not. Well-known examples include citations pointing to publications that do not exist or accounts of meetings between famous people that never took place. Some hallucinations have cited medical journals that do exist, though the specific articles attributed to them do not. These hallucinations sound plausible until closer analysis reveals that they are not, and determining what is real and what the AI has made up becomes a distinct challenge.
Some hallucinations may arise because the datasets used to train large language models (LLMs) are simply not large enough. If a model were trained on more data, the hallucinations arising from the smaller training set might become less likely, because the statistical likelihood of a relationship between one data point and another would be estimated differently. A larger model can develop a more exacting understanding of relationships that are only suggested in smaller ones.
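That statistical point can be illustrated with a small, self-contained simulation. What follows is a minimal sketch in Python (assuming NumPy is available), using synthetic data rather than anything from a real drug database: two variables that are unrelated by construction can appear strongly related when only a few examples are observed, and that apparent relationship evaporates as the sample grows.

```python
# Toy illustration only: with two genuinely unrelated variables, a small
# sample can suggest a strong relationship purely by chance, while a large
# sample reveals that no relationship exists.
import numpy as np

rng = np.random.default_rng(seed=0)

def apparent_correlation(n_samples: int) -> float:
    """Estimate the correlation between two independent random variables."""
    x = rng.normal(size=n_samples)
    y = rng.normal(size=n_samples)  # generated independently of x
    return float(np.corrcoef(x, y)[0, 1])

# Small samples: the estimated correlations swing widely, and some look "real".
print([round(apparent_correlation(10), 2) for _ in range(5)])

# A much larger sample: the estimate collapses toward zero, the true value.
print(round(apparent_correlation(100_000), 4))
```

The same intuition applies at model scale: the more data behind the model, the less room there is for relationships that exist only by chance.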
For this reason, some drug development companies have found that using Generative AI tools to re-analyze the compounds in their databases to discover where — or where else — these compounds might be useful leads to innovation. To put a finer point on it: “made up” information that is grounded in real findings can help the drug development process.
This isn’t the first time these companies have performed this kind of analysis, even with the support of AI. But past results have been filled with erroneous suggestions for addressing a variety of medical conditions. One reason for these hallucinations may be the size of the dataset used to train the models. A database of 10,000, or even 100,000, molecules might look large to an analyst accustomed to analog analytical methods, but it is tiny compared to the datasets used to train Generative AI.
Pharmaceutical companies have been ingenious about increasing the size of this reference data in order to generate useful AI. They do this by deconstructing each molecule into more expansive categories of component parts — the biochemical characteristics, the fundamental elements, single- and double-bond structures, whether the compounds are aromatic or aliphatic, and more. As soon as one deconstructs a database of molecules into their component parts, the number of data points informing the model grows dramatically. A model trained on millions of core components may produce insights that are far more refined than was previously possible, and what once appeared to be an AI hallucination may in fact prove to be a relevant but previously unrecognized insight.
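To make the idea of deconstruction concrete, here is a hypothetical sketch in Python using the open-source RDKit toolkit. The feature scheme shown (element counts, bond orders, aromatic versus aliphatic atoms) is illustrative only, not the specific decomposition any pharmaceutical company uses.

```python
# Hypothetical sketch of molecular "deconstruction" using the open-source
# RDKit toolkit; the feature set is illustrative, not any company's scheme.
from collections import Counter
from rdkit import Chem

def deconstruct(smiles: str) -> dict:
    """Expand one molecule into a bag of simple component-level data points."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")

    features = Counter()
    for atom in mol.GetAtoms():
        features[f"element:{atom.GetSymbol()}"] += 1
        features["aromatic_atom" if atom.GetIsAromatic() else "aliphatic_atom"] += 1
    for bond in mol.GetBonds():
        # Bond order: 1.0 single, 2.0 double, 1.5 aromatic, etc.
        features[f"bond_order:{bond.GetBondTypeAsDouble()}"] += 1
    return dict(features)

# One row in a compound database (aspirin) becomes many model inputs.
print(deconstruct("CC(=O)OC1=CC=CC=C1C(=O)O"))
```

Applied across a database of even 100,000 compounds, a decomposition along these lines multiplies every row into many component-level data points, which is how a modest molecular library becomes a much larger body of training data.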
Ultimately, no AI engine produces hallucinations because it feels pressured to come up with an answer to a question. Engines produce hallucinations because there is a statistical reason for doing so, rooted in the data on which they have been trained. It may be inadvisable to act immediately on an answer that is plainly a hallucination, but it is equally advisable to dig deeper and discover why the AI is generating the answer it is generating. The machine is telling you that something is there, and it may be worthwhile trying to discover what it is before you discount it entirely as a hallucination.
About The Author: Terri Steinberg, MD, MBA, FACP, FAMIA
As Medecision's Chief Medical and Strategy Officer, Dr. Terri Steinberg is responsible for enhancing Medecision's analytics, clinical informatics and data intelligence capabilities. Dr. Steinberg uses her experience to guide Medecision's implementation of clinical systems for its customers to ensure they achieve optimal workflows and value from its software. As a clinician as well as a software designer, Dr. Steinberg has lectured and consulted extensively on methods to ensure successful technology adoption by physicians and nurses, on the positive impact of technology on safe medication practice, and on the use of technology to drive Population Health Management. Dr. Steinberg was previously the Chief Health Information Officer and Vice President of Population Health Informatics at ChristianaCare, a large multi-entity healthcare organization in Delaware.