Originally published by our sister publication Pharmacy Practice News

By Marcus A. Banks

InpharmD, which combines artificial intelligence with human curation to answer drug information questions, seeks to overcome some of the more serious limitations of current AI programs, not least a troubling tendency to fabricate clinical references or omit key findings in ways that can compromise patient care.

The need for better AI systems is acute. In 2023, researchers at New York’s Long Island University found that the AI system ChatGPT incorrectly answered 75% of drug information questions researchers asked it. In one instance, ChatGPT erroneously asserted that there is no drug–drug interaction between the COVID-19 antiviral nirmatrelvir+ritonavir (Paxlovid, Pfizer) and the blood pressure–lowering medication verapamil.

That’s why having a human component to these systems is so critical, noted Ashish Advani, PharmD, the co-founder and president of InpharmD. “There is absolutely no room for error here, because patient health and lives are at stake,” he said.

It is also important for these systems to lessen the workload of end users. Hence, InpharmD’s approach includes synthesizing relevant research and then providing pharmacists with actionable advice, rather than pointing people to research studies and expecting them to ferret out the answers themselves. InpharmD uses AI to produce the first draft of these answers, but nothing is published to the InpharmD platform until a human pharmacist employed by InpharmD has reviewed the AI-generated information and corrected any errors.


“This is an iterative process,” Dr. Advani said. “Once we improve the answers generated by the AI model, the model incorporates this correct information into its understanding. The next time someone asks the same question, the AI will produce a response with fewer or no errors.”

InpharmD evaluates both how accurately the AI parses the initial search and how accurate the summaries it produces are. Search parsing is currently accurate about 65% of the time, meaning a human reviewer agrees with the AI's interpretation of the question roughly two times out of three. Summarization fares better: on average, 75% of the words the AI generates in response to a query are retained. The system also learns from the 25% that are removed and replaced by a pharmacist, so its accuracy continues to climb. The goal is for search and summarization accuracy both to reach 99%, Dr. Advani said.

Even then, nothing would ever just be published on InpharmD’s servers without oversight.

“I don’t ever foresee a world in which we totally remove humans from the loop,” Dr. Advani said.

InpharmD’s current clients are hospital pharmacists who care for inpatients. Since InpharmD enables pharmacists in these settings to answer questions faster than they could before, Dr. Advani said, the clinicians have begun to use their freed-up time to provide more consistent follow-up care to patients after they are discharged from the hospital.

Skepticism in the Field

“We haven’t used AI to answer drug information queries so far,” said Faria Munir, PharmD, MS, BCPS, a clinical assistant professor in the Drug Information Group at the University of Illinois at Chicago College of Pharmacy. The group answers questions from hospitals around the country on a fee-for-service basis and employs 13 to 15 drug information specialists.

The workload is manageable for Drug Information Group staff pharmacists, who have expertise in answering what tend to be complex questions that defy easy answers, she noted. Dr. Munir added that at this stage she would prefer to rely entirely on human expertise, rather than on an AI system that might be inaccurate and so would need to be overseen by a human pharmacist anyway.

“This doesn’t mean AI has no uses, and maybe someday we would start to use it once it’s more reliable,” Dr. Munir said, adding that she knows some of the pharmacists behind InpharmD from journal clubs and other professional settings and has no doubt that they prioritize patient care.

Scott D. Nelson, PharmD, MS, an associate professor of biomedical informatics at Vanderbilt University Medical Center, in Nashville, Tenn., agreed that AI still has major limitations when it comes to generating reliable drug information.

For example, a human may know that a question is about a child even if the questioner does not say so; but an AI system may dole out recommendations for adults, missing the context entirely. And there’s always the chance that an AI system will create citations to articles that do not exist—a well-documented phenomenon known as “hallucination”—or provide an incorrect answer to a clinical question, with potentially severe consequences if a pharmacist trusts the advice (Healthcare [Basel] 2023;11[6]:887).

“It’s best to think of AI as augmented intelligence, not artificial intelligence,” Dr. Nelson said. Seen that way, AI can be a boon for getting an introduction to the literature on a drug class or to standard practices for treating a range of clinical conditions.

Dr. Nelson cited another AI benefit: People can use natural language for queries rather than having to learn a clunky syntax. And the systems do learn over time: Pharmacists can always correct an incorrect response generated by AI, and then use the correct response to train the AI system going forward, he noted. This is how AI systems become more accurate.


Drs. Advani and Munir reported no relevant financial disclosures beyond their stated employment. Dr. Nelson reported advisory relationships with Baxter Health and Merative Micromedex.