Ask the artificial intelligence text generator ChatGPT how it can help in medicine, and it will tell you: “ChatGPT can be a valuable tool in various medical applications,” before launching into a fairly detailed, 10-point explanation of its practical uses in health care.
But rather than taking ChatGPT’s word for it, some researchers at Penn, like Samiran Mukherjee, chief fellow in gastroenterology at the Perelman School of Medicine, are studying it.
“I don’t think artificial intelligence technology is going away,” says Mukherjee. “It behooves us to understand both how medical professionals can use it to support their work and how patients may choose to interact with it. I’m excited by the potential of artificial intelligence, and it could ultimately mean more accurate and efficient treatment for patients. Humanity can use AI as a tool just as it’s used cutting-edge machines and technology for generations.”
Mukherjee’s latest research, published in Gastro Hep Advances, is a good place to start in describing the potential of ChatGPT and other conversational or interactive AI tools; it centered on education. Mukherjee and his coauthors, including Michael L. Kochman, the Wilmott Family Professor in the Division of Gastroenterology, are gastroenterologists by training. They understand the importance of regular colonoscopies and colon cancer screening, and they wanted to explore whether ChatGPT could tell patients if they needed a colonoscopy, taking into account their age, the timing of their most recent colonoscopy, and other health data. When they posed colonoscopy questions to ChatGPT, they found that the software was able to apply medical standards to identify many of the instances in which a colonoscopy is recommended.
“My colleagues and I did not want to test ChatGPT to see if it could replace counseling from clinical experts but rather assess if it could be a useful tool to provide information to patients regarding cancer screening and prevention,” says Mukherjee. “It’s safe to assume that patients are going to ask ChatGPT questions about their health in the same way they ask Google or look things up on sites like WebMD. If we know whether there are any inaccuracies in answers or pitfalls when asking ChatGPT medical questions, we can mitigate that by coaching patients on effectively using AI. Physicians and researchers can also team up with software developers to make design improvements.”
Mukherjee and Kochman say that as ChatGPT learns from itself, such inaccuracies will likely become less significant, and they foresee people asking ChatGPT all sorts of questions about their health and receiving answers based on the latest available data. It’s just not there yet.
“Since this technology is still in its infancy, those who use ChatGPT for medical advice should always consult with their doctor,” says Mukherjee.
As generations age and technology helps us live longer, more people require medical care. In the U.S., access to clinicians is not, on the whole, a major problem, but long wait times for specialty appointments, nationwide nurse shortages, and remote locations make it difficult for some patients to receive timely care. Kochman says AI chatbots might be a solution.
“AI may help patients identify conditions where immediate medical care is called for and conversely may allow patients to try simple home and over-the-counter remedies for less concerning or dangerous conditions,” says Kochman. “All that frees medical resources for true emergencies and urgent conditions.”
There are still things that artificial intelligence can’t do, at least not yet, chief among them critical thinking and a different kind of intelligence: emotional intelligence.
Read more at Penn Medicine News.