ChatGPT and Health Care: Could the AI Chatbot Change the Patient Experience?


ChatGPT, an AI chatbot released by OpenAI in December 2022, has gained popularity for its ability to provide quick, clear answers to a wide range of questions. Its uses span industries including education, real estate, content creation and even health care.

While the chatbot has the potential to improve some aspects of the patient experience in healthcare, experts have warned of its limitations and potential risks. They emphasize that AI should never be used as a substitute for a physician’s care.

For years, people have used search engines to look up medical information online. ChatGPT takes this a step further by letting users hold what feels like an interactive conversation with a seemingly all-knowing source of medical information. While the chatbot's convenience and speed can be attractive, its answers are not always accurate, and users should still seek professional medical advice when needed.

“ChatGPT is far more powerful than Google and certainly gives more compelling results, whether [those results are] right or wrong,” Dr. Justin Norden, a digital health and AI expert who is an adjunct professor at Stanford University in California, told Fox News Digital in an interview. 

ChatGPT offers a different experience for patients seeking health information. Unlike traditional search engines, which return links that patients must filter through themselves, ChatGPT gives patients direct answers to their questions. Those answers, however, are drawn from internet text that can contain misinformation, so it is crucial to have a doctor vet any health information the chatbot provides.

Another limitation is that ChatGPT's training data only extends through September 2021, so it may not reflect the latest medical research and advancements. Patients should therefore double-check any information it provides with their doctor or another trusted medical source to ensure they are receiving the most up-to-date and accurate information possible.

Dr. Daniel Khashabi, a computer science professor at Johns Hopkins in Baltimore, Maryland, and an expert in natural language processing systems, is concerned that as people get more accustomed to relying on conversational chatbots, they’ll be exposed to a growing amount of inaccurate information.

“There’s plenty of evidence that these models perpetuate false information that they have seen in their training, regardless of where it comes from,” he told Fox News Digital in an interview. 

“I think this is a big concern in the public health sphere, as people are making life-altering decisions about things like medications and surgical procedures based on this feedback,” Khashabi added. 

“I think this could create a collective danger for our society.”

It might ‘remove’ some ‘non-clinical burden’

ChatGPT-based systems could change how patients interact with health care providers by letting them schedule appointments and refill prescriptions without lengthy phone calls or waiting on hold. That could improve the overall patient experience and make health care more accessible to people who struggle with traditional channels. Such systems, however, should always be used alongside proper medical care and advice from trained professionals.

“I think these types of administrative tasks are well-suited to these tools, to help remove some of the non-clinical burden from the health care system,” Norden said.

“If the patient asks something and the chatbot hasn’t seen that condition or a particular way of phrasing it, it could fall apart, and that’s not good customer service,” he said. 

“There should be a very careful deployment of these systems to make sure they’re reliable.”

Khashabi also believes there should be a fallback mechanism so that if a chatbot realizes it is about to fail, it immediately transitions to a human instead of continuing to respond.

“These chatbots tend to ‘hallucinate’ — when they don’t know something, they continue to make things up,” he warned.
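The fallback Khashabi describes can be sketched as a simple routing rule: if the model's confidence in an answer is low, hand the conversation to a human instead of letting the bot improvise. Everything below, including the `ask_model` stub and its confidence scores, is an illustrative assumption rather than any real chatbot API.

```python
# A minimal sketch of the fallback pattern described above: route the
# conversation to a human whenever the model's confidence in its own
# answer is low. The ask_model() stub and its scores are hypothetical.

HANDOFF_THRESHOLD = 0.75

def ask_model(question: str) -> tuple[str, float]:
    """Stand-in for a chatbot call returning (answer, confidence).

    A real system might derive confidence from token log-probabilities
    or a separate verifier model; here the scores are hard-coded so the
    routing logic can be demonstrated.
    """
    if "medication" in question.lower():
        return "I'm not certain about this.", 0.40
    return "Done - your request has been scheduled.", 0.95

def respond(question: str) -> str:
    answer, confidence = ask_model(question)
    if confidence < HANDOFF_THRESHOLD:
        # Fallback: escalate instead of letting the bot make things up.
        return "Connecting you to a staff member who can help."
    return answer

print(respond("Can I refill my prescription?"))
print(respond("Should I double my medication dose?"))
```

The key design choice is that the escalation check lives outside the model: even a chatbot that "hallucinates" confidently can be caught if the confidence estimate comes from an independent signal.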

It might share info about a medication’s uses

“While ChatGPT cannot and should not be providing medical advice, it can be used to help explain complicated medical concepts in simple terms,” Norden said.

Patients use these tools to learn more about their own conditions, he added. That includes getting information about the medications they are taking or considering taking.

Patients can use the chatbot, for instance, to learn about a medication’s intended uses, side effects, drug interactions and proper storage.

When asked if a patient should take a certain medication, the chatbot answered that it was not qualified to make medical recommendations.

Instead, it said people should contact a licensed health care provider.
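One way an app builder might enforce that boundary on their side is to pin a guardrail instruction ahead of every patient question. This is only a sketch; the prompt wording and the `build_messages` helper are assumptions for illustration, not part of ChatGPT or any real product.

```python
# Hypothetical sketch: wrap each patient question in a chat-style
# message list whose system prompt limits the model to factual
# medication information and defers recommendations to a clinician.

SYSTEM_PROMPT = (
    "Explain a medication's intended uses, side effects, drug "
    "interactions and proper storage in plain language. Never recommend "
    "taking, stopping or changing a medication; refer such questions to "
    "a licensed health care provider."
)

def build_messages(question: str) -> list[dict]:
    """Assemble the message list sent to a chat model (illustrative)."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

messages = build_messages("What are common side effects of ibuprofen?")
for message in messages:
    print(message["role"])
```

A prompt like this is a mitigation, not a guarantee: models can still ignore instructions, which is why the article's experts insist a clinician vet the output.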

It might have details on mental health conditions

ChatGPT has clear limitations when it comes to mental health support. Despite its ability to provide answers and information quickly, it lacks the empathy and nuanced judgment of a human therapist, and mental health experts caution against using it as a replacement for therapy.

Nevertheless, with the shortage of mental health providers and long wait times for appointments, some people may feel tempted to turn to AI for interim support. While ChatGPT should not replace the help of a mental health professional, it could potentially provide some helpful information and resources to those who need them.

“With the shortage of providers amid a mental health crisis, especially among young adults, there is an incredible need,” said Norden of Stanford University. “But on the other hand, these tools are not tested or proven.”

He added, “We don’t know exactly how they’re going to interact, and we’ve already started to see some cases of people interacting with these chatbots for long periods of time and getting weird results that we can’t explain.”

OpenAI ‘disallows’ ChatGPT use for medical guidance

OpenAI, the company that created ChatGPT, warns in its usage policies that the AI chatbot should not be used for medical instruction.

Specifically, the company’s policy said ChatGPT should not be used for “telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition.”

It also stated that OpenAI’s models “are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.”

Additionally, it said that “OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.”

OpenAI recommends that health care providers who build consumer-facing applications on ChatGPT give users a disclaimer about its limitations. As with any new technology, ChatGPT's role in health care will continue to evolve: some experts see exciting potential, while others urge caution and careful weighing of the risks. As AI use in health care grows, the challenge will be incorporating these tools while keeping patient safety and quality of care the top priorities.

As Dr. Tinglong Dai, a Johns Hopkins professor and renowned expert in health care analytics, told Fox News Digital, “The benefits will almost certainly outweigh the risks if the medical community is actively involved in the development effort.”


Erric Ravi
Erric Ravi is an entrepreneur, speaker and the founder of Storify News and Gurgaon Times of India. Born and raised in Gurgaon, India, he developed an early interest in technology and the internet. After completing a Bachelor's degree in Information Technology, he began his career as an SEO specialist and built a reputation as a skilled and knowledgeable expert in the field.

