AI is not a replacement for your doctor, therapist, or any healthcare provider.
Written by Gillian Lao
Content warning: This blog post contains topics related to depression and eating disorders that may be triggering to some readers.
Artificial intelligence (AI) has been making its way into various industries, including social media and healthcare. AI technology has the potential to improve patient care as a tool for health professionals, but it is not a replacement for those professionals, especially when it comes to giving medical advice. Advice from AI chatbots can harm people with mental and physical health concerns and should be taken with a grain of salt.
Chatbots have been appearing on many online platforms within the past year, including the popular app Snapchat. Snapchat's newest feature, My AI, is a chatbot designed to answer questions and offer advice on whatever you ask it about. The chatbot appears in the user's friend list and chat section of the app, so a user might come to see it as a friend to talk to. But imagine your child in this situation. What if your child tells the chatbot that they are feeling depressed? What kind of response will they get? Is it a good thing for youth to go to a robot for mental health advice?
Snapchat's own website acknowledges this: "Because My AI is an evolving feature, you should always independently check answers provided by My AI before relying on any advice, and you should not share confidential or sensitive information."
Other companies have also implemented their own AI chatbots, some even giving physical health advice to customers. The National Eating Disorders Association (NEDA) replaced its human staff and volunteers with a chatbot called Tessa, whose job was to give advice to people with eating disorders. The chatbot had to be disabled, however, after it gave harmful advice to two users.
This is another example of how things can go wrong online. Think of how you or someone you know would feel after receiving harmful advice from what you thought was a trusted, professional resource. Credibility becomes an issue when companies start incorporating AI chatbots into their staff, and it's even scarier when an entire staff is replaced by a single robot. Human interaction is difficult to replace completely, even as our society heads in a direction where the old movie trope of robots and humans "coexisting" is becoming a reality.
Whenever you are looking at information online, whether you are reading the news, scrolling through social media, or talking to chatbots, you should always check other sources. While AI technology has the potential to improve patient care, it is still developing and should not be relied upon as your only source of information. Double-check sources and seek advice from trusted, reliable medical professionals. According to a study published in the journal JAMA Network Open, when users asked ChatGPT serious public health questions, the chatbot provided critical emergency resources only 22% of the time.
AI chatbots are not a replacement for medical professionals when it comes to giving advice, making diagnoses, and more. AI technology may not be brand new, but it is still new and changing for many of us, showing up in schools, at work, and now in healthcare settings. One thing to note: AI chatbots don't have PhDs, so if you do decide to ask one questions related to your health, take everything it says with a grain of salt.
Instead of going to AI with your mental and physical health questions and concerns, we recommend going to trusted and reliable medical professionals, especially if you are seeking immediate medical help.
Sources for this blog post:
Wall Street Journal: https://www.wsj.com/articles/eating-disorder-chatbot-ai-2aecb179
CNN: https://www.cnn.com/2023/06/07/health/chatgpt-health-crisis-responses-wellness