
ChatGPT:
That Message From Your Doctor? It May Have Been Drafted by A.I.
As the volume of patient-doctor messaging surges, many physicians in the U.S. are overwhelmed by the flood of messages they receive each day through platforms like MyChart. This has led to a significant shift: the integration of artificial intelligence (A.I.) to draft responses to patient queries. The A.I., specifically Epic’s “In Basket Art” tool, generates replies based on a patient’s previous messages and medical records, helping physicians manage the workload. However, this trend raises ethical and practical concerns, particularly around transparency, accuracy, and the role of A.I. in clinical communication.
⚕️ Rise of A.I. in Patient Communication
Doctors in the U.S. are using artificial intelligence tools to help draft responses to patient messages in an effort to ease an overwhelming workload. Every day, patients use platforms like MyChart to send their doctors detailed messages describing symptoms, pain, or other medical concerns. Epic’s A.I. tool has been widely adopted, with around 15,000 doctors and assistants across more than 150 health systems now using it. It drafts replies that doctors can then review, edit, and approve.
The use of A.I. for patient communication is part of a broader movement in healthcare toward automating administrative tasks. Until now, A.I. has mostly been used to streamline processes like summarizing appointment notes or handling insurance claims. Epic’s tool marks a new shift: A.I. is moving into the patient-doctor communication space itself, generating responses, and at times medical advice, based on patient data from electronic medical records.
🏥 Epic’s In Basket Art Tool
The A.I. technology in question, In Basket Art, developed by Epic, uses GPT-4, the same technology behind ChatGPT, to draft replies to patient messages. It pulls information from the patient’s electronic health record (EHR) to create contextually relevant responses. Doctors don’t have to start from scratch; instead, they edit the drafts the A.I. provides to ensure medical accuracy before sending them off.
The system doesn’t just produce generic boilerplate. It is tailored to respond in a specific doctor’s writing style, aiming to preserve the personal touch patients expect from their healthcare providers. The tool lets physicians act more as editors than writers, which in theory frees up time for other patient interactions.
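The article doesn’t describe Epic’s implementation in any technical detail, but the workflow it outlines is a familiar pattern: pull context from the record, ask GPT-4 for a draft in the doctor’s voice, then hand the draft to the physician to review, edit, and approve. The sketch below is a minimal illustration of that pattern using the OpenAI Python SDK; the record fields, prompt wording, and the `draft_reply` helper are assumptions for illustration, not Epic’s code.

```python
# Illustrative sketch only -- not Epic's In Basket Art implementation.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_reply(patient_message: str, record_summary: str, style_note: str) -> str:
    """Return a draft reply for the physician to review, edit, and approve before sending."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft replies to patient portal messages on behalf of a physician. "
                    f"Match this writing style: {style_note}. "
                    "Use only the facts in the record summary; if information is missing, "
                    "say so instead of guessing."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Record summary:\n{record_summary}\n\n"
                    f"Patient message:\n{patient_message}"
                ),
            },
        ],
    )
    # The draft is only a starting point; the physician remains the safety check.
    return response.choices[0].message.content
```

As the article stresses, the edit-and-approve step is the safety boundary: nothing in this pattern sends a draft to a patient without a physician’s review.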
Despite this, the tool has its limitations. For example, early studies have shown that while In Basket Art reduces cognitive burden and mitigates physician burnout to some extent, it doesn’t necessarily save significant time. The drafts it produces aren’t always accurate, and doctors need to remain vigilant to prevent mistakes from slipping through.
⚠️ Ethical and Safety Concerns
Many patients receiving these messages have no idea that A.I. helped draft them. Health systems are divided over whether to disclose the use of A.I. in patient communication. Some, like U.C. San Diego Health, include a note at the bottom of messages informing patients that the response was generated with the help of A.I. and edited by a doctor. Others, including Stanford Health Care, U.W. Health, and N.Y.U. Langone Health, have opted not to disclose A.I. involvement, out of concern that disclosure could erode patients’ trust in the messages, or that a visible disclaimer might tempt physicians to send drafts without vetting them properly.
A major concern among experts is the potential for “automation bias,” a phenomenon where humans, including doctors, are more likely to trust automated systems’ suggestions without critically analyzing them. This bias could lead to a reduced quality of care if doctors let their guard down and trust A.I.-generated responses too readily.
In practice, some errors have already surfaced. A case at U.N.C. Health highlighted this issue when an A.I.-generated draft incorrectly assured a patient that she had received a hepatitis B vaccine, even providing false dates. The system did not have access to her vaccination records and, instead of signaling this gap, it fabricated the information.
📊 How Reliable Are A.I. Drafts?
Several studies highlight the risks of relying too heavily on A.I. in patient communication. One recent study found that seven of 116 A.I.-generated messages contained fabrications, or “hallucinations,” in which the system confidently produced false information. Another analysis found that, had they been left unedited, responses from GPT-4 (the underlying technology) to hypothetical patient questions would have posed a risk of severe harm in about 7% of cases. Mistakes like these, if they go unnoticed, could have real consequences for patient safety.
Physicians are supposed to carefully review every message the A.I. drafts. Yet, studies suggest that automation bias remains a concern, and it’s uncertain how vigilant doctors will remain as the technology improves. Currently, fewer than one-third of A.I.-generated drafts are sent to patients without any editing, suggesting doctors are still paying close attention. But as A.I. becomes more sophisticated, there’s a worry that doctors might start to let their guard down, increasing the risk of errors slipping through.
🤖 A Doctor’s Voice or A.I. Imitation?
To maintain the personal connection patients expect from their doctors, Epic’s tool is designed to mimic a doctor’s writing style. This customization seeks to make patients feel that they are receiving personalized care. Some doctors have noted that the tool’s drafts initially lacked the personal touch they wanted to convey to their patients. In response, Epic recently expanded the tool’s ability to learn from past messages, allowing it to generate drafts that sound more like the individual doctor.
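The article doesn’t explain how the tool learns a doctor’s voice. One common way to approximate a writing style, offered here purely as an assumption, is to include a few of the physician’s earlier (de-identified) replies as examples in the prompt. A hypothetical sketch, reusing the client from the earlier example:

```python
# Hypothetical few-shot style conditioning -- an assumed technique, not Epic's documented method.
def draft_in_doctor_voice(patient_message: str, past_replies: list[str]) -> str:
    """Draft a reply that imitates the tone of the physician's previous messages."""
    examples = "\n\n".join(
        f"Example reply {i + 1}:\n{reply}" for i, reply in enumerate(past_replies)
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a reply to the patient message below, matching the tone and phrasing "
                    "of these earlier replies written by the physician:\n\n" + examples
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content
```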
While this may seem like a positive development, it raises ethical questions. When patients send messages to their doctors, they expect advice from someone who understands their medical history and personal circumstances. Many experts argue that if patients knew their message was generated by A.I., it would undermine the trust and the longstanding relationship they have with their doctors. According to one expert, patients would feel “betrayed” if they learned that their doctor was using an A.I. system to communicate.
🌍 Implications for the Future of Healthcare Communication
As A.I. becomes increasingly integrated into healthcare communication, it raises a broader question: Is this the direction we want healthcare to go? Many A.I. tools, like those used to automate administrative tasks, are meant to improve efficiency without directly affecting patient care. However, in this case, A.I. is stepping into the more personal and critical area of doctor-patient interaction. This intrusion could dilute one of the few remaining aspects of healthcare where personal communication still matters.
There’s also the issue of whether these tools will be fully trusted in the future. Researchers such as Ken Holstein at Carnegie Mellon University warn that A.I. tools can lead doctors to become over-reliant on automation. The risk is that doctors might begin to trust A.I. so much that they fail to question the accuracy of its responses, putting patient safety in jeopardy.
As healthcare systems balance the desire to reduce physician burnout with the need to maintain high-quality patient care, the role of A.I. in medical communication will continue to evolve. Some hospital leaders, like those at Stanford Health Care, have taken “managed risks” by removing strict programming constraints, allowing A.I. to provide more clinical input. Others remain cautious, ensuring that A.I. is primarily used for non-clinical tasks.
Ultimately, the use of A.I. in drafting messages between doctors and patients may save time and reduce burnout, but it also poses significant risks. Ensuring that these tools are used ethically and safely, with appropriate transparency, will be crucial as A.I. becomes more prevalent in healthcare.
📝 Key Takeaways
- Widespread A.I. Adoption: Over 150 U.S. health systems are using A.I. to help doctors draft responses to patient messages via platforms like MyChart.
- Lack of Transparency: Most patients are unaware that the messages they receive may have been partially generated by A.I., sparking ethical concerns.
- Risk of Automation Bias: Doctors could over-rely on A.I., trusting its drafts too readily and allowing mistakes to pass through.
- Errors and Fabrications: Studies show that A.I.-generated messages sometimes include errors or fabricated medical information, which could harm patients.
- Limited Time Savings: While A.I. reduces cognitive burden and helps address physician burnout, it doesn’t necessarily save time or improve communication quality.
- Doctor’s Voice: A.I. systems are designed to mimic each doctor’s writing style, attempting to maintain the personal connection between doctors and patients.
- Varying Transparency Approaches: Some health systems disclose A.I. involvement in patient communication, while others avoid it, fearing loss of trust.
- Ethical Concerns: The increasing use of A.I. in doctor-patient communication raises questions about the future of personal interaction in healthcare.
- Guardrails and Risks: Despite efforts to prevent A.I. from offering clinical advice, errors continue to occur, posing safety risks.
- The Future of A.I. in Healthcare: As A.I. becomes more accurate, doctors may become less critical, potentially leading to more errors slipping through.
FAQs
1. What is MyChart, and how is A.I. used in it?
MyChart is a communication platform used by patients to send messages to their doctors about symptoms, medical concerns, or other healthcare-related queries. A.I. tools, like Epic’s “In Basket Art,” are now being used to help draft responses to these messages, pulling from a patient’s medical history and previous conversations. Doctors review and edit these drafts before sending them to patients.
2. Are patients aware when A.I. drafts responses?
In many cases, patients are not told that their doctor’s reply was partially generated by A.I. Some health systems disclose this in a note at the bottom of the message, while others choose not to, out of concern that disclosure could reduce patient trust or tempt doctors to send drafts without reviewing them properly.
3. Why are doctors using A.I. to respond to messages?
Doctors are overwhelmed by the sheer number of patient messages, especially through platforms like MyChart. A.I. is used to help reduce the cognitive and administrative burden by generating initial drafts that doctors can then edit. This is intended to free up time for other patient care tasks and reduce physician burnout.
4. Is the A.I. system used for clinical decision-making?
Although A.I. is mainly designed to assist with administrative tasks and drafting replies, there have been instances where it has provided clinical advice, sometimes incorrectly. Healthcare systems have built-in guardrails to prevent this, but errors can still occur.
5. What are the risks of using A.I. for doctor-patient communication?
The primary risks include A.I. generating incorrect or misleading medical advice, as well as doctors potentially over-relying on A.I. drafts due to automation bias. Errors, like providing false vaccination records, have already been reported in some instances.
6. How accurate is the A.I. in drafting medical responses?
While the A.I. tool is designed to draw on medical records and previous patient messages, studies show that it occasionally makes errors. One study found that about 7% of unedited A.I.-generated responses would have posed a risk of severe harm. The drafts are meant to be reviewed by doctors so these mistakes are caught before anything reaches the patient.
7. Does using A.I. really save doctors time?
A.I. does reduce cognitive burden and helps doctors manage the overwhelming number of messages. However, studies show that it doesn’t necessarily save significant time, as physicians still need to carefully review and edit the drafts before sending them to patients.
8. How do health systems manage the use of A.I. for patient communication?
Each health system sets its own policies for A.I. use. Some are transparent with patients about A.I. involvement, while others refrain from disclosing it. They also decide how much freedom the A.I. has to draft messages, with some systems allowing it to provide more clinical input, while others restrict its use to administrative responses.
9. Can A.I. drafts fully replace a doctor’s input?
No, A.I. is intended to assist doctors, not replace them. Doctors still need to review and modify A.I.-generated drafts to ensure accuracy and appropriateness. The technology is not perfect and requires human oversight to prevent errors.
10. What ethical concerns are associated with A.I. in healthcare communication?
The primary ethical concerns involve transparency, accuracy, and trust. Many believe that patients should be informed if their doctor’s message was drafted by A.I. There are also concerns about the role of A.I. in replacing human interaction in healthcare, and whether it is appropriate to automate sensitive doctor-patient communication.
