Are artificial intelligence (AI)–based chatbots coming for your health care job? To answer this question, RAND hosted a conversation with two physician-AI experts from Mass General Brigham, the biggest health care system in Massachusetts and the largest hospital system–based research enterprise in America. At the event in RAND's Boston office in June 2024, we asked Dr. Rebecca Mishuris, chief medical information officer and VP of Digital at Mass General Brigham, and Dr. Bernardo Bizzo, a diagnostic radiologist and a senior director for MGB's AI business, for their insights on AI's impact on the workforce. In short, they described many ways artificial intelligence is saving time at their hospitals and were not worried about machines replacing humans.
Their main points:
- AI is already in use in health care, but generative AI is enabling impactful new applications, such as ambient documentation.
- Administrative uses may be the least risky and most beneficial for health care workers experiencing burnout.
- Most jobs are expected to change, not disappear.
- Both patients and health care professionals often struggle to understand the technology.
While it may not be obvious to patients, AI is already integrated into daily health care systems, from predicting which patients are likely to develop infections to forecasting missed appointments. Other uses are not fully developed: while ChatGPT has demonstrated success on a number of medical board exams, hospitals are not outsourcing actual medical care to computers yet.
Mishuris, who is also a practicing primary care doctor, said she doesn't think any jobs are going to be eliminated as a result of her work (except perhaps medical scribes, a role that emerged with the introduction of the electronic health record). She sees some AI applications as “life-changing.” Bizzo, a diagnostic radiologist, wondered whether, if he were starting his career over, he would still choose radiology, given how exciting the new frontier of generative AI is.
How is AI affecting health care? Right now, generative AI supports medicine mainly by easing administrative burdens: automated phone answering, help paying bills, text summarization and analysis for research, and automatic documentation.
That last use, automatic documentation, has been the focus of much of Mishuris' energy because the impact can be so great. Her team is testing tools that can record, transcribe, and organize a clinical visit and produce a medical note within seconds, with no user training required. MGB is now conducting a pilot with more than 600 physicians and advanced practice providers.
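For readers who want a concrete picture, the sketch below shows what such a record-transcribe-organize pipeline might look like in code. It is purely illustrative and not MGB's actual tool: the function names, the SOAP-style note fields, and the stub implementations are all assumptions, standing in for real speech-to-text and generative-model calls.

```python
# Hypothetical sketch of an ambient-documentation pipeline (not MGB's product).
# The transcription and drafting steps are placeholders for real ASR and LLM calls.

from dataclasses import dataclass


@dataclass
class VisitNote:
    """A SOAP-style draft note assembled from a visit transcript (illustrative fields)."""
    subjective: str
    objective: str
    assessment: str
    plan: str


def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text step; a real system would call an ASR model."""
    # For this sketch, treat the "audio" as a plain-text transcript already on disk.
    with open(audio_path, encoding="utf-8") as f:
        return f.read()


def draft_note(transcript: str) -> VisitNote:
    """Stand-in for a generative-model call that organizes the transcript into a draft note."""
    # A production tool would prompt an LLM; this stub only slots the raw transcript in.
    return VisitNote(
        subjective=transcript,
        objective="[exam findings for clinician to verify]",
        assessment="[draft assessment for clinician review]",
        plan="[draft plan for clinician review]",
    )


def ambient_documentation(audio_path: str) -> VisitNote:
    """Record, transcribe, organize, and produce a draft note for the clinician to edit and sign."""
    return draft_note(transcribe(audio_path))
```

Presumably, in any real deployment the output would be a draft that the clinician reviews, edits, and signs before it enters the record, which is how the time savings described below could come without removing the physician from the loop.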
Speaking as a patient in this system, I wouldn't want an AI program listening to my appointment, even if it saves my doctor a little time. I'd worry too much about the recording, which the company currently stores for 60–90 days. Maybe my opinion would change if it meant appointments were easier to get or my doctor could spend more time with me, rather than simply letting hospitals increase their patient load.
However, I may soon not have a choice. Massachusetts requires consent for the recording under its wiretapping law, but that is not the case in many states, and in the future the recording may not be kept as long. AI analysis and assistance in writing clinical notes do not require patient consent; they are considered tools, like clinical decision aids and calculators. So it's coming, and we may not know that our doctors are using it.
But for the doctors, according to feedback Mishuris has heard so far, the ambient documentation technology is life-changing. Physicians no longer spend hours in the evening catching up on notes. One told her, “I'm going home at the end of the day with all my notes done.” Another said she used to spend her Saturday mornings on her notes, and she can now go to her kids' gymnastics meets.
Patient feedback on AI use in the doctor's office is important but harder to collect. So far patients seem curious about it, and only a handful have declined to participate. (I have yet to be asked!) Will patients be able to opt out in the future? “The jury is still out about to what extent there needs to be informed consent around the use of artificial intelligence,” Mishuris said.
MGB has also been testing ambient documentation when the doctor and patient are speaking a language other than English and when a translator is involved. In both cases, the technology still seems to work well, which may have some upsides for patient equity.
Both speakers acknowledged legitimate concerns in this era of rapid development: the risk of errors, worries about complacency in checking AI's work, and the potential for new cybersecurity risks. Of course, current computer systems have problems too; data breaches and security issues exist without AI.
At the same time, “the burnout problem is just untenable,” Mishuris said. Ambient documentation offers hope of alleviating some of that burden.
During a recent protest, the president of the California Nurses Association said, “No patient should be a guinea pig and no nurse should be replaced by a robot.” As with any new technology, fears about the potential unintended consequences of AI abound, but so far they have not been realized.
Thus, jobs may not disappear yet. Questions remain: Will AI make the medical workforce more efficient at a lower cost, enable new tasks at the same cost, or increase costs? What's more important: better care or saving money? As a society, we need to decide what the goal is; AI could be one tool for getting there.