A New Way to Capture the Patient Experience

A physician behind star ratings. Photo by Natali_Mis/Getty Images

May 2, 2019

Steven Martino studies health care quality at RAND, and he knows the power of a good story. Those two facts help explain a project he's been working on that could make the American health care system a little more responsive to the patients it serves.

Martino and his research partners have developed a more effective and reliable way for patients to provide narrative feedback about the care they receive. They were inspired by consumer websites like Amazon, where people post comments by the thousands on everything from books to blenders.

Done right, they realized, short but detailed reviews could help health care providers better understand the patient experience, the good and the bad, in the patient's own words.

Putting Science Behind the Anecdotes

Dozens of websites already allow patients to rate and review their doctors. The problem is that there's no way to know who's writing those reviews or why, or whether their experiences are at all typical. One study even identified several reviews that appeared to have been written by the doctors themselves. (“Every anonymous review I've written on myself has been glowing,” said one.)

But users clearly value those reviews. Nearly a quarter of the people in one survey had run their doctor's name through one of those sites before making an appointment. Martino and colleagues, from RAND and collaborating institutions, decided to build a better review—to put some science behind the anecdotes.

“The power of narratives is that they convey emotion, they engage people at an emotional level,” Martino said. “They're vivid, they convey detailed information, and they stick with people.”

Martino's team developed a set of questions to guide patients through providing a short but detailed review of their care. They asked what patients expected from their doctor, what happened during their appointment, what they liked and disliked about the experience, and how they related to their doctor. It took around five minutes to complete.

And it worked. The researchers gave the questions to hundreds of patients, then followed up with in-depth interviews to see how well the written answers captured the full experience patients described in conversation. They found that those five-minute reviews covered a surprising amount of ground. They were especially good at conveying how patients felt about their doctor's communication style, their interactions with the front desk, and the coordination of their care.

Reading Reviews Can Lead Patients to Worse Decisions

“People have been gathering reviews for years, but no one was testing how well their methods work,” said Mark Schlesinger, a professor of public health at Yale University and member of the research team. “No one had asked the question, 'Are we doing this well?'”

The researchers envisioned a consumer website like Amazon that would show people narrative reviews alongside numerical information about doctor quality.

But here, they hit a problem. When they tested how such a website would perform with patients, they found that reading reviews actually led to worse decisions.

They used a mock-up of a patient-choice website designed to mimic real-world sites, down to the colorful graphics and a snappy name, SelectMD. They could vary how much information the site provided, from top-level star ratings to a full complement of detailed ratings and written reviews. When they asked users to pick the doctor they would want to see, those who saw reviews alongside the numerical ratings picked a lower-performing doctor more than half the time.

That happened even when the website highlighted links between the numerical information and the reviews. It happened even when the site gave users a sense of how typical any given comment was. In fact, when reviews were in the picture, the only thing that saved users from making bad choices was having a navigator on hand to help them make sense of it all.

“We kept getting this effect where the narratives were outweighing the numerical information,” Martino said. “We know people are very engaged by narrative information. They weigh it to a very large extent in their decisions. But we found that if consumers pay too much attention to narrative information, they lose something important in their evaluation of these doctors, hospitals, and health care providers.”

The researchers decided to turn their attention to a different audience.

Patients' Comments Help Providers Understand How to Improve

For years, doctors and hospitals have relied on numerical data and star ratings to assess how well they're meeting patient expectations. Rigorous, reliable reviews would let them supplement those quality measures with patient voices.

The survey questions could provide the data: “In the last six months, how often did this provider spend enough time with you? Never, sometimes, usually, always.” But the reviews could add some depth, from the patient's perspective: “Even though I was the doctor's last appointment and we were already running a little late, she still took extra time to address some things I was concerned about and made sure I understood what she was doing and why.”

Those reviews, in short, could help providers better understand not just where they could improve, but how, said Rick Evans, a senior vice president and the chief experience officer for NewYork–Presbyterian, an academic health care delivery network. The network is one of two systems, along with UCLA, that have started collecting patient reviews using the researchers' five-minute protocol.

Evans calls them “de-mystifiers.”

“Put yourself in a doctor's position,” Evans said. “When you show a doctor a low rating, inevitably, the question is, 'Well, what does that mean?' These deeper comments can give them a sense of what that score means, and where they can improve.”

The researchers are now building a digital dashboard to help NewYork–Presbyterian and other providers navigate the reviews they get back and identify any areas for improvement. Data scientists at RAND are also working on algorithms that could go through the reviews and code them by keyword and topic.
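As a rough sketch of what that keyword-and-topic coding could look like, the snippet below tags each comment with broad themes based on a small keyword lexicon and rolls the results up into counts. The topic labels, keyword lists, and function names are illustrative assumptions, not the RAND team's actual algorithms.

    # Illustrative sketch only: the topics, keywords, and function names below
    # are assumptions for demonstration, not the RAND team's actual algorithms.
    from collections import defaultdict

    TOPIC_KEYWORDS = {
        "communication": ["explained", "listened", "answered my questions"],
        "front desk": ["front desk", "reception", "check-in", "scheduling"],
        "care coordination": ["referral", "follow-up", "specialist", "records"],
        "time with patient": ["rushed", "spent time", "extra time"],
    }

    def tag_review(text):
        """Return the topics whose keywords appear in a comment (case-insensitive)."""
        lowered = text.lower()
        return [topic for topic, words in TOPIC_KEYWORDS.items()
                if any(word in lowered for word in words)]

    def summarize(reviews):
        """Count how many comments touch each topic, for a simple dashboard view."""
        counts = defaultdict(int)
        for review in reviews:
            for topic in tag_review(review):
                counts[topic] += 1
        return dict(counts)

    sample = [
        "She explained everything and took extra time to answer my questions.",
        "The front desk lost my referral and never scheduled the follow-up.",
    ]
    print(summarize(sample))
    # {'communication': 1, 'time with patient': 1, 'front desk': 1, 'care coordination': 1}

A real system would likely use richer natural-language methods, but counts rolled up by theme hint at how a dashboard could point providers toward the topics patients raise most often.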

The U.S. Agency for Healthcare Research and Quality now includes the review protocol as a supplement to its flagship patient survey, the Consumer Assessment of Healthcare Providers and Systems (CAHPS).

The researchers have not given up on finding a way to keep reviews from drowning out other performance measures. “We have powerful tendencies to be unduly persuaded by evocative commentary, and those are hard to work against,” Martino said.

He sees that in himself, every time he scrolls through those one-star product reviews on Amazon. “I like to think that I do it more carefully because I've been studying this phenomenon for so long now,” he said. “But it's hard not to think that maybe I'll be the one person who the negative thing happens to. That part of your brain that should recognize the rarity of it shuts off for a moment. The comments become the story, even though I know it's not the whole story.”

Doug Irving