Using ExpertLens for the RAND/PPMD Patient-Centeredness Method (RPM)

What is the RAND/PPMD Patient-Centeredness Method?

The RPM is a new modified-Delphi approach for engaging patients and their representatives in the process of clinical guideline development. It helps guideline developers determine the patient-centeredness of draft recommendations that experts develop based on a systematic review of existing evidence and their clinical expertise. Input from patients and their representatives is solicited iteratively before the guidelines are finalized, to ensure that recommendations are consistent with their care preferences and needs. Doing so helps implement two criteria of the GRADE Evidence to Decision Framework: outcome importance and intervention acceptability.

Designed to iteratively solicit input from large and diverse groups of patients and their representatives, the RPM is best implemented online to ensure convenience and efficiency of data collection and analysis. A data collection platform with survey, automated data analysis, and discussion functionalities — such as ExpertLens™ — is needed for the online implementation.

How is the RPM Implemented with ExpertLens?

Round 0: Idea Generation

Patients and their representatives answer a series of open-ended and closed-ended questions and engage in an online discussion about reasons for, barriers to, and facilitators of seeking care for a given medical problem. This round helps generate information about care preferences, needs, and values that may not be available in the published literature. Round 0 results are shared with panelists in subsequent rounds to help them rate the patient-centeredness of draft recommendations.

Round 1: Assessment

Participants rate and comment on draft guideline recommendations, which are presented in an easy-to-understand format and include a brief description of the clinical rationale, the process of following the recommendation, and any relevant additional information, including treatment burden and side effects.

Using 9-point Likert scales, participants rate the patient-centeredness of each draft recommendation, operationalized as its importance and acceptability.

  • Importance is defined as the extent to which a recommendation is likely to be consistent with the preferences, needs, and values of patients with a given condition in general.
  • Acceptability is defined as the extent to which the process of following a given recommendation is likely to be consistent with available resources (e.g., time and finances) and with the ethical standards of patients with a given condition.
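The two criteria above can be captured as a simple data record. The sketch below is illustrative only (the class and field names are assumptions, not part of ExpertLens); it shows one way to store a participant's Round 1 ratings and enforce the 1–9 scale:

```python
from dataclasses import dataclass

# Hypothetical record for one participant's Round 1 ratings of a single
# draft recommendation. Names are illustrative, not an ExpertLens API.
@dataclass
class RecommendationRating:
    participant_id: str        # e.g., "patient01"
    recommendation_id: str
    importance: int            # 1-9 Likert rating
    acceptability: int         # 1-9 Likert rating
    rationale: str = ""        # open-text explanation of the ratings

    def __post_init__(self):
        # Both criteria are rated on the same 9-point scale.
        for name in ("importance", "acceptability"):
            value = getattr(self, name)
            if not 1 <= value <= 9:
                raise ValueError(f"{name} must be on the 1-9 scale, got {value}")
```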

Participants also explain the rationales behind their ratings by identifying the factors that most affected their responses, which they enter into the open-text boxes presented below each rating question.

To help participants comment about patients in general, it is important to give them a description of patient preferences and needs that is based on the literature review and/or Round 0 results. Such information may also include a description of common barriers and facilitators to seeking care for a given condition.

Round 2: Feedback and Discussion

Participants review and discuss Round 1 results. Rating data are analyzed to determine whether group consensus exists; the analytic approach described in the RAND/UCLA Appropriateness Method User's Manual could be used to do so. Participants review charts summarizing how their own response compares to that of the group. To ensure that the results of these analyses are easy to understand, color-coding of group decisions and hover-over text are used to show whether consensus has been reached and to explain the statistical terms.
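As a rough illustration of the consensus analysis, the sketch below applies a simplified RAND/UCLA-style rule: flag disagreement when substantial shares of ratings fall in both extreme tertiles, and otherwise classify by the tertile of the median. The exact thresholds and labels are assumptions; the User's Manual describes the full set of rules.

```python
from statistics import median

def classify_consensus(ratings):
    """Classify a list of 1-9 ratings for one recommendation.

    Simplified rule (an assumption, not the full RAND/UCLA procedure):
    'disagreement' if at least a third of ratings fall in each extreme
    tertile; otherwise the median's tertile gives the group decision.
    """
    n = len(ratings)
    low = sum(1 for r in ratings if r <= 3)
    high = sum(1 for r in ratings if r >= 7)
    if low >= n / 3 and high >= n / 3:
        return "disagreement"
    med = median(ratings)
    if med >= 7:
        return "consensus: important/acceptable"
    if med <= 3:
        return "consensus: not important/acceptable"
    return "uncertain"
```

A panel rating a recommendation mostly 7–9 would be classified as having reached consensus that it is important or acceptable, while a panel split between the 1–3 and 7–9 tertiles would be flagged for further discussion.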

In addition, rationale comments are summarized thematically to explain how and why patients and their representatives rated a particular recommendation. To ensure consistency between the analysis of rating data and rationale comments, it is helpful to summarize qualitative data by rating tertile. Analyzing the rationale comments of participants who rated a given recommendation as 7, 8, or 9 provides a summary of why participants felt the recommendation was important or acceptable. Displaying these results in tabular form, next to the charts describing the rating results, is useful.
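The tertile-based grouping described above can be sketched as follows. This is a minimal illustration (the input format is an assumption); it simply buckets each rationale comment by the tertile of the rating it accompanies, ready for thematic summarization and tabular display:

```python
def group_rationales_by_tertile(records):
    """Group rationale comments by the tertile of their rating.

    `records` is an iterable of (rating, comment) pairs, where rating is
    on the 1-9 scale; the pair layout is illustrative. Empty comments
    are skipped. Returns a dict keyed by tertile label.
    """
    tertiles = {"1-3": [], "4-6": [], "7-9": []}
    for rating, comment in records:
        if rating <= 3:
            key = "1-3"
        elif rating <= 6:
            key = "4-6"
        else:
            key = "7-9"
        if comment:
            tertiles[key].append(comment)
    return tertiles
```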

Finally, participants discuss Round 1 results using asynchronous, (partially) anonymous, and moderated discussion boards with a threaded structure. The asynchronous nature of discussions makes participation more convenient and facilitates engagement across time zones. Using alphanumeric participant IDs (e.g., patient01 or caregiver02) that reveal participants' stakeholder group and assign a unique number to each participant helps ensure participant anonymity, while making it easy to identify all comments from a particular individual. Discussions are moderated to promote the exchange of ideas and clarification of rationale comments. To facilitate discussion board navigation, it is useful to assign discussion threads to a particular recommendation, rating criterion, or rationale comment.
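An ID scheme like the one described (stakeholder prefix plus unique number) could be generated as in the sketch below. The numbering convention here, a separate counter per stakeholder group, is an assumption; a single panel-wide counter would work equally well:

```python
from collections import defaultdict
from itertools import count

def make_id_generator():
    """Return a function that assigns sequential, stakeholder-prefixed
    IDs such as patient01 or caregiver01. Per-group numbering is an
    illustrative assumption, not the documented ExpertLens scheme.
    """
    counters = defaultdict(lambda: count(1))  # one counter per group

    def next_id(group):
        return f"{group}{next(counters[group]):02d}"

    return next_id
```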

Round 3: Reassessment

Participants review the Round 2 discussion and modify their original answers if they wish to do so. Participants are encouraged to explain why their responses changed or did not change, using open-text boxes displayed after each rating question. The wording of recommendations, or any text that explains them, may be modified based on Round 2 results; such changes are clearly identified.