Participatory feedback session for the Centers for Medicare & Medicaid Services (CMS) Quality Rating System

Client: Policy research organization | Role: Workshop & synthesis | Year: 2018

Project Abstract

The challenge: The Affordable Care Act requires that the Department of Health and Human Services develop a Quality Rating System (QRS) to help citizens, plans, and public officials compare health plan quality. The project team developed and facilitated a participatory session at the 2018 Medicaid Managed Care Congress to gather feedback from these stakeholders and understand how the new QRS could best inform quality decisions, as well as where it might fall short.

The solution: The facilitators guided participants through five small-group activities to prompt both tactical and conceptual feedback. A combination of worksheets and moderated discussion allowed us to capture both in-the-moment reflections and detailed written information to analyze after the session.

Outcomes: Session findings included a list of features the QRS needs in order to earn broad stakeholder buy-in on its usefulness; these findings will inform the design of future versions of the QRS.

Project Approach

Timeline: 9 weeks

Project team: Organizational designer, visual designer, user researcher, experience designer

Team contribution: I helped design the activity worksheets and other session artifacts, led research synthesis, and prepared and presented findings to the client.

First, determine how to structure the session to get meaningful responses.

Because of conference constraints, we did not know in advance how many participants would attend the session. Our first challenge was to structure the session flexibly enough to accommodate an unknown number of participants while preserving enough depth in responses to extract meaningful themes. We settled on a station-based format to keep the participant-to-facilitator ratio manageable and to allow participants with only limited time to contribute.

Activities included:

Love Letter/Breakup Letter: Participants write either a love letter or a breakup letter to their current quality improvement measures. This format elicits emotional responses grounded in past experience, rather than a simple list of desirable or undesirable features that may not fully capture that experience.

Rose, Bud, Thorn: Participants record strengths (roses), opportunities for improvement (buds), and weaknesses (thorns), ensuring that their responses reflect both positive and critical feedback.

QRS Feedback: Participants respond to specific features of existing quality rating systems, allowing them to give detailed, feature-level feedback based on their anticipated use cases.

Finish the Story…What Comes Next?: Participants fill in the blank of a visual narrative to show how they would use the QRS in their role. Our goal was to understand what use cases were most salient for participants.

Then, synthesize responses and identify themes.

Facilitators took notes during the session and collected participants’ worksheets. I synthesized the notes and worksheets and identified themes and patterns across stakeholder groups.

High-level themes included:

The importance of context: The QRS should vary quality benchmarks based on location and population, and account for trends in plan performance over time.

Perception that clinical measures were over-emphasized: Quality ratings should account for quality of life in addition to clinical measures.

Value of information sharing and feedback: There should be opportunities to share best practices from comparable, highly-rated plans.

Data visualization: A more visual interface would make it easier to meaningfully compare plans.