Our system, Hospital Quality Reporting (HQR), supports CMS and other downstream data repositories by collecting, calculating, and reporting on healthcare quality measures for inpatient and outpatient facilities. Its core functions allow users to submit quality measure data, view the outcomes of their submissions, and complete administrative tasks associated with data submissions and reporting. HQR users are staff associated with quality measures at facilities and healthcare systems, vendors who perform services for the providers, and contractors affiliated with CMS.
Our goal was to implement a Single Ease Question (SEQ) survey with which we could evaluate how easy or difficult it is to complete key tasks in HQR, such as submitting data via web forms, submitting bulk data, or viewing reports.
We collaborated with our CMS Product Lead and Chief Product Owner team, the CCSQ HCD CoE, the Paperwork Reduction Act (PRA) team at OSORA, and with the Touchpoints support team at the General Services Administration (GSA).
The steps we took were as follows:
The survey has one question: “Overall, how easy or hard did you find it to complete your task today using Hospital Quality Reporting?” (users then select a response on a 1-5 scale, from very easy to very hard).
The process took considerable time and involved many steps, some of which we only discovered along the way. We successfully passed each approval gate, but doing so required the support of many stakeholders and has been a very gradual process. We are still in the midst of custom development and have no experience yet with data analysis or reporting.
We chose Touchpoints because we wanted to implement a site-intercept user survey, and we believed that, as a government-created and supplied tool, it would make it easier to obtain approvals and complete the implementation.
So far, our key learning has been to allow ample time, and to seek key stakeholder support, for PRA and TDB approvals.
Is there anything you would have done differently? Why?
We started with a plan to evaluate key user tasks rather than overall user satisfaction, and in retrospect it might have been easier to implement the tool as a generic single-question survey rather than making it task-specific. Our developer seems to have gotten it working fairly easily (submit a web form, see the survey), but a more straightforward implementation might still have simplified initial approvals and testing.
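The task-specific intercept described above can be sketched roughly as follows. This is a minimal, hypothetical illustration: the names (TRACKED_TASKS, shouldShowSurvey, the form id) are illustrative assumptions, not the actual HQR code or the Touchpoints embed API.

```javascript
// Hypothetical sketch of a task-specific site-intercept survey trigger.
// All identifiers here are illustrative, not real HQR/Touchpoints names.

// Tasks after which we would want to intercept the user with the survey.
const TRACKED_TASKS = new Set(["web-form-submit", "bulk-data-submit", "report-view"]);

// Pure decision logic: show the survey only after a tracked task,
// and only once per session.
function shouldShowSurvey(taskId, alreadyShownThisSession) {
  return TRACKED_TASKS.has(taskId) && !alreadyShownThisSession;
}

// Browser-only wiring (guarded so the decision logic above can also
// be exercised outside a browser).
if (typeof document !== "undefined") {
  let shown = false;
  const form = document.getElementById("submission-form"); // hypothetical form id
  if (form) {
    form.addEventListener("submit", () => {
      if (shouldShowSurvey("web-form-submit", shown)) {
        shown = true;
        // The embedded Touchpoints widget would be opened here.
      }
    });
  }
}
```

Keeping the "when to show" decision in a small pure function separate from the DOM wiring makes it easy to change which tasks trigger the survey, or to switch to a generic always-on survey, without touching the event handlers.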
Do you have metrics from the CSAT survey? If so, how has your team used these to improve the customer experience?
We are still in the implementation phase, so we don’t have outcomes or metrics to report on yet.
We have to complete back-end development and test the survey for accessibility compliance.
The HCD Center of Excellence is here to support you as you participate in the CCSQ-wide goal of increasing customer satisfaction research. We can partner with you through key tasks, such as:
Use the buttons below to email us or submit a project intake through Confluence, and our team will follow up with you shortly. You can also learn more about the Customer Service Initiative.
While customer satisfaction is a general concept, a customer satisfaction (CSAT) survey is more defined and refers to a particular type of customer feedback survey. CSAT surveys allow customers to assess and provide feedback, both quantitative and qualitative, on customer service and product quality. These surveys are an ‘always on’ method of continuous data collection that allows us to measure aspects of the customer experience. They typically provide insights related to: