by Joanne Lynn
The Centers for Medicare and Medicaid Services (CMS) has quietly put out two evaluations of the readmissions work, and both documents are remarkable for their failure to evaluate the programs fairly or to provide insights as to what works in what circumstances.
The Community-Based Care Transitions Program (CCTP) pays community-based organizations (often Area Agencies on Aging) to work with hospitals to improve transitions from hospital to home. The first evaluation, covering the 48 programs that started before 2012 [http://innovation.cms.gov/Files/reports/CCTP-AnnualRpt1.pdf], found that just four of them had achieved statistically significant reductions in the ratio of readmissions to discharges from the participating hospitals.
The readmissions/discharges metric that CMS and its evaluators use for categorizing success or failure is seriously flawed. CMS has known this for a long time: In 2009, that metric had to be changed during the Quality Improvement Organization (QIO) work in 14 communities because the numerator and denominator were declining together [http://jama.jamanetwork.com/article.aspx?articleid=1558278&resultClick=3]. We recently published a review of the conceptual issues [http://medicaring.org/2014/12/16/protecting-hospitals/] and a data-driven example of the problem [http://medicaring.org/2014/12/08/lynn-evidence/]. There is no easy switch to population-based metrics in programs that were never set up to be population-based. Indeed, much of the problem with the measures probably has roots in national leadership still conceptualizing the transitions work as being dominantly the responsibility of hospitals and their staffs, while people living with serious chronic conditions need a more comprehensive, community-anchored, population-based approach. Even so, responsible evaluation would require, at the very least, a close examination of actual numerators and denominators in order to interpret the simplistic and routinely misleading ratio.
There are bound to be terrific success stories in the sites that did not “win” according to the malfunctioning readmissions/discharges metric. Some sites probably reduced their denominator, hospital discharges, at the same rate as (or a higher rate than) their 30-day readmissions. The four sites that the CMS report considered successes might well have included some sites where a shift in the local market increased their hospital utilization with lower-risk patients. Moreover, the 30-day limit on tallying readmissions does not mark a magical divide: Nearly everything that works to help in the first 30 days will continue to have a positive effect for much longer, and better support arrangements and care planning in the community will end up reducing index admissions.
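The numerator-and-denominator problem is easy to see with invented round numbers: if better community support cuts index admissions and readmissions proportionally, the ratio does not move at all, even though real harm has been avoided. A minimal sketch, with all figures hypothetical:

```python
# Why the readmissions/discharges ratio can hide real improvement:
# when good community support reduces BOTH index admissions and
# readmissions, numerator and denominator fall together and the
# ratio barely moves. All figures below are hypothetical.

def readmission_ratio(readmissions, discharges):
    """30-day readmissions divided by hospital discharges."""
    return readmissions / discharges

# Baseline year: 10,000 discharges, 2,000 followed by a 30-day readmission.
baseline = readmission_ratio(2_000, 10_000)             # 0.20

# Suppose better care planning cuts admissions AND readmissions by 15%.
after = readmission_ratio(2_000 * 0.85, 10_000 * 0.85)  # still 0.20

readmissions_avoided = 2_000 - 2_000 * 0.85             # 300 fewer readmissions
print(baseline, after, readmissions_avoided)
```

By this metric the hypothetical program shows no "gain" at all, despite preventing 300 readmissions and 1,500 hospitalizations overall.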
This CCTP evaluation also observed the speed of start-up and the success of efforts to achieve targeted enrollment — both interesting and potentially important process components of success, though neither is actually essential. A delay in starting might be imposed by contracting issues, business associate agreements, software development, or any of an array of challenges that do not affect long-term success. And the proposed target enrollment figures are much less important than whether the sites targeted enough high-risk patients who had opportunities to reduce risk and enrolled enough of those patients to demonstrate a difference.
Of course, the most important issue is whether the CCTP program is helping to improve transitions and to keep people living with fragile health in a more stable condition in the community, thereby reducing hospitalization. It would be easy for the evaluation to show that the supplemental services are desirable. Evaluators could test whether enrolled patients had far fewer medication errors, whether more patients and families were confident in their self-care, whether more social services were in place, and whether more medical support was available in the community. However, the current evaluation does not address these points.
Consider that the CCTP program pays “per person served.” In a very important sense, then, the program is a winner if it reduces hospital utilization enough to cover the program’s costs. For example, if one program reduces hospital utilization by 1,000 hospitalizations per year in an area where Medicare’s average hospitalization cost is $15,000, then it saves $15,000,000 per year. CMS pays community-based organizations a modest fee, around $300 per intervention patient. At those figures, the program could serve about 50 people per readmission prevented and still break even ($15,000 ÷ $300). If, as seems likely, the true ratio is more like 10 or 15 people served per readmission prevented, the return on investment is 3:1 to 5:1. This suggests that the return on investment with even modest success is wildly favorable and would be so at virtually any revised estimate of cost and effectiveness (which an evaluation could provide). So why does the evaluation not address these central issues?
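Using only the round numbers quoted above (a fee of roughly $300 per person served and roughly $15,000 per Medicare hospitalization; illustrative assumptions, not CMS figures), the break-even arithmetic can be sketched as:

```python
# Back-of-the-envelope ROI for CCTP-style payments, using the round
# numbers quoted in the text. Illustrative assumptions, not CMS data.

FEE_PER_PATIENT = 300           # approximate CCTP fee per person served ($)
COST_PER_READMISSION = 15_000   # assumed average Medicare hospitalization cost ($)

# Break-even: how many patients can be served per readmission averted
# before the fees consume the savings.
break_even = COST_PER_READMISSION / FEE_PER_PATIENT     # 50.0 patients

def roi(patients_per_readmission_averted):
    """Savings divided by program cost, per readmission averted."""
    return COST_PER_READMISSION / (patients_per_readmission_averted * FEE_PER_PATIENT)

print(break_even)   # 50.0
print(roi(15))      # about 3.3 : 1
print(roi(10))      # 5.0 : 1
```

Even if the assumed cost per hospitalization were halved, the break-even threshold would still be 25 patients served per readmission averted, which is why the favorable conclusion is robust to revised estimates.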
Further, there is no reason why a 20% reduction in the now thoroughly discredited readmissions/discharges ratio is the best target. A more informative target would clearly focus on providing a reliable, well-characterized set of services that work to the advantage of patients and families and that also reduces total costs. The CCTP program and other efforts to improve care transitions have already met that criterion, so the question now needs to be, “What are the next prudent steps for health care managers and policymakers?”
To answer that question, it makes sense to look to the other recently released evaluation of readmissions work, developed for the Partnership for Patients [http://innovation.cms.gov/Files/reports/PFPEvalProgRpt.pdf]. This report claims that readmissions reductions may have saved Medicare $2.8 billion (out of $3.1 billion saved by all of the hospital-acquired conditions reductions; Table 3 in the report), but this presentation attributes improvement only to the Partnership for Patients (PfP) and its Hospital Engagement Networks (HENs). The CCTP, the hospital penalties under the Hospital Readmissions Reduction Program, and the QIOs’ extensive work in supporting community efforts are not mentioned, let alone cited as possible parts of the causal chain. The metrics supporting the claim of gains (pages 3-2 and 3-3) are themselves inconsistent: One figure uses 30-day readmissions/discharges in Medicare, one uses the QIOs’ readmissions/1,000 beneficiaries per quarter (but does not report any statistical tests), and one uses the hospital-reported 30-day all-cause, all-payer readmissions/discharges. The report aims to have the reader believe that the PfP and the HENs generated a number of positive results, including saving money. On closer inspection, however, it becomes clear that the authors are counting the reductions in admissions as well as the reductions in readmissions in estimating the savings, a tacit admission that at least some people at CMS recognize that good practices in transitions and in longer-term community support reduce both the numerator and the denominator in the readmissions/discharges metric.
There are other evaluations to come, with some presumably already in the works. Many site visits have been made and much data are available. Let’s hope that the next round of evaluation reports starts to answer serious policy questions about how to proceed. Now that we have come this far, what combinations of services should become standard and expected by beneficiaries and family caregivers, and which ones tend to be useful only in particular settings? Which specific interventions should be used for targeted patients, and which should become part of the ordinary operations of high-quality health care delivery? What have we learned about care planning, interoperability, feedback loops, community action, and useful measures?
Recently, the Patient-Centered Outcomes Research Institute (PCORI) announced a multimillion-dollar, multiyear contract focusing on care transitions. Maybe that work will begin to identify better measures of quality care during transitions and lead to better support of people with fragile health in the community. Perhaps that work and future evaluations could synthesize data and reports from a wide array of sites and efforts and provide guidance for future management and policy actions. CMS and the Office of the National Coordinator for Health Information Technology (ONC) should be working toward better metrics based on information in electronic records, and the IMPACT Act [http://medicaring.org/2014/10/28/impacts-impact/] will generate better databases to work from.
But these first forays into evaluation of the readmissions work are quite disappointing. There are contractors and participants who know much more, and there are evaluation methods that would be much more revealing. The work on care transitions has been a powerful catalyst toward more comprehensive care planning and service support for people living with fragile health. It is time to push CMS and PCORI and any other funding agency, contractor, or grantee to do the work that informs managers and policymakers about what to do next, given what we’ve learned in the work so far.
Want to Know More?
Evaluation of the Community-Based Care Transitions Program [http://innovation.cms.gov/Files/reports/CCTP-AnnualRpt1.pdf]
Project Evaluation Activity in Support of Partnership for Patients: Task 2 Evaluation Progress Report [http://innovation.cms.gov/Files/reports/PFPEvalProgRpt.pdf]
For an essay by Stephen Jencks giving more of the context see:
The Evidence That the Readmissions Rate (Readmissions/Hospital Discharges) Is Malfunctioning as a Performance Measure
Hartford Foundation blog: Care Transitions Evaluation Is Premature and Confusing