Hospital Report Cards Have No Impact on Surgery Outcomes
By Sondra Forsyth

If you're an older person having a major operation these days, it is very likely that your hospital is receiving a "report card" on its performance. These reports are designed to prompt hospitals to improve in areas where they perform poorly. Unfortunately, those "report cards" do not seem to be making things better for patients.

That is the finding of a study published in February 2015 in JAMA by a team from the University of Michigan. According to a release from the university, the research turned up no differences in surgical safety between 263 hospitals taking part in a major national quality effort and 526 similar hospitals that weren't involved. The study analyzed data from 1,226,000 seniors enrolled in Medicare who had one of 11 major operations at those hospitals over a decade.

The release explains that the initiative, called the American College of Surgeons National Surgical Quality Improvement Program, or ACS-NSQIP, has been run since the early 2000s. Trained nurses at participating hospitals carefully record data about every operation and send it to a secure central database. The ACS analyzes the data from all ACS-NSQIP hospitals and shares quality reports, allowing hospitals and doctors to see how their overall performance stacks up against others. Yet this quality reporting alone is not enough to accelerate the pace of improvement in surgical safety or to generate cost savings.

Even so, the authors say those results don't mean such efforts should stop or aren't worthwhile. Still, the study does point to the need for better efforts to ensure hospitals put the ACS-NSQIP data to good use and work with other hospitals in their region to develop and share "best practices" that lead to improvements across hospitals. The ACS and other large health care quality organizations, such as the Michigan Surgical Quality Collaborative, have recently begun to emphasize the importance of collaboration.

The release quotes lead author Nicholas Osborne, M.D., M.S., a vascular surgeon at the U-M Health System's Frankel Cardiovascular Center, as saying, "Although ACS-NSQIP hospitals are improving over time, so are other non-participating hospitals. Our study suggests that the ACS-NSQIP is a good start, but that reporting data back to hospitals is not enough. The 'drilling down' that is needed to improve quality using these reports is better suited for regional collaboratives."

Osborne notes that the analysis is the first to use a control group of hospitals to study the impact of ACS-NSQIP participation, and that the team matched each ACS-NSQIP hospital with not one but two control hospitals. Patients treated at the two types of hospitals were generally similar, though ACS-NSQIP hospitals were larger, did more operations, and were more likely to be nonprofits or teaching hospitals.

The 11 types of operations analyzed were esophagectomy, pancreatic resection, colon resection, gastrectomy, liver resection, ventral hernia repair, cholecystectomy, appendectomy, abdominal aortic aneurysm repair, lower extremity bypass, and carotid endarterectomy.

The lack of an effect from ACS-NSQIP participation could be due to many factors, the authors say. Hospitals may not have used the reports to improve care, or quality improvement efforts by hospitals using their data may have fallen short of affecting the four outcomes the study evaluated.
Many hospitals may not have the infrastructure needed to develop effective strategies to improve care. In addition, outside factors, such as reimbursement-driven efforts to improve safety, improvements in care across all hospitals, or selective referral of patients to high-volume hospitals, could also have played a role in improving safety at all hospitals.

"Knowing where you perform poorly is the important first step," says Osborne. "But the next leap from measuring outcomes to improving outcomes is much more difficult."

Adds senior author Justin Dimick, M.D., MPH, "Better approaches for engaging surgeons, better systems for supporting them in change efforts, and better tools for helping them re-engineer care are clearly needed. Future national and regional quality improvement initiatives must be aimed at not only providing feedback to participants, but also providing an infrastructure for implementing change."

How the study was done

Using Medicare data, the researchers looked at how well hospitals did at keeping patients ages 65 to 99 from dying within 30 days of their operation, suffering a major complication that kept them in the hospital longer, having to have a repeat operation, or needing to be readmitted to the hospital in their first month after going home. They also looked at the cost of patients' care, using a formula to account for general differences in hospital payments nationwide.

They looked at the three years before hospitals started participating in ACS-NSQIP and at least two years following enrollment. Then they compared the ACS-NSQIP hospitals' performance with that of non-ACS-NSQIP hospitals during the same period.

The authors also controlled for many other factors that could have influenced safety and outcomes: other conditions the patients had, and characteristics of the hospital where they were treated, such as for-profit status, geographic region, bed size, teaching hospital status, and urban location.

No matter how they sliced the data, the result was the same: there was no improvement in any of the four measures at ACS-NSQIP hospitals compared with similar non-ACS-NSQIP hospitals. For instance, before ACS-NSQIP hospitals entered the program, 4.9 percent of their senior patients having these 11 operations died within 30 days of their operation, compared with 5 percent of those at non-participating hospitals. About one in 10 patients suffered a complication, about 13 percent went back to the hospital within 30 days, and 0.5 percent had to have a second operation. Rates were slightly higher at the non-ACS-NSQIP hospitals analyzed in the study.

After three years of participation in ACS-NSQIP, the rates of all four measures had dropped at NSQIP hospitals, but they had also dropped at the other hospitals. When the researchers adjusted for improvement across all hospitals over time, there was no statistical difference between patients treated at hospitals taking part in the ACS-NSQIP and those treated at comparison hospitals.

The cost of the patients' care, after adjustment, was also similar, including payments for the initial hospital stay and payments for additional stays and extraordinary "outlier" cases.

Dimick, Osborne, and co-authors Andrew Ryan, Ph.D., and Jyothi Thumma, MPH, are members of the U-M Center for Healthcare Outcomes & Policy. Dimick and Osborne are members of the U-M Institute for Healthcare Policy and Innovation. Osborne is a member of the VA CCMR, and Dimick and Ryan have appointments in the U-M School of Public Health.
Co-author Lauren Nicholas, Ph.D., is at the Johns Hopkins University Bloomberg School of Public Health. Dimick is a consultant for and has an equity interest in ArborMetrix, Inc., which provides software and analytics for measuring hospital quality and efficiency.

The U-M Health System participated in NSQIP for multiple years, though it no longer takes part. It does take part in the MSQC along with 66 other Michigan hospitals.