We often look to third-party rankings and ratings-based platforms to help guide our decision-making. What are the best restaurants in Manhattan, or what are the top colleges in the U.S.? In each of these cases, there are probably a few list curators that come to mind – including one that continues to draw scrutiny, in large part due to biased, self-reported data collection practices in which falsified or inaccurate data are supplied by the very organizations vying for top spots.
Healthcare, to the detriment of consumers and healthcare leaders, is unfortunately no exception.
Consumers and health systems alike tend to treat the U.S. News & World Report’s annual Best Hospitals Honor Roll list as the preeminent ranking and singular source of truth. Some health systems use the list as a de facto competitive guide to help develop forward-looking market strategies. And while many use the results as a key marketing tool to attract both consumers and healthcare workers, the list often serves neither group well, because many excellent hospitals of comparable quality are left off it.
Instead of leaning on prominent industry rankings, with their arbitrary, subjective focus on perceived prestige, health systems must shift to a data-driven and mathematically sound approach to determining and benchmarking quality. Only when equipped with targeted, objective information – and thus the knowledge of what exactly makes similar hospitals successful – can executives make meaningful comparisons and better-informed decisions about their policies, service lines and other important business strategies (selection of business lines, partners and patient populations to pursue; physician and resource allocation; where to open or build locations; and so on).
Only then does a “best of” designation mean anything.
Benchmarking methodologies leave much to be desired
Optics and perceived “prestige” value aside, the U.S. News & World Report’s Best Hospitals Honor Roll list has already drawn criticism from both academic researchers and clinicians. During a recent Freakonomics, M.D. podcast, Dr. Karen Joynt Maddox, a cardiologist and researcher at Washington University in St. Louis, noted that her trust in the U.S. News & World Report’s Best Hospitals Honor Roll list was a “six out of 10.” One deficiency in the list’s methodology, she said, was how large a role health system reputation plays in determining the final rankings, which is fed, in large part, by a “black box” survey of self-reported data.
Maddox is uniquely familiar with the methodology behind the U.S. News & World Report Best Hospitals list, and that of others well-known in the industry, like the Centers for Medicare and Medicaid Services’ (CMS’) Hospital Compare Overall Star Ratings, Healthgrades’ Top Hospitals, and the Leapfrog Safety Grade and Top Hospitals lists. In 2019, Maddox and other quality-focused clinicians and researchers authored a NEJM Catalyst piece – “Rating the Raters” – which dug into the methodology, pros and cons of each of the benchmarking lists. The authors identified five prevalent issues across the lists, including limited data, a lack of data audits, and varying methods for compiling and weighting measures.
However, beyond just those concerns, the current ratings and rankings lists also lack comparative elements, which leads hospitals, health systems, and other stakeholders to draw arbitrary and incomplete parallels between a particular hospital and some of the nation’s “top” hospitals. Some comparisons focus on singular aspects of hospital performance and effectiveness, such as outcomes (e.g., Healthgrades) or patient safety (e.g., Leapfrog Group), which may suffice for a ranked list or a letter grade, but not for comparing hospitals against one another.
The rankings also tend to offer contradictory and competing results. For example, while the Cleveland Clinic is ranked fourth by U.S. News & World Report, receives five stars from CMS, and rates among the top 50 U.S. hospitals according to Healthgrades, it has a “B” Hospital Safety Grade from the Leapfrog Group, ranks 562nd on the Lown Institute’s list of the nation’s most socially responsible hospitals, and is not included in Merative’s Top 100 Hospitals list.
What matters is “how”: Lifting the hood for defensible, informed decision-making
Human nature and society have made us accustomed to wanting to know who or what is “best.” But when you’re talking about something as complicated and advanced as healthcare delivery, the most important thing to understand is “how”: How do health systems in the same area compare on a certain measure like care quality? How do various systems compare and rank based on outcomes in a certain specialty or for a certain procedure? And what do the data alone say?
If you want an objective, data-driven way to compare hospitals’ quality and similarity, then perception and historical prestige should be thrown out the window – letting the data points, math and processing power do the rest.
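To make that concrete, here is a minimal sketch of what a purely data-driven comparison could look like. The hospital names and measure values below are hypothetical, not drawn from any real data source: each hospital is represented as a vector of quality measures, the measures are standardized so no single scale dominates, and cosine similarity scores how alike two hospitals are – with no reputation input anywhere in the calculation.

```python
import math

# Hypothetical hospitals, each described by three illustrative measures:
# (30-day readmission rate %, mortality rate %, patient-experience score).
hospitals = {
    "Hospital A": [15.2, 2.1, 88.0],
    "Hospital B": [14.8, 2.3, 86.5],
    "Hospital C": [19.5, 3.4, 71.0],
}

def standardize(vectors):
    """Z-score each measure across hospitals so units and scales don't
    skew the comparison (a percentage vs. a 0-100 survey score)."""
    cols = list(zip(*vectors.values()))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return {name: [(x - m) / s for x, m, s in zip(v, means, stds)]
            for name, v in vectors.items()}

def cosine_similarity(u, v):
    """Similarity of two measure vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

z = standardize(hospitals)
# A and B, whose measures track closely, score near 1.0; A and C,
# whose measures diverge, score far lower.
print(cosine_similarity(z["Hospital A"], z["Hospital B"]))
print(cosine_similarity(z["Hospital A"], z["Hospital C"]))
```

The same shape of calculation scales to the dozens of measures available in public sources like CMS Care Compare; the point is that every input is an observable number, so the resulting comparison is reproducible and auditable rather than a “black box.”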
The latter is what has historically held accurate comparison and benchmarking practices back. Many of the well-known lists were introduced decades ago, long before machine learning and computational capacity matured.
However, today we not only have the technology, but we also have plenty of widely accepted, publicly available data sources (e.g., Medicare Cost Reports, CMS Care Compare) that provide sufficient information to make empirical, data-driven decisions. Instead, continued reliance on subjective, “black box” methodologies has undermined the ability of consumers and healthcare leaders to compare healthcare organizations and make truly data-driven decisions of their own.
At the end of the day, I don’t believe that rankings – especially in the healthcare industry – will all of a sudden go away. They are powerful marketing tools, and a lot of money is invested to make them that way. They also have their place: awards are strong motivators that celebrate excellence. However, a “best of” label means nothing without contextual guardrails, and strategic decision-making should never be driven by it. For health systems, and the consumers who rely on them for care, prestige and perception shouldn’t matter – but outcomes and care quality should.