U.S. News & World Report Addresses Ophthalmologists’ Concerns About Rankings

Various misunderstandings concerning the Best Hospitals rankings addressed

By Avery Comarow, Health Rankings Editor
U.S. News & World Report

The Cleveland Clinic asked U.S. News to contribute an article to ConsultQD/ophthalmology to address various misunderstandings concerning the Best Hospitals rankings. We are happy to do so.

The misunderstandings surfaced in a national discussion with ophthalmologists convened by the Cleveland Clinic’s Cole Eye Institute. As the editor who has directed Best Hospitals since its genesis in 1990, I appreciate the opportunity to clarify. Below are responses to the specific issues raised, but I want to note that a detailed methodology report is freely available for download.

How are the ophthalmology rankings generated?

In this specialty and three others (psychiatry, rehabilitation and rheumatology), hard outcomes and other relevant performance data are some combination of elusive, unreliable or unavailable. Ranking is based on the results of our three most recent annual surveys of a random sample of board-certified specialists (200 per year), who are asked which hospitals in their specialty they believe provide the best care for the most challenging patients, setting cost and location aside. Centers named by 5 percent or more of the responding physicians are nationally ranked as Best Hospitals. The average response rate in ophthalmology over the last three years has been 39.5 percent, which is quite high for a physician survey.
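For readers who want a concrete picture of the threshold rule, here is a minimal sketch of how pooled nominations might be tallied. The function, the variable names and the simple pooling of three survey years are illustrative assumptions on my part, not the actual procedure used by U.S. News or its contractor, RTI International.

```python
# Illustrative sketch only: applying a 5 percent nomination threshold to pooled
# survey responses. Names and pooling details are assumptions, not the actual
# U.S. News / RTI International procedure.
from collections import Counter

def ranked_hospitals(nominations_by_year, threshold=0.05):
    """nominations_by_year: list of per-year lists, each entry a respondent's
    nominations (up to five hospital names)."""
    counts = Counter()
    respondents = 0
    for year in nominations_by_year:
        respondents += len(year)
        for ballot in year:
            counts.update(set(ballot))  # count each hospital once per respondent
    # Keep hospitals named by at least `threshold` of responding physicians,
    # listed from most-named to least-named.
    return sorted(
        (name for name, n in counts.items() if n / respondents >= threshold),
        key=lambda name: -counts[name],
    )
```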

Is the sample truly random? Is there geographical weighting?

It is a geographically weighted probability sample. The source is the AMA Masterfile. Of the 200 physicians surveyed per year in each specialty, 50 are selected from each of the four census regions.
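As a rough illustration of what a sample of this shape could look like, the sketch below draws 50 physicians at random from each of the four census regions. The data structure, function and seeding are hypothetical stand-ins for the actual AMA Masterfile selection process.

```python
# Illustrative sketch of a geographically stratified sample: 50 physicians drawn
# at random from each of the four census regions. The inputs here are assumptions,
# not the actual AMA Masterfile workflow.
import random

CENSUS_REGIONS = ("Northeast", "Midwest", "South", "West")

def draw_sample(specialists_by_region, per_region=50, seed=None):
    """specialists_by_region: dict mapping region name -> list of physician IDs."""
    rng = random.Random(seed)
    sample = []
    for region in CENSUS_REGIONS:
        sample.extend(rng.sample(specialists_by_region[region], per_region))
    return sample
```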

Who actually receives the survey?

The survey goes to individual physicians. Hospitals do not know whether or which staff or privileged physicians are surveyed unless the doctors tell them. Also, it is not in the form of a list of centers to check off. It simply has spaces for entering up to five hospitals. (If a name is not in the U.S. News database, our contractor, RTI International, crosswalks the name to the appropriate entry.)

Do hospitals pay U.S. News to be placed higher on the list?

Emphatically, no. There is nothing that a hospital can arrange with U.S. News to get a bump of any kind. There has never been and will never be any way for a hospital to pay to play or improve its standing.

In ophthalmology – in all of the specialties, for that matter – aren’t the rankings just a popularity contest?

I’ve addressed this question previously, most recently last year in the Wall Street Journal. The following is excerpted from my published reply to that publication and was echoed in a concurrent Second Opinion blog posting:

“We believe that responsible specialists plug into extensive networks of other specialists in seeking the best care for the most challenging patients wherever it might be located. The late Bernadine Healy, a cardiologist and director of the National Institutes of Health [and dean of Ohio State’s medical school] before coming to U.S. News as health editor, used to call the physician survey a form of peer review.”

In the 12 specialties that use hard data in addition to reputation, you can see at a glance that many of the ranked hospitals have a very small reputational score or none at all. Reputation counts, but except for a small number of medical centers, the Cleveland Clinic among them, it is not a key factor.

However, we have always wanted to reduce the role of reputation in favor of metrics that directly reflect quality of care. The next round of Best Hospitals rankings, which will appear in July, will reduce reputation’s weight in the final score for the 12 data-oriented specialties from the current 32.5 percent to 27.5 percent. The collective weight assigned to patient safety metrics will rise from the current 5 percent to 10 percent.
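To make the arithmetic of that shift concrete, here is an illustrative calculation in which everything other than reputation and patient safety is lumped into a single “other measures” component. The component names, the residual weight and the example scores are hypothetical, chosen only so the weights sum to 100 percent; they are not the published methodology.

```python
# Illustrative arithmetic only: how shifting weight from reputation (32.5% -> 27.5%)
# to patient safety (5% -> 10%) changes an overall score. "other_measures" and the
# example scores are hypothetical placeholders, not actual methodology components.
OLD_WEIGHTS = {"reputation": 0.325, "patient_safety": 0.05, "other_measures": 0.625}
NEW_WEIGHTS = {"reputation": 0.275, "patient_safety": 0.10, "other_measures": 0.625}

def overall_score(component_scores, weights):
    """component_scores: dict of 0-100 scores for each weighted component."""
    return sum(weights[k] * component_scores[k] for k in weights)

example = {"reputation": 40.0, "patient_safety": 70.0, "other_measures": 60.0}
print(overall_score(example, OLD_WEIGHTS))  # 54.0 under the old weights
print(overall_score(example, NEW_WEIGHTS))  # 55.5 under the new weights
```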

We announced the coming change in another Second Opinion post. Reputation is the sole determining factor in ophthalmology and the other three reputationally driven specialties, so those rankings will be unaffected.

A second change involving reputation is the expansion of this year’s physician survey to include members of the Doximity physician network. Announced on our web page two weeks ago, it has generated considerable interest and some concern, which we plan to address shortly in a follow-up post. Watch the Second Opinion page for the update.

We would much prefer to evaluate hospital performance in every specialty with the help of hard data. If comparable, meaningful, robust data are available that would allow us to do so for ophthalmology, we want to know – and if we can gain access to such data, I promise that we will use them.