The New York Times writes an editorial about hospital rankings based on the mortality of Medicare patients with cardiac disease, and, not surprisingly, misses the point about how to compare patient survival between hospitals.
“Famed medical institutions like Johns Hopkins, the Cleveland Clinic and Massachusetts General Hospital are lumped into the broad national average category when perhaps they deserve better (we can’t tell), and no doubt many other hospitals deserve a lesser ranking. In the next round of evaluations, the Medicare program ought to make public every institution’s mortality rates along with any caveats needed to help patients understand them.”
I’ll tell you a dirty little secret if you like. It explains why all this data is essentially going to be bunk.
Shhhhhhh.
Private hospitals select their patients based on perceived outcomes.
Now don’t tell anyone that. Certainly not the NYT editorial page. God forbid anyone should figure out how to really judge medical outcomes – by comparing patients with equivalent prognoses going into treatment – rather than by which patients hospitals choose to treat.
The result is that many good hospitals – even big medical centers like Johns Hopkins, with the best and brightest docs – end up looking worse. Why is that? They’re in cities, they often serve a population of very poor people, and public hospitals are where the train wrecks and difficult cases end up, the very cases that private hospitals refer away so their own stats stay high.
It’s a continuing problem in assessing the efficacy of medical care: metrics that are easy to mine from patient records give deceptive answers, because the real picture is far more complicated and difficult to control for adequately. The problem could be studied, but not by simply comparing outcomes in “Medicare patients” between hospitals. For any of these ratings to be believed, they’re going to have to dig deeper.
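To make the selection problem concrete, here’s a minimal sketch in Python, using made-up risk numbers rather than any real hospital data. The two simulated hospitals give identical care: a patient of a given severity has exactly the same chance of dying at either one. They differ only in case mix, the one thing the rankings don’t control for.

```python
# A minimal simulation of case-mix confounding. All numbers are
# hypothetical; nothing here comes from real hospital data.
import random

random.seed(0)

# True 30-day mortality by patient severity -- identical at both hospitals.
MORTALITY_BY_SEVERITY = {"low": 0.02, "high": 0.30}

def simulate(frac_high_severity, n_patients=100_000):
    """Return (raw mortality rate, per-severity rates) for one hospital."""
    deaths = {"low": 0, "high": 0}
    counts = {"low": 0, "high": 0}
    for _ in range(n_patients):
        severity = "high" if random.random() < frac_high_severity else "low"
        counts[severity] += 1
        if random.random() < MORTALITY_BY_SEVERITY[severity]:
            deaths[severity] += 1
    raw = sum(deaths.values()) / n_patients
    stratified = {s: deaths[s] / counts[s] for s in counts}
    return raw, stratified

# The public/academic hospital takes the train wrecks: 60% high-severity.
# The selective private hospital refers them away: 10% high-severity.
academic_raw, academic_strat = simulate(frac_high_severity=0.60)
private_raw, private_strat = simulate(frac_high_severity=0.10)

print(f"Raw mortality -- academic: {academic_raw:.1%}, private: {private_raw:.1%}")
print(f"Within 'low'  -- academic: {academic_strat['low']:.1%}, private: {private_strat['low']:.1%}")
print(f"Within 'high' -- academic: {academic_strat['high']:.1%}, private: {private_strat['high']:.1%}")
```

The raw numbers make the selective hospital look roughly four times safer (about 5% mortality versus about 19%), even though, stratum for stratum, the two hospitals are indistinguishable. Comparing within strata (patients with equivalent prognosis going into treatment) is exactly the step the published rankings skip.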
In a way, studying these metrics superficially hurts good medical care. For example, see this article in the NEJM (free). Ignore the abstract – the authors ignored their own findings – and read the data. All the study showed, when doctors were compensated based on metrics of good patient care, was that doctors who saw fewer, younger, and richer patients (and those who fudged the stats) got paid better, while doctors who saw more patients, older patients, and poorer patients were penalized.

Superficial ratings of patient care, built on poor control for patient populations, will always give inaccurate information about the real quality of care from your doctor or hospital. Worse, publishing ratings based on these poor controls gives hospitals more incentive to refer or turf patients who are difficult, old, poor, or very sick onto other institutions. This is not good for patient care.
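The incentive problem is just arithmetic. Here’s a sketch with hypothetical risk figures and patient counts (again, nothing here is from the NEJM study or any real institution): the hospital’s per-stratum mortality never changes, but its published raw rate improves sharply once it turfs its high-risk patients.

```python
# Hypothetical per-stratum risks: quality of care is held constant,
# so each patient's chance of dying depends only on how sick they are.
def raw_mortality(n_low, n_high, p_low=0.02, p_high=0.30):
    """Expected raw mortality rate for a given case mix."""
    expected_deaths = n_low * p_low + n_high * p_high
    return expected_deaths / (n_low + n_high)

# Before: the hospital takes all comers.
before = raw_mortality(n_low=800, n_high=200)
# After: it refers 150 of its 200 high-risk patients elsewhere.
after = raw_mortality(n_low=800, n_high=50)

print(f"Published mortality, taking all comers: {before:.1%}")  # 7.6%
print(f"Published mortality after turfing:      {after:.1%}")   # 3.6%
```

The hospital’s published “quality” roughly doubles, yet not a single patient received better care; the sickest ones simply died somewhere else, on someone else’s ledger. That is the behavior these rankings reward.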