Dr. David McKee, a neurologist in Duluth, Minn., didn't much like an Internet review that called him "a real tool" and suggested he didn't care about his patients' comfort. So he filed a defamation suit against the patient's son who wrote the critical piece, which also alleged that McKee seemed unconcerned that his dad's gown hung from his neck, leaving his backside exposed.
A judge ultimately dismissed the case, stating that "the court does not find defamatory meaning, but rather a sometimes emotional discussion of the issues." But it's not the first time a physician has sued a consumer over a bad Internet review -- and it probably won't be the last. A physician's reputation is all he or she has, and a sour review on the Web can make us very anxious.
Online review sites, of course, are imperfect and open to manipulation. But we all head to Google nevertheless in search of information and advice, whether we're shopping for a book or a new physician. So how do you know whether the doctor you're seeing is any good? And how do I know how good a doctor I am?
I recently Googled myself to determine how I fared on sites like Healthgrades, which exclusively rates doctors, and Yelp and Angie's List, which grade doctors alongside restaurants and plumbers. The results were inconclusive. Many sites had me listed but not rated. On Vitals.com, however, I earned a mere one-star rating (out of four). I had no idea who had rated me, or why I earned such a subpar grade. Some of the other information on the site was correct and some was not. It claimed that I work at two four-star hospitals (incorrect -- just one four-star center), that I attended a three-star medical school, and that patients wait an average of 20 minutes to see me. It's unclear how that last number was calculated.
All of which suggests that the flood of online information about doctors and the growth of ratings sites don't make it any easier to figure out whether your doctor is brilliant or a quack.
The main reason is that it's hard to pin down what "good" means. On one hand, it could mean delivering safe and effective care. Let's call this high-quality care (though even defining "quality" this way is sure to raise debate). Practically speaking, this could mean that if you bring your child to me with fever and an earache, I have the skill to diagnose an ear infection (an accurate exam) and prescribe the correct treatment (the right dose of the right antibiotic for the right number of days). On the other hand, "good" can also refer to the kind of service I provide. When you brought your child to see me, did I greet you with a smile, listen, show some empathy? Was my office staff courteous and professional? Was it hard to find parking? Did you wait too long? Ideally, we want our doctors to deliver both the highest-quality care and the best service. In reality, that's almost impossible to judge.
Vitals and other sites have collected lots of anecdotal information about service -- indeed, such stories are one reason Vitals was launched. "I was about to get my Achilles tendon repaired. On the table, the doc said, 'I'm excited to do one of these. It doesn't happen to me that often.' That's not the info I wanted to know then," said Mitch Rothschild, the CEO of Vitals. "So we started Vitals to help people get that info ahead of time -- when they are deciding, not when they are in a hospital gown."
Rothschild said that we "are a social species -- we care what other people think. And many of us make decisions not empirically, but by soliciting other people's opinions." Online sites are often the easiest place to speak out as well. After all, how many of us know where and how to file a formal complaint against a doctor or hospital?
But even with the best intentions and rationale, ratings sites have taken fire from the medical community. Much of this has to do with the traditional culture of medicine: new-media transparency pits the conservative, hierarchical nature of the profession against forces trying to level the playing field between doctor and patient. In our guts, doctors are deeply uneasy about transparency; no one wants their strengths and weaknesses splayed out for all to see in even the smallest public square, let alone before anyone who Googles us. We want to care for patients in the best ways possible, despite all of the modern factors (insurance, bureaucracy, cost, risk) that have made this harder than in the past. So a negative review, while it rarely leads to a lawsuit, often leads to anxiety, a crisis of confidence and concern for our reputation.
Doctors, of course, aren't the only people who have to deal with potentially unfair reviews online, and most of us recognize that some criticism is part of the deal these days. Nevertheless, doctors grounded in science bristle at the unscientific methods behind these ratings -- especially if people are using them to make life-or-death decisions about medical care.
These sites also do very little to help me get better as a doctor or improve the doctor-patient relationship. Did my one-star review come from someone who felt I was rude or from someone who demanded a prescription but didn't get one from me? With anonymity, it is impossible to tell. And even if I wanted to respond, federal privacy laws would not allow it.
That anonymity can also deceive our potential patients. A study of physician-rating sites published last year in the Journal of General Internal Medicine identified several reviews that were written by doctors themselves. "Every anonymous review I've written on myself has been glowing," one doctor confessed to researchers. On Vitals, I gamed the system myself: my rating went up from 1 to 3.5 stars after I entered several positive reviews. When I told Rothschild about this, he said, "We strive for the highest 'signal-to-noise ratio' by limiting the IP address' ability to submit multiple ratings, asking for email afterwards, and then seeing if a doctor has many reviews with no emails -- a suspicious note." That didn't stop me from entering those raves from my mobile phone, laptop, iPad and then my desktop.
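For the technically curious, here's roughly what that kind of filtering amounts to -- a minimal sketch in Python. The field names, the one-rating-per-IP cap and the 80 percent no-email threshold are my own illustrative assumptions; Vitals hasn't published its actual rules.

```python
# Sketch of the two signals Rothschild describes: cap ratings per IP
# address, then flag doctors whose reviews mostly lack a confirmed email.
# Thresholds and field names are assumptions, not Vitals' real system.
from collections import Counter

MAX_RATINGS_PER_IP = 1           # assumed cap per IP address
SUSPICIOUS_NO_EMAIL_SHARE = 0.8  # assumed threshold for flagging

def accept_ratings(ratings):
    """Keep only the first MAX_RATINGS_PER_IP ratings from each IP.
    Each rating is a dict with 'ip', 'stars', and an optional 'email'."""
    seen = Counter()
    kept = []
    for r in ratings:
        seen[r["ip"]] += 1
        if seen[r["ip"]] <= MAX_RATINGS_PER_IP:
            kept.append(r)
    return kept

def looks_suspicious(ratings):
    """Flag a doctor whose accepted reviews mostly lack a confirmed email."""
    kept = accept_ratings(ratings)
    if not kept:
        return False
    no_email = sum(1 for r in kept if not r.get("email"))
    return no_email / len(kept) >= SUSPICIOUS_NO_EMAIL_SHARE

# Four glowing self-reviews from four devices, each on its own IP:
# the per-IP cap never fires, which is exactly the loophole above.
reviews = [{"ip": ip, "stars": 5}
           for ip in ("1.1.1.1", "2.2.2.2", "3.3.3.3", "4.4.4.4")]
print(looks_suspicious(reviews))  # True: none has a confirmed email
```

Note that the per-IP cap alone never catches my stunt, since each device arrives from its own address; it's the missing email confirmations that would raise the flag -- the "suspicious note" Rothschild describes.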
Academic reviewers have pointed out another problem: these sites often ask the same questions about a doctor regardless of her specialty. Asking patients to rate how well a pathologist (who examines slides, not patients) communicates a diagnosis makes no sense.
Perhaps the biggest limitation of Vitals and similar sites is the paucity of reviews. While Vitals has information on some 720,000 doctors, according to Rothschild each doctor has an average of only four ratings. In another study of physician-rating sites, researchers found that only three out of 250 doctors had been rated five or more times. Given the thousands of patient visits a single doctor takes part in each year, one to four opinions hardly count as the wisdom of the crowd.
Despite all of these criticisms, it's worth noting that nearly 90 percent of reviews, sparse as they are per doctor, are positive, suggesting that doctors' collective angst is probably overblown. Still, in an effort to fight back, some doctors have taken to making patients sign gag orders that bar them from writing reviews online. Others have "incentivized" patients to review them favorably by offering discounts on certain services. (Botox for four stars, anyone?) Still others have suggested that the best response is simply to encourage patients to use these sites, so that the good reviews drown out the bad.
So if service "data" is not necessarily helpful for figuring out how good a doctor is, what about quality statistics? Some states have taken to publishing hard outcome data about certain doctors. New York, for example, publishes data on the performance of its cardiologists and cardiac surgeons -- an impressive set of spreadsheets dating back to the 1990s. But this kind of data does not exist for all doctors, and where it does exist, it isn't easy to find. Even when consumers do find it, studies and surveys make pretty clear that they aren't motivated to drill that deep into statistics to figure out which doctor is right for them. From a physician's standpoint, there is some indication that cardiologists and cardiac surgeons may be reluctant to treat riskier patients for fear of getting dinged on these spreadsheets. Finally, to get back to the root of my inquiry: Can data like this help doctors get better? No study has shown that it does.
For all the push toward transparency, consumers don't seem much affected. A 2008 survey by the California HealthCare Foundation found that although more than 80 percent of the state's adults turn to the Internet for health-related information, fewer than one-quarter had looked at physician-ratings sites, and only 2 percent had changed physicians based on information posted on one. Other surveys show a similar lack of influence for published quality data.
All this puts us back at square one when it comes to figuring out how good our doctors are. Service and quality are both essential for doctors to deliver, but online rating sites don't have enough information to attest to quality, and the states' quality data tell us nothing about bedside manner.
Perhaps "how good is your doctor?" is the wrong question to ask. Given how complex medicine and medical care is these days, no single doctor can know it all and do it all. Instead, it may be better to look for a system of care -- primary care, specialists and other members of a team -- that works to provide quality care and multi-star service in a coordinated fashion. A few such systems exist around the country, and as healthcare reform continues, we'll probably see more sprouting up. If you're skeptical of that view, just look at the scandal in the military at Walter Reed Hospital. That shameful service and quality wasn't because of a single doctor, but because the entire system meant to take care of wounded soldiers was in shambles, leaving patients out in the cold.