PEP-C Patient Survey Producing Dubious Data?

by Robert Nielsen

The California HealthCare Foundation and the California Institute for Health Systems Performance have just published results from the 2002 NRC/Picker Patients' Evaluation of Performance in California (PEP-C) Survey, a rating of California hospitals by past patients. The highest average ratings for "All Patients Combined" went to the Fresno Surgery Center (20 beds) in Fresno; Tehachapi Valley Healthcare District (24 beds) in Tehachapi; Mammoth Hospital (15 beds) in Mammoth Lakes; and Frank R. Howard Memorial Hospital (28 beds) in Willits.

It's immediately apparent that these top-rated facilities are also some of the smallest hospitals in California. It is likely that their high ratings are, at least in part, a function of the fact that difficult cases -- namely, very sick patients -- are referred to larger facilities with more comprehensive resources. So how can these tiny hospitals be directly comparable to larger facilities that provide a broader range of services? Despite the survey's "three-star" rating system, PEP-C's sponsors say that the system is not intended to gauge clinical competency and that the survey is not really a measure of "good" versus "bad" hospitals.

Nevertheless, ranking tiny community hospitals that have small, relatively healthy patient populations against huge institutions with more critical-care services produces an outcome that may mislead consumers. Imagine the billboard as you drive into Tehachapi: "NRC/Picker Says Tehachapi Valley Healthcare District Hospital Trounces UCLA Medical Center." Further, imagine the demand for homes in Willits, Mammoth Lakes, and Tehachapi when consumers are led to believe they have the finest medical centers in all of California.

The Impact of Non-Response

PEP-C's potential to mislead consumers is a major problem, and inappropriate comparisons are only one source of that potential. Non-response poses an even bigger problem. Fewer than half of California's eligible hospitals (47%) volunteered to participate in the 2002 survey.

This participation rate is up substantially from the prior year's 30%, but it is still disappointing, given that Blue Shield tiered its preferred status for hospitals, threatening a lowered status for non-participating hospitals. Further, the California HealthCare Foundation subsidized 50% to 100% of the cost for the more financially strapped hospitals. With such a high non-response rate and no non-response bias testing reported, the resulting data are not projectable: hospitals and patients who choose to respond may differ systematically from those who do not, so the surveys are essentially straw polls and not truly representative of the larger population of patients for each hospital.
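To make the projectability concern concrete, here is a minimal sketch in Python. The figures are invented for illustration and are not PEP-C data: the point is simply that if satisfied patients return their questionnaires at a higher rate than dissatisfied ones, an average computed from respondents alone will overstate the hospital's true rating.

```python
import random

# Minimal sketch with hypothetical numbers (not PEP-C data): shows how
# differential non-response can inflate an average satisfaction rating.

random.seed(0)

N_PATIENTS = 10_000             # eligible discharged patients at one hospital
TRUE_SATISFIED_SHARE = 0.70     # assumed true share of satisfied patients

# Assumption for illustration: satisfied patients respond more often.
RESPONSE_RATE_SATISFIED = 0.45
RESPONSE_RATE_DISSATISFIED = 0.20

responses = []
for _ in range(N_PATIENTS):
    satisfied = random.random() < TRUE_SATISFIED_SHARE
    rate = RESPONSE_RATE_SATISFIED if satisfied else RESPONSE_RATE_DISSATISFIED
    if random.random() < rate:  # this patient actually returns the survey
        responses.append(1 if satisfied else 0)

observed = sum(responses) / len(responses)
print(f"Assumed true satisfaction:      {TRUE_SATISFIED_SHARE:.0%}")
print(f"Satisfaction among respondents: {observed:.0%} (n = {len(responses)})")
```

Under these assumed response rates, roughly 84% of respondents report being satisfied even though the assumed true share is only 70%, which is why a non-response bias analysis matters before survey results are published as hospital ratings.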

Most proponents of the PEP-C survey are committed to keeping the process voluntary for hospitals, though some are challenging that stipulation, saying that hospitals that do not volunteer have something to hide. Gallup has encouraged its clients not to participate in the survey -- but not because they have something to hide. We challenge the PEP-C survey for several reasons. In addition to the lack of representativeness inherent in the data collection, the non-empirically derived question model (the "Picker approach") has only an anecdotal basis -- there is no systematic evidence that the results will help patients make better decisions. Along those same lines, the PEP-C survey has no demonstrated linkage to positive patient outcomes.

Bottom Line

The people of Willits may feel pretty lucky knowing that their Frank R. Howard Memorial Hospital receives a far higher PEP-C rating than Cedars-Sinai Medical Center (870 beds) and UCLA Medical Center (599 beds). But the distinction is dubious at best -- and at worst, it may actually harm California patients' ability to make informed healthcare decisions.

