
Page 1: AWARENESS AND USE OF COMPARATIVE PROVIDER QUALITY

The Pennsylvania State University

The Graduate School

The College of Health and Human Development

AWARENESS AND USE OF COMPARATIVE PROVIDER QUALITY

INFORMATION: IMPACT OF DISSEMINATION OF QUALITY

“REPORT CARDS”

A Dissertation in

Health Policy and Administration

by

Neeraj Bhandari

© 2016 Neeraj Bhandari

Submitted in Partial Fulfillment

of the Requirements

for the Degree of

Doctor of Philosophy

August 2016


The dissertation of Neeraj Bhandari was reviewed and approved* by the following:

Dennis P. Scanlon

Professor of Health Policy and Administration

Dissertation Advisor and Chair of Committee

John Moran

Associate Professor of Health Policy and Administration

Chair of Graduate Program

Rachel Annette Smith

Associate Professor of Communication Arts and Sciences

Yunfeng Shi

Assistant Professor of Health Policy and Administration

*Signatures are on file in the graduate school


Abstract

Public reporting of provider quality has grown substantially and a number of public and private

organizations now publish and disseminate data on hospital and physician quality. A large body

of empirical work has tested the effectiveness of quality “report cards”, focusing heavily on

outcomes related to consumer choice (e.g., choice of physicians or hospitals) or on changes in

market shares of providers after publication. Despite recognition of the importance of

dissemination of report cards to consumers, none of the existing studies attempt to capture the

proactive dissemination strategies used by publishing sources or model how media coverage of

issues related to provider quality and comparative performance impacts consumer awareness or

use of report cards. In this study, we develop a framework that conceptualizes the trajectory by

which public reports “reach” consumers to affect their attitudes towards comparative quality

information (CQI), and their “shopping” behaviors for high quality providers. We also describe

the spectrum of approaches used by the AF4Q multi-stakeholder alliances in disseminating their

public reports to consumers, and capture regional variation in the availability, applicability, and credibility of CQI, as well as its dissemination and media coverage. Finally, we use two-period panel data and a fixed-effects methodology to examine the relationship between CQI dissemination and

consumers’ awareness of, use of, and attitudes towards CQI. We use a diverse set of data sources

drawn from evaluation of the Aligning Forces for Quality (AF4Q) program, including a panel of

4235 chronically ill adults living in 14 diverse regions of the United States. Our study finds that few consumers are paying attention to provider quality reports, even as it extends earlier work showing an increasing regional “footprint” of report cards over time, stemming from significant improvements in the availability and applicability of CQI to consumers. It also provides the first empirical evidence that the credibility of CQI has improved over time, yields evidence of limited media coverage of report cards, bolsters the notion that reports need to be more applicable to consumers’ clinical needs to encourage greater use, fails to detect a meaningful role for intensified producer dissemination in improving consumer “uptake” of report cards, and offers modest support for the proposition that increased print media coverage may induce more consumers to use public reports in making provider choices. Taken together, our findings suggest a “disconnect” between the “supply” of CQI and consumer “demand” for it, even as the financial resources invested in improving its content and accuracy have grown substantially. Deeper entrenchment of the provider transparency movement within the healthcare delivery landscape now needs to be matched by greater engagement of the general public with its promise and potential.

Table of Contents

List of Figures …… viii
List of Tables …… ix
Acknowledgements …… xii
Chapter 1 Introduction …… 1
  Background …… 2
  Motivation …… 2
  Study Aims …… 5
Chapter 2 Conceptual Framework and Syntheses of Literature …… 6
  Conceptualizing Dissemination of Quality Report Cards: Prior Work and Current Need …… 7
  An Overview of Conceptual Framework …… 9
    Formal and Informal CQI …… 9
    How Do Report Cards Reach Consumers? The “Mechanics” of Dissemination …… 9
    “Updating” Effect of Organizational Push and Consumer’s Pull of CQI …… 11
  How Do Consumers Choose Providers? Role of Informal Sources …… 11
  Organizational “Push” of CQI towards Consumers …… 13
    Dissemination in the Broader Health Care Literature …… 13
    Key Dimensions of Organizational “Push” …… 14
      Availability of CQI …… 15
      Applicability of CQI …… 16
      Credibility of CQI …… 17
      Proactive Dissemination of CQI by Sponsors/Producers …… 18
        Low Numeracy, Literacy, English Proficiency, and High “Cognitive Burden” of Information Complexity …… 19
        Lack of “Consumer-Targeting” of Reports …… 20
        High Information Search Cost …… 21
        Lack of Standardization in Production and Presentation of CQI …… 22
      Provider-Initiated CQI Dissemination …… 22
      Role of Media in Dissemination of Report Cards …… 24
        Agenda-Setting Function of Media …… 25
        Past Literature on Media Coverage of Report Cards …… 26
        Future Avenues of Research …… 26
  Consumers’ “Pull” of CQI …… 27
  Updating Effect of CQI Dissemination and Consumer Search …… 29
Chapter 3 Research Questions …… 32
Chapter 4 Methods …… 34
  Study Design …… 35
  Data Source(s) …… 35
  Study Sample …… 38
  Timeline of Measurement …… 39
  Dependent Variables …… 39
  Independent Variables …… 41
    CQI Availability …… 41
    CQI Applicability …… 42
    CQI Credibility …… 42
    Alliance Proactive Dissemination …… 43
    Media Coverage of CQI …… 44
      Selection of Local Print Media …… 45
      Selection of Search Keywords …… 46
      Selection, Content-Coding, and Weighting of Articles …… 46
      Selection for Relevance and Full Text Review …… 49
      Applying Content Codes …… 49
      Weighting the Coded Articles …… 49
      Generation of Media Coverage Scores …… 51
      Inter-Rater Reliability (IRR) Testing …… 52
  Analytic Strategy …… 54
    Model Specification and Identification Strategy …… 54
    Subsample Analyses …… 56
    Sensitivity of Estimates to Coding Alliance Dissemination Strategies as Dummy Variables …… 57
    Effects on Consumer Awareness and Use of Physician Comparative Quality Information …… 57
    Sensitivity to Variation in Local Newspaper Density …… 58
    Sensitivity of Primary Estimates to Weighting …… 58
    Sensitivity of Estimates to Alternative Measurement of Consumer Attitudes …… 59
Chapter 5 Results …… 60
  Summary Statistics …… 61
  Descriptive Summary of Awareness, Use, and Consumer Attitudes towards CQI …… 61
  Availability, Applicability, and Credibility of Report Cards …… 63
  How Do Organizations Disseminate Report Cards? AF4Q Alliances as a Case Study …… 67
    Online Posting of Report Cards …… 68
    Posting/Printing CQI in Non-English Languages …… 68
    Updating Information in Report Cards …… 70
    Media Press Releases and Advertisements …… 71
    Collaboration with Key Stakeholders/Community-Based Organizations to Disseminate CQI …… 71
    Publishing Reports in Consumer-Focused Magazines …… 72
    Hiring Public Relations/Communications Expert to Aid Dissemination Efforts …… 73
    Research on Consumer Decision-Making to Aid Dissemination …… 73
  Print Media Coverage of CQI …… 74
    Print Media Coverage of Patient Safety Practices of Providers …… 76
  Main Analyses …… 78
  Supplementary Analyses …… 85
Chapter 6 Discussion …… 97
  Limitations and Future Directions …… 106
Chapter 7 Conclusion …… 109
References …… 112
Appendix A: Methods …… 131
Appendix B: Results …… 147

List of Figures

Figure 2-1 Conceptual Model of Dissemination and Media Coverage of CQI on Consumer Awareness of, Attitudes towards, and Use of CQI …… 10
Figure 4-1 Timeline of Measurement of Independent (Blue Double Arrows) and Dependent Variables (Red Double Arrows) …… 39
Figure 4-2 Flowchart Depicting Selection for Relevance, Content Coding, and Weighting of Media Articles on CQI …… 47
Figure 4-3 Flowchart Depicting Selection for Relevance, Content Coding, and Weighting of Media Articles on Patient Safety …… 48
Figure A-1 Stepwise Algorithm to Guide Selection of Articles Relevant to Comparative Quality Information …… 136
Figure A-2 Stepwise Algorithm to Guide Assignment of Valence Weights to Discussion of Quality Transparency …… 139
Figure A-3 Stepwise Algorithm to Guide Assignment of Valence Weights to Discussion of Quality Variation …… 140
Figure A-4 Stepwise Algorithm to Guide Assignment of Valence Weights to Discussion of Patient Safety Practices of Healthcare Providers …… 141

List of Tables

Table 4-1 Calculation of Inter-Rater Agreement …… 54
Table 4-2 Interpretation of Key Coefficients of Main Model …… 56
Table 5-1 Descriptive Statistics …… 63
Table 5-2 Awareness of, Attitudes towards, and Use of CQI, by Alliance …… 64
Table 5-3 Availability, Applicability, and Credibility of Quality Reports, by Alliance …… 66
Table 5-4 Alliance Proactive Dissemination Scores …… 68
Table 5-5 Media Coverage Scores …… 77
Table 5-6 Impact of Dissemination of CQI on Awareness and Use …… 81
Table 5-7 Impact of Dissemination of CQI on Attitudes towards CQI …… 84
Table 5-8 Impact of Dissemination of CQI on Use among Subsample of Respondents, by Awareness Status at Baseline …… 87
Table 5-9 Impact of Dissemination of CQI on Attitudes towards CQI among Subsample of Respondents, by Awareness Status at Baseline …… 90
Table 5-10 Impact of Dissemination of CQI on Awareness and Use of Physician Quality Reports …… 91
Table 5-11 Impact of Media Coverage of CQI on Awareness and Use: Sensitivity to Removal of Valence and Prominence Weights …… 92
Table 5-12 Impact of Media Coverage of CQI on Attitudes towards CQI: Sensitivity to Removal of Valence and Prominence Weights …… 92
Table 5-13 Impact of Alliance-Sponsored Dissemination of CQI on Awareness and Use of CQI: Using Dummies for Individual Dissemination Strategies …… 93
Table 5-14 Impact of Dissemination of CQI on Attitudes towards CQI: Sensitivity to Choice of Dichotomizing Threshold …… 95
Table 5-15 Impact of Dissemination of CQI on Awareness and Use: Sensitivity to Inclusion of Nearby Metropolitan Cities’ Newspapers for Humboldt County and South Central Pennsylvania Alliances …… 96
Table 5-16 Impact of Dissemination of CQI on Attitudes towards CQI: Sensitivity to Inclusion of Nearby Metropolitan Cities’ Newspapers for Humboldt County and South Central Pennsylvania Alliances …… 96
Table A-1 Process Used to Identify and Classify Alliance Dissemination Strategies …… 132
Table A-2 Print Media Sources by Alliance Region …… 133
Table A-3 Types of Media Coverage and Corresponding Search Terms (Keywords) …… 135
Table A-4 Definition of Key Coding Categories for Media Coverage of CQI …… 137
Table A-5 Definition of Key Coding Categories for Media Coverage of Patient Safety …… 138
Table A-6 Illustrative Examples of Code Application and Valence Weight Assignment …… 142
Table A-7 Calculation of Normalized Unweighted and Weighted Scores for Each News Article …… 144
Table A-8 Assignment of Alliance Coding among Author and Raters …… 145
Table A-9 Results of Inter-Rater Agreement for Selection, Coding, and Weighting of Media Articles …… 146
Table B-1 Availability of Quality Reports, by Type of Measure and Alliance …… 148
Table B-2 Availability of Credible Quality Reports, by Alliance …… 149
Table B-3 Distribution of Media Articles for Alliance-Sponsored CQI Media Coverage, by Alliance …… 150
Table B-4 Distribution of Media Articles for CMS-Sponsored CQI Media Coverage, by Alliance …… 151
Table B-5 Distribution of Media Articles for Non-Alliance, Non-CMS CQI Media Coverage, by Alliance …… 155
Table B-6 Distribution of Media Articles for Media Coverage of Patient Safety, by Alliance …… 159
Table B-7 Impact of Dissemination of CQI on Awareness and Use (Standard Errors Clustered on Individuals) …… 163
Table B-8 Impact of Dissemination of CQI on Attitudes towards CQI (Standard Errors Clustered on Individuals) …… 164


Acknowledgements

While I want to express my appreciation to the many individuals who contributed, directly and otherwise, to the conception and implementation of this study, I believe my mentor and faculty advisor Dr. Dennis Scanlon has been more than just a contributor. Dennis has guided me through some pretty rough shoals with a steady hand and an unrelenting eye toward the big picture: learning to translate raw ideas into formal work capable of recognition by one’s peers. My deepest gratitude is also due to Dr. John Moran and Dr. Yunfeng Shi, who have been critical partners in transforming a raw and curious mind into one armed with an empirical researcher’s arsenal of tools and ready to engage the world of ideas with formal rigor. My sincere appreciation also goes to Dr. Rachel Smith for her guidance and expertise in mass communication and her willingness to be accessible for crucial advice on the many issues that arose during the completion of this work. I am thankful to Anna Farnsworth and Hannah Hoenshell for their invaluable help in coding the data and delivering the project on time. Finally, and not least, my debt to the HPA faculty and staff is immense, starting with an intellectual atmosphere of incisive debate and collaboration, a warm and supportive environment for students, and valuable friendships that I hope will last a lifetime.


Chapter 1


Introduction

Background

Public reporting of provider quality has grown substantially and a number of public and

private organizations now publish and disseminate data on hospital and physician quality

(Christianson et al., 2010; Brien et al., 2010; Roski and Kim, 2010). The theoretical rationale

behind provider quality “report cards” is based on four major pathways (Mukamel et al., 2008;

Fung et al., 2008; Mehrotra et al., 2012): 1) armed with new information on provider quality

differentials, consumers in contemporary price and quality-opaque healthcare markets will select

higher quality providers, and force low quality providers to improve their healthcare delivery due

to competitive pressures (“consumer pathway” or “demand pathway”), 2) low-quality providers

will react to poor scores by improving the quality of healthcare in order to protect their

reputation in the market (“provider pathway” or “supply pathway”), 3) purchasers such as

employers, insurers, and single payers (e.g., Medicare, Medicaid) will use quality reports to

benchmark performance in ways that propel providers to improve quality (e.g., tiered networks,

pay for performance bonuses), and 4) regulatory agencies may use publicly reported data to

inform accreditation standards for healthcare provider organizations (“regulatory pathway”). A

less recognized element of the consumer pathway emphasizes improved patient-physician

interaction resulting from publicly available information, instead of provider switching by

consumers (Mehrotra et al., 2012).

Motivation

Over the last decade, a large body of empirical work has tested the effectiveness of the

consumer pathway in response to publication of quality report cards (Mennemeyer et al., 1997;

Mukamel et al., 2004; Romano and Zhou, 2004; Jha and Epstein, 2006; Mukamel et al., 2008;


Fung et al., 2008; Mehrotra et al., 2012). These studies have typically focused on outcomes

related to consumer choice (e.g., choice of physicians or hospitals) or on changes in market

shares of providers after publication of report cards. Although findings from this literature are

mixed, most studies reveal trivial or modest effects of report cards on consumer choice (Fung et

al., 2008). While these results have ignited a debate about the efficacy of public reporting in

general, they have not slowed the momentum for public reporting of quality scores. The

methodology of these studies varies from qualitative to econometrically rigorous; collectively,

however, they suffer from a few important limitations that make it difficult to draw any firm

conclusions about the effectiveness of the consumer pathway. First, the consumer pathway relies

on consumer awareness of report cards to affect consumer attitudes towards provider quality

information and, consequently, consumer choices of providers. Although there is prior literature

on whether the consumers are aware of CQI (Kaiser Family Foundation, 1996, 1998, 2000, 2004,

2006, 2008, 2011; Fox and Jones, 2009; Hanauer et al., 2014), little attention has been paid to

how regional variation in consumer awareness and use of CQI is related to variation in amount

(e.g., number of quality reports available) and relevance (e.g., CQI that rates physicians on

specific chronic condition(s) that a consumer has) of provider quality information available in

local markets (Scanlon et al., 2015; Shi et al., under review). Second, despite explicit

recognition of the importance of dissemination of report cards (Findlay, 2016), none of the

studies has attempted to capture or model the proactive dissemination strategies used by

publishing sources. Most studies just implicitly assume that a report is available to all consumers

if it has been published online. Third, none of the prior studies model how media coverage of

issues related to provider quality and comparative performance impacts consumer awareness of

report cards or consumer attitudes towards differences in health care quality among providers.

Prior research suggests that a single media report on hospital safety may be far more influential


than a quality report card in affecting consumer choice of hospitals (Mennemeyer et al., 1997).

Moreover, using media reports to capture the effect of report cards has the advantage that the media tend to cover quality reporting from a wide variety of sources, ranging from commercial sites that publish customer reviews, like Angie’s List and HealthGrades, to more comprehensive reports published by nonprofits like the Aligning Forces for Quality (AF4Q) multi-stakeholder alliances or

the U.S. government’s Centers for Medicare & Medicaid Services (CMS).

This study is an integral part of an ongoing evaluation of the Aligning Forces for Quality

program, a national initiative funded by the Robert Wood Johnson Foundation and focused on

improving the quality of healthcare in sixteen distinct regions of the United States (Scanlon et

al., 2012). The program envisions the achievement of this goal through sustained collaboration

between the stakeholders in health and healthcare delivery at the level of the community (e.g.,

physicians, hospitals, insurers, employers, and consumers) by funding the formation of multi-stakeholder entities called alliances. Although there is considerable variation in both the

governance of alliances and implementation of key programmatic initiatives across alliances, all

alliances are required to organize their activities around five programmatic areas outlined by the

foundation: public reporting of CQI, consumer engagement, quality improvement, payment

reform, and reduction in disparities in health care delivery (Hearld et al., 2012). Since the inception of the program in 2006, public reporting has remained a key focus of AF4Q alliances, although the capacities of and progress made by individual alliances have varied. As of 2013, every alliance had released at least one public report with physician quality measures and at least one with hospital quality measures (Christianson et al., 2012). Alliances have attempted to disseminate the information in these reports to consumers via a range of approaches, the most important of which has been making the information available on their websites (Mittler et al., 2012). In

evaluating the progress made by the alliances in public reporting, it is important to capture the


variation in alliance approaches towards dissemination of CQI. This study is part of the overall

AF4Q evaluation designed to assess the progress made by alliances in this key domain of the

program.

Study Aims

This study has three major aims. First, we develop a framework that conceptualizes the

trajectory by which CQI “reaches” consumers and affects their awareness of CQI, attitudes

towards CQI, and “shopping” behaviors for high quality providers. Second, we describe the

spectrum of approaches used by the AF4Q multi-stakeholder alliances in disseminating their

public reports to consumers. Finally, we capture the regional variation in availability,

applicability, and credibility of CQI (following a framework proposed by Christianson et al.,

2010), as well as alliance dissemination and media coverage of CQI, and, using two-period panel data and a fixed-effects methodology, examine their relationship with consumer outcomes, including consumers’ awareness of CQI, their attitudes towards CQI, and their use of CQI to inform provider choices.
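For readers who want the estimating approach in compact form, a generic two-period individual fixed-effects specification of the kind described above can be sketched as follows; the notation here is illustrative (the dissertation’s actual model specification appears in Chapter 4):

```latex
\[
y_{irt} = \beta\, D_{rt} + \gamma_t + \alpha_i + \varepsilon_{irt},
\qquad t \in \{0, 1\},
\]
```

where \(y_{irt}\) is a consumer outcome (awareness of, attitude towards, or use of CQI) for individual \(i\) in region \(r\) at period \(t\), \(D_{rt}\) is a regional dissemination measure, \(\gamma_t\) is a common period effect, and \(\alpha_i\) is an individual fixed effect. With only two periods, estimation with individual fixed effects is equivalent to first-differencing, which removes \(\alpha_i\):

```latex
\[
\Delta y_{ir} = \beta\, \Delta D_r + \Delta\gamma + \Delta\varepsilon_{ir},
\]
```

so identification of \(\beta\) comes from within-region changes in dissemination between the two survey waves.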


Chapter 2


Conceptual Framework and Syntheses of Literature

Conceptualizing Dissemination of Quality Report Cards: Prior Work and Current Need

The progressive entrenchment of the provider quality transparency movement in contemporary

healthcare delivery systems, coupled with lackluster public enthusiasm for report cards, has

turned the spotlight on the “mechanics” of how report cards reach consumers and affect their

decision-making process. Two distinct but related strands of literature provide useful insights

into how consumers cognitively process CQI and which strategies may be effective in increasing

the likelihood of their use of CQI. In an influential series of papers starting in the mid-nineties,

Judith Hibbard and colleagues have laid out the cognitive barriers faced by consumers in

processing and assimilating information provided in the report cards (Hibbard and Jewett, 1996;

Hibbard and Jewett, 1997; Hibbard et al., 1997; Hibbard et al., 2000; Hibbard, 2008). This work

highlights the concern that the growing “density” of information and lack of standardization in

generation and presentation of performance metrics could result in confusing rather than

informing consumers, and has led to calls for reducing the information burden through simpler presentation approaches and for an enhanced role for knowledgeable intermediaries in consumer

decision making. A separate body of literature emphasizes the importance of context and timing

of provision of CQI in engaging the general public with the provider quality transparency

movement. The work of Dale Shaller and colleagues is illustrative of this approach; they present

a visual framework to illustrate how the varying contexts for consumers’ healthcare decision-making (e.g., short-term medical needs like maternity care or “shoppable treatments”, “external

disruptions” such as job loss) may interface with consumers’ cognitive and emotional state to

create opportunities for enhanced engagement through timely and targeted provision of provider

quality information (Shaller et al., 2014).


While important, these studies fall short of providing a coherent theoretical framework that

outlines the dissemination pathways of CQI to its diverse set of end users (patients, physicians,

payers). Such a theoretical framework is important for a number of reasons. First, as noted

above, most of the empirical work on the consumer pathway focuses on consumer responses to

online or print copy availability of CQI, while neglecting the specific intermediary pathways by

which report cards reach consumers. Such an approach may fail to fully capture the information

“signal” that potentially drives consumer responses to CQI. Second, tracing the analytical

components of the dissemination pathways may guide future research by sharpening focus on

key elements of the causal chain and suggesting empirical strategies for identification of the

causal relationships. Finally, a formal explication of the context in which dissemination occurs

may provide practical knowledge to design future dissemination efforts in order to maximize

consumer impact.

With the foregoing in mind, our study makes three distinct contributions to the existing

literature. First, we develop a comprehensive logic model to illustrate the pathways by which

quality report cards reach consumers. Second, we synthesize the existing literature on CQI

dissemination by illuminating the key concepts that underlie it, in part by coalescing and

distilling prior insights on diverse aspects of dissemination, including consumers’ customary

sources for provider quality information, consumer attitudes towards provider quality

differentials and CQI, their exposure to CQI and its impact on “prior” attitudes, their demand and

searches for such information, and finally, the organizational and institutional context in which

producers of CQI tend to “push” their products towards consumers. Third, at each stage of our

explication of key concepts, we trace its implications for future empirical work, evaluation

efforts, and dissemination strategies.


An Overview of Conceptual Framework

Formal and Informal CQI

The conceptual framework for this study is shown in Figure 2-1, where the boxes represent

the main variables and the width of the arrows depicts the relative importance of each (hypothesized) causal pathway. A key distinction in our framework pertains to the

type of information source that carries information about provider quality to consumers. We use the term formal CQI for sources that bear certain key signposts of scientific measurement: they compare providers on multiple dimensions of quality, are generated through careful selection of measures and providers to ensure statistical accuracy, and use sampling techniques that maximize generalizability (Christianson, 2015). By contrast, consumers may use recommendations from family and friends, recommendations from healthcare professionals, their perception of the market reputation of providers, and unsolicited patient experience reviews from online sources to make initial judgments about providers. We collectively term these sources “informal” CQI.

How Do Report Cards Reach Consumers? The “Mechanics” Of Dissemination

We conceptualize the potential trajectory of report cards from publication to eventual impact

on consumer awareness, use, and consumer attitudes towards CQI (“consumer outcomes”) as

comprising two distinct elements: 1) efforts by organizations that publish and/or disseminate

reports (organizational “push” towards consumers) and 2) consumers’ efforts to acquire CQI

(consumers’ “pull”). The term “organization” subsumes all entities that are responsible for

producing and/or distributing quality reports, including federal agencies (e.g., Centers for

Medicare & Medicaid Services), state governments, non-profit groups (e.g., AF4Q alliances),


[Figure 2-1 Conceptual Model of Dissemination and Media Coverage of CQI on Consumer Awareness of, Attitudes towards, and Use of CQI. The figure’s boxes are: Organizational “Push” of Formal CQI towards Consumers (availability, applicability, credibility, proactive dissemination, provider dissemination, and media coverage of CQI); Consumers’ “Pull” of Formal CQI (online search, consulting health plan materials, newspapers/magazines); Informal Sources of Quality Information (online commercial sources of CQI such as WebMD, family and friends, personal experience, known healthcare professionals); Consumer Awareness of, Attitudes towards, and Use of CQI prior to exposure to formal report cards (low likelihood of awareness of CQI, of perceiving CQI as important, of acknowledging quality differentials between doctors, of willingness to switch doctors based on quality, and of using CQI in choosing physicians or in discussing quality report content with physicians); and the corresponding high likelihoods after exposure, connected by an “updating” of consumer awareness and attitudes.]

Page 23: AWARENESS AND USE OF COMPARATIVE PROVIDER QUALITY

11

media organizations (e.g., newspapers, television news), and employer sponsored groups (e.g.,

Leapfrog Group).

“Updating” Effect of Organizational Push and Consumer’s Pull of CQI

Prior to their exposure to formal report cards, consumers are viewed as having a low

likelihood of being aware of and using any form of comparative provider quality information.

These “prior probabilities” are assumed to be generated through a set of informal sources of CQI (e.g., family and friends, peer groups, etc.). We hypothesize a similar effect of informal

sources on the favorability of consumer attitudes towards CQI. In other words, we expect that

consumers, in general, hold less favorable attitudes towards the role of CQI in provider choice

before they get exposed to formal report cards. We further posit that, for consumers not yet

exposed to formal CQI, low likelihood of awareness coupled with less favorable attitudes

towards CQI lead to lower odds of using CQI. Dissemination of formal report cards and

consumers’ own active efforts to pull information increase the likelihood of awareness of CQI and improve the favorability of attitudes towards CQI. Primarily through improving awareness and

heightening the odds that consumers will view CQI more favorably, dissemination eventually

leads to enhanced utilization of CQI. Below, we discuss each of these elements in detail.
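The "updating" effect described above can be read in a loosely Bayesian spirit: exposure to formal report cards, via push or pull, acts as evidence that revises a consumer's initially low probability of awareness. The sketch below is purely illustrative; the conditional probabilities are hypothetical and are not estimates drawn from the surveys cited in this chapter.

```python
# Illustrative Bayesian "updating" of a consumer's probability of being aware
# of CQI. All conditional probabilities are hypothetical; only the direction
# of the effect mirrors the conceptual model.

def posterior(prior, p_evidence_given_aware, p_evidence_given_unaware):
    """Bayes' rule: P(aware | evidence of exposure to formal CQI)."""
    numerator = p_evidence_given_aware * prior
    denominator = numerator + p_evidence_given_unaware * (1 - prior)
    return numerator / denominator

prior_awareness = 0.06                                  # low prior likelihood
after_push = posterior(prior_awareness, 0.70, 0.10)     # organizational "push"
after_pull = posterior(after_push, 0.80, 0.20)          # consumer "pull"

print(round(prior_awareness, 3), round(after_push, 3), round(after_pull, 3))
```

Each successive exposure raises the posterior probability of awareness, which is the qualitative pattern the conceptual model posits.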

How Do Consumers Choose Providers? Role of Informal Sources

What information sources do consumers rely on in markets with and without quality report

cards? Starting in 1996, Kaiser Family Foundation has sponsored a series of oft-cited surveys

representative of American consumers, seeking to uncover the role of quality information in

consumers’ decision-making regarding healthcare (Kaiser Family Foundation, 1996, 1998, 2000,

2004, 2006, 2008, 2011). Two findings surface from this literature with a remarkable


consistency: first, most consumers still consult their family, friends and known health

professionals (“informal sources”) for guidance on choosing doctors and, second, even after

fifteen years of tracking trends, the share of the population who report being aware of or using formal report cards has remained low. As an example, nearly two thirds of the respondents pointed to friends

and family as their main sources of information regarding provider quality in 1998 as well as

2004. Asked about the relative influence of informal sources compared to formal report cards, a

large majority would still rely on family’s and friends’ recommendations over rankings produced

by quality metrics, although more are now willing to trust information in report cards. For

instance, the proportion of respondents who would choose a hospital familiar to them or their

family/friends even if it scored lower on quality rankings fell just 19 percentage points over 15 years, from 76% in 1996 to 57% in 2011. By contrast, the shares of the population that saw and used some

form of formal CQI on hospitals and doctors remained flat between 1996 (6% and 4%,

respectively) and 2008 (7% and 6%, respectively).

A second line of evidence on the important role of informal quality information comes from

empirical studies that used discrete choice models to reveal the extent to which consumers’

provider choices are affected by information available in report card “naïve” markets (“market

learning”). Studies have varied in how they capture the informal quality information (i.e.,

available to patients in markets without report cards), with most using a set of observable

provider-related characteristics to proxy for quality (e.g., years of experience of provider,

teaching status of hospital) or information extracted from report cards that were accessible to

providers or health plans but not publicly available (Luft et al., 1990; Shahian et al., 2000;

Mukamel et al., 2004; Dafny & Dranove, 2008; Jung et al, 2011). Depending on the modeling


approach, the effect of market learning dominates or matches the report card effect, consistent

with a prominent role of informal information sources in healthcare markets.
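The discrete choice logic underlying these studies can be made concrete with a stylized multinomial logit of hospital choice, in which utilities combine an informal quality proxy (standing in for "market learning") with a formal report-card score. All data and coefficients below are hypothetical, chosen only to illustrate how publication of a report card can shift predicted market shares.

```python
import numpy as np

# Stylized multinomial logit of hospital choice. Utilities combine an
# informal quality proxy (e.g., teaching status, a stand-in for "market
# learning") with a formal report-card score. All values are hypothetical.

def choice_shares(utilities):
    """Logit choice probabilities from a vector of mean utilities."""
    e = np.exp(utilities - utilities.max())  # subtract max for stability
    return e / e.sum()

teaching = np.array([1.0, 0.0, 0.0])       # informal quality proxy
report_score = np.array([0.2, 0.9, 0.5])   # formal report-card rating

beta_informal, beta_report = 1.0, 0.8      # hypothetical taste weights

shares_pre = choice_shares(beta_informal * teaching)           # no report card
shares_post = choice_shares(beta_informal * teaching
                            + beta_report * report_score)      # after release

print(shares_pre.round(3), shares_post.round(3))
```

In the empirical studies, it is the estimated relative magnitudes of weights like `beta_informal` and `beta_report` that indicate whether market learning dominates or merely matches the report card effect.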

The preceding review helps identify some gaps in current knowledge on consumer decision-

making under imperfect information. While continued focus on sources of CQI is important,

little attention has been paid to the context in which consumers make their choices of providers.

There are reasons to think consumers may be more attracted to report cards if they were available

at the “right time” and the “right place” (Shaller et al., 2014). Hence, it may be fruitful to

elucidate the variety of “decision points” in which choice of providers assumes greater urgency

and the need for quality rankings becomes more pronounced, preferably through qualitative or

mixed methods approaches that can expose a more richly textured understanding of consumers’

experiences than is possible with traditional survey methods. In addition, it is important to

recognize that while public reports are intended to encourage consumers to “shop” for better

providers, a part of their rationale is to foster a more informed interaction with their regular

physicians. To what extent report cards are achieving this important objective is an open question worthy of deeper inquiry.

Organizational “Push” of CQI towards Consumers

Dissemination in the Broader Health Care Literature

It is possible to draw a close analogy between dissemination of quality report cards and

public health information campaigns delivered through mass media (e.g., anti-smoking

messages, warnings against unprotected sexual intercourse, etc.). Two key questions confronting

researchers of public health campaigns are: are they effective in molding public opinion, and if

so, how do they work? A rich literature examines the theoretical rationales and empirical


effectiveness of a variety of public health campaigns (Snyder, 2007; Wakefield et al., 2010). To

explain the effectiveness (or lack thereof) and to guide implementation and evaluate success of

media campaigns, a broad spectrum of conceptualizations has been advanced (Atkin, 2001).

These include the Social Marketing framework (Andreasen, 1995), Communication-Persuasion

Matrix (McGuire, 1978), Agenda setting (McCombs, 2004), Diffusion of innovations (Rogers,

2003), Social Cognitive Theory (Bandura, 1986), Theory of Reasoned Action (Ajzen & Fishbein,

1980), Transtheoretical Model (Prochaska & Velicer, 1997), and the Health Belief Model

(Becker, 1974). Drawing from the ecological models of health, these frameworks view

individual action as resulting from a complex interplay of interpersonal and social determinants,

and advocate focus on social networks and community-level factors (in addition to individual-

level factors) in designing and implementing mass media campaigns. A recurrent theme is that

informational campaigns reach their targeted audiences through both direct and indirect

channels. Direct effects are achieved through messages specifically targeted or “pushed” towards

certain subgroups (e.g., based on relevance or potential receptivity) and indirect effects work

through influencing the intermediaries (policy makers, opinion leaders) who are in a position to

influence the targeted audience, making them more desirous of soliciting or “pulling” information.

Key Dimensions of Organizational “push”

Using the ecological focus underlying many earlier mass communication frameworks as a

foundation, we propose an expansive conceptualization of organizational push of CQI that

comprises six distinct dimensions. We view these dimensions as part of organizational push

insofar as they involve strategic targeting of CQI to specific audiences. These dimensions are,

respectively, availability of CQI, applicability of CQI, credibility of available CQI, proactive


dissemination of CQI by producers of report cards, dissemination of CQI by providers, and

dissemination of CQI to the general public via media coverage. We discuss each of these

dimensions separately below.

Availability of CQI

Perhaps the simplest way to make a report card available to consumers is to house it on a

website and most organizations that issue report cards make them available online (Sick &

Abraham, 2011). Although online availability is simple to conceptualize and intuitively

appealing, there are significant difficulties in operationalizing the concept owing to substantial

variation in practice patterns of included providers (e.g., practicing out of region for a part of

workweek), target audience (e.g., patients or providers), online accessibility, formatting and

presentation of reports, updating of information (e.g., quarterly, annual, or less frequently),

sourcing of data (e.g., claims, medical records, or surveys), and sourcing of quality information

presented (e.g., providing link to other report cards or original data). A few empirical studies that

have sought to measure regional and over-time variations in availability of CQI offer insights

into the complex measurement choices involved. Reports were considered “available” in the

region if they provided quality comparisons between providers who practiced in the region,

although the regions varied substantially in size and population (Christianson et al., 2010;

Scanlon et al., 2015). Another key aspect of availability is whether the report is freely accessible

to the general public (without a secure login or subscription paywall) or to a specific narrow segment

of the population (e.g., in case of health plans, to subscribed members of the plan); only reports

freely accessible were considered “publicly” available (Scanlon et al., 2015). Similarly, reports

from organizations which sourced their CQI entirely from other producers by simply offering

hyperlinks on their websites were excluded from these studies (to avoid overestimating regional


availability) while those that provided links to additional material or other reports were counted

separately (Christianson et al., 2010; Scanlon et al., 2015). Report cards that had comparative

data for virtually every region (e.g., Hospital Compare, HCAHPS) were counted as available for

each regional estimate (Scanlon et al., 2015).
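The inclusion rules described above can be sketched as a simple counting procedure (the field names and example reports are hypothetical; the actual coding protocols in Christianson et al. and Scanlon et al. are considerably richer): a report counts toward a region's availability only if it covers providers in that region, is freely accessible, and is not merely a collection of hyperlinks, with nationwide reports counting toward every region.

```python
# Hypothetical sketch of counting "publicly available" regional reports,
# following the inclusion rules described in the text; field names and
# example reports are invented.

reports = [
    {"name": "Hospital Compare",   "regions": "ALL",  "free": True,  "links_only": False},
    {"name": "Plan member portal", "regions": {"R1"}, "free": False, "links_only": False},
    {"name": "Alliance report",    "regions": {"R1"}, "free": True,  "links_only": False},
    {"name": "Link aggregator",    "regions": {"R2"}, "free": True,  "links_only": True},
]

def available_in(region, reports):
    """Names of reports counted as publicly available for a region."""
    return [r["name"] for r in reports
            if r["free"] and not r["links_only"]
            and (r["regions"] == "ALL" or region in r["regions"])]

counts = {reg: len(available_in(reg, reports)) for reg in ("R1", "R2")}
print(counts)
```

Here the subscription-only plan portal and the hyperlink aggregator are excluded, while the nationwide report contributes to both regional counts.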

Applicability of CQI

Kaiser surveys suggest that a large fraction of the general public finds much of the

information in public reports not relevant to their specific clinical needs. For instance, 53% of survey respondents in 2004 complained that the information available in the report cards “was

not specific to their personal health conditions or concerns” (Kaiser Family Foundation, 2004).

Scanlon et al (2015) and Shi et al (Under Review) are the only two studies in the existing

literature that attempt to capture applicability of report cards. Applicability was viewed as access

to information specific to the consumers’ clinical conditions and was measured for a large group

of chronically ill survey respondents by counting regional public reports that had at least one

clinical measure applicable to the respondents’ chronic condition.
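The applicability measure lends itself to a similar sketch (again with invented data): for a respondent with a given chronic condition, count the regional reports carrying at least one clinical measure for that condition.

```python
# Hypothetical sketch of the applicability count: reports available in a
# respondent's region that carry at least one clinical measure for the
# respondent's chronic condition. All data are invented.

regional_reports = {
    "R1": [{"name": "Alliance report", "conditions": {"diabetes", "asthma"}},
           {"name": "State report",    "conditions": {"heart failure"}}],
}

def applicability(region, condition):
    """Number of regional reports with a measure for the condition."""
    return sum(1 for r in regional_reports.get(region, [])
               if condition in r["conditions"])

print(applicability("R1", "diabetes"), applicability("R1", "copd"))
```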

The concept of applicability may subsume other aspects of report cards that consumers may

view as “relevant” to their cognitive capacities (as opposed to their clinical needs). For instance,

experimental evidence indicates that “text-dense” information with too much detail imposes a

substantial “cognitive load” on users and may trigger triaging strategies to screen out “irrelevant”

contextual information, leading to potentially misleading inferences (Hibbard et al., 1997). Some

researchers have argued for simpler, aggregated quality metrics (i.e., summary measures of

multiple quality indicators) to circumvent the problem of excess cognitive burden (Peters et al.,

2007), pointing to surveys showing consumer preference for composite patient experience


measures (Edgman-Levitan & Cleary, 1996; Harris & Buntin, 2008). Nonetheless, these

strategies pose their own risks since consumers may have needs specific to a clinical condition or

a process and a single composite indicator may miss important quality differentials. Further,

patients with certain types of clinical conditions (e.g., the chronically ill), who have higher odds of

sustained interaction with the healthcare system and ongoing need for effective disease self-

management, may find granular provider measures more informative (Schlesinger et al., 2012,

Shaller et al., 2014). Collectively, these debates underscore the importance of more effective

“targeting” of CQI to the underlying populations’ clinical profile and cognitive capacities.

Making CQI more “applicable” to the concerns of the target audience may well be a prudent

strategy to enhance its utilization by consumers.

Credibility of CQI

Use of an information source, especially for a decision as consequential as choosing a health

care provider, implies a high degree of trust (Cline & Haynes, 2001; Craigie et al., 2002).

Previous literature has attempted to identify sources of CQI that consumers deem trustworthy.

Acknowledging the importance of provider buy-in for the success of quality reporting initiatives

and provider role in guiding consumer choices, some studies have also looked at quality

measurement and dissemination strategies that physicians may consider more credible than

others. The emerging evidence is consistent with the intuition that consumers assign higher

“credibility” to certain sources of information than others (Dutta-Bergman, 2003; Harris &

Buntin, 2008), and that organizations may leverage that knowledge by targeting consumers via

channels that are perceived to be more “neutral” (Christianson et al, 2010). For example, some

consumers may consider information provided directly by health plans or employers as less

credible than if the same data is provided by non-profit multi-stakeholder entities that have


health plans as partners or members. Some studies observe a high consumer trust in sources

validated by the federal government agencies (Dutta-Bergman, 2003). Providers, on the other

hand, may be more inclined to trust information derived from patient medical records that can

capture disease severity in a more nuanced manner than claims databases. Christianson et al. (2010) provide a useful framework that distills the empirical findings into a three-fold measure of

credibility: the report is deemed more credible for consumers and providers if it is endorsed by a

national agency with expertise in quality measurement, if it is produced by a local non-profit

organization or a government agency through a collaborative process involving providers, and if

it uses medical records data as opposed to insurance claims data to generate the quality metrics.
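This three-fold measure suggests a simple additive operationalization; the sketch below is a hypothetical scoring scheme, not the coding actually used by Christianson et al. (2010).

```python
# Hypothetical 0-3 credibility score based on the three criteria distilled
# from Christianson et al. (2010). Field names are invented for illustration.

def credibility_score(report):
    return sum([
        report.get("endorsed_by_national_agency", False),   # national endorsement
        report.get("local_collaborative_producer", False),  # non-profit/gov + providers
        report.get("uses_medical_records", False),          # vs. insurance claims data
    ])

claims_based = {"endorsed_by_national_agency": True, "uses_medical_records": False}
collaborative = {"endorsed_by_national_agency": True,
                 "local_collaborative_producer": True,
                 "uses_medical_records": True}

print(credibility_score(claims_based), credibility_score(collaborative))
```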

Proactive Dissemination of CQI by Sponsors/Producers

Most product markets have well-established marketing strategies designed to “push” their

products toward consumers. Such strategies typically leverage knowledge of the times and circumstances when consumers’ need for a product is high (i.e., the consumer is “in the market”),

considerably heightening the salience of convenient product availability (Celsi & Olson, 1988;

Pratkanis & Greenwald, 1996). Although healthcare markets are somewhat different from

conventional product markets (for one, healthcare provision is rarely viewed as a strictly

commercial commodity), similar principles likely condition consumers’ attention and dedication

of cognitive resources to information in quality report cards (Shaller et al, 2014). Recognizing

this, the initial thrust of the quality transparency movement on developing consumer-friendly

measurement and presentation approaches is gradually evolving into a more pointed emphasis on

context-informed marketing. To this end, producers of report cards have increasingly sought to inform

their strategies by the burgeoning research into cognitive biases that afflict consumers’

information processing as well as affective responses to situations that pose health threats and


often necessitate provider choice (Shaller et al, 2014). This body of work has uncovered a

breadth of challenges to achieving optimal consumer response to quality transparency and

common dissemination approaches tend to reflect these realities. We provide an overview of the

principal issues below.

Low Numeracy, Literacy, English Proficiency, and High “Cognitive Burden” of Information Complexity

An oft-expressed concern relates to the average consumer’s ability to understand complex

performance measures (Hibbard, Greene & Daniel, 2010), especially at a time when the sheer

number of metrics in the public realm has proliferated (Schlesinger et al., 2012). Some

consumers are at added risk of being overwhelmed by the complexity owing to cognitive

difficulties in handling information dense in numerical comparisons or medical jargon. Others

such as recent immigrants may have problems understanding the English language content of

reports. These issues have the potential to widen existing disparities in access to quality

information among socio-economically disadvantaged populations and minority ethnic groups

(Casalino et al., 2007; Greene et al., 2015). CQI producers have attempted to address some of

these concerns by experimenting with consumer-friendly presentation approaches (star ratings,

smileys, etc.), composite quality indicators that summarize across multiple dimensions,

publishing reports in Spanish or other non-English languages, and simpler normative evaluations

of quality (indicating best or worst result on an underlying set of indicators). Despite these

initiatives, problems may remain for certain vulnerable subgroups of populations who are

lacking in resources needed to make optimal use of the available information. For such

individuals, some researchers have advocated for intermediaries (“navigators”) who can guide


consumers by serving as trusted counselors in helping them make informed choices (Shaller et

al., 2014).

Lack of “Consumer-Targeting” of Reports

With the recognition that most consumers at most times may not be looking for CQI and the

reports are best targeted to those who are, a major debate now underway involves how to

leverage such consumer “decision-points” to foster greater consumer engagement (Shaller et al.,

2014). For example, some otherwise healthy consumers may be in the market for a well-defined,

time-delimited need such as maternity care, preventive screenings for cancer or heart disease,

dental procedures, or elective surgery like hip replacement. These “shoppable treatments” often

involve prior planning and a need to choose a provider for the first time by consumers who are

generally outside of the healthcare system; such individuals may need aggressive outreach

strategies (e.g., targeted marketing to pregnant women by publishing in health-oriented

magazines aimed at a female audience). In some cases, people may start looking for provider

quality information when they move to a new area or a new job. Surveys indicate nearly 1 in 10

consumers are looking for a new primary care provider at any given time (Tu & Lauer, 2008). Facing

such “external disruptions”, people may have a short time window to act and information

provided in a timely way at the right place can promote higher utilization (e.g., during open enrollment periods in the context of new employment, when employees must choose a primary care provider). Finally, many contexts for choosing a new provider may arise when people

have negative experiences with their regular providers (“problematic experiences”). Although

most Americans tend to trust their providers (Hall et al., 2001; Goold & Klipp, 2002), nearly a

third indicate having problems with the quality or access to healthcare and many report switching

physicians in the past year (Mitchell & Schlesinger, 2005; Schlesinger et al., 2002). This may


present an opportunity when individuals are more receptive to using specific types of

information (e.g., on egregious medical errors rather than on higher quality achievers on process

measures).

High Information Search Cost

A handful of studies that focus on how consumers search for provider quality information

reveal considerable difficulties in finding reliable online sources of information (Eysenbach &

Köhler, 2002; Sick & Abraham, 2011). For instance, one study found that “web sites most likely

to be found by consumers are owned by private companies and provide information based on

anecdotal patient experiences” and, further, that “searches that focus on clinics or physicians are

more likely to produce information based on patient narratives” (Sick & Abraham, 2011). These

problems are hardly new and formal CQI sponsors have looked to media and advertising to

ensure their products become more accessible to the general public. Many issue press releases

when new or major report updates become available or use radio or television spots to advertise

their continued availability. For instance, CMS partnered with the American Hospital Association to announce the official launch of the Hospital Compare website at the Association of Health Care

Journalists National Conference, an event likely to enhance the website’s profile among reporters

and news correspondents (American Hospital Association, 2002). Perhaps similar concerns

motivate many producers to offer online material free of charge and without requiring a

password protected log-in. Nevertheless, common search queries are unlikely to lead to

authoritative sites like Hospital Compare; the multitude of sources that typically come up in

response vary considerably in content and reliability, challenging even committed consumers’

ability to sift out relevant information (Eysenbach & Köhler, 2002). Moreover, it is unclear what

search terms most consumers use when looking for CQI, amplifying the need to further probe


common online search patterns. Other policy/practice suggestions to guide consumers towards

reliable online content include embedding common search terms within the content of websites

housing formal report cards and partnering with high-visibility online sites to have them embed

report card hyperlinks in conspicuous locations (Sick & Abraham, 2011).

Lack of Standardization in Production and Presentation of CQI

A somewhat distinct worry is the multiplicity of quality measures for the same clinical

problems, mirroring the considerable variation in measurement and presentation approaches of

CQI sponsors (Halasyamani & Davis, 2007; Austin et al., 2015). Such variations may

unavoidably confuse users confronted with specific clinical needs but now saddled with the

unenviable task of making sense of conflicting quality signals embedded in disparate sources

(Rothberg et al., 2008; Rau, 2013). Proliferation of quality measures has fueled a growing bid to

appeal to the authority of organizations with proven expertise in quality measurement such as the National Quality Forum (NQF) or the National Committee for Quality Assurance (NCQA).

Consequently, many CQI sponsors now seek to validate their metrics by using standardized

measurement and presentation approaches developed by these central agencies. To what extent

has the report card content “homogenized” in terms of measurement and presentation approaches

and what impact, if any, this may have had on consumer use is an important subject of future

studies.

Provider-Initiated CQI Dissemination

Although the quality transparency movement has largely been viewed by the provider

community with considerable skepticism (Marshall et al., 2000; Robinowitz & Dudley, 2006), it has also set up incentives for providers to compete on objective and transparent quality metrics aside


from more generic factors such as reputation or clinical experience (Marshall et al., 2000).

There are increasing signs that many have taken this opportunity to advertise their relative

standings in quality (whether self-generated or drawn from other sources) to the general public

through a variety of marketing strategies. Most hospitals have their own Facebook pages and

websites, with a vast majority now offering user-driven star ratings (1-5 stars) drawn from

unsolicited consumer feedback (Glover et al., 2015). User-generated ratings have been shown to

be broadly indicative of both the actual quality measured by more objective metrics and the

firms’ growth in market share in non-healthcare sectors of the economy (Luca, 2011; Galloro, 2011);

this, increasingly, seems to be true of the healthcare sector as well (Lagu, 2010; Greaves et al.,

2014).

As to more granular and broadly validated rating systems such as Hospital Compare

and Leapfrog, very little empirical data exists on how hospitals use them in their marketing

efforts (Muhlestein et al., 2013). Anecdotal media reports indicate a selective and self-serving

(e.g., touting “cherry-picked” favorable ratings while ignoring negative ones) use of public

reports by many regional providers (Ornstein, 2013; Rau, 2013). These concerns are deepened by

reports of major commercial and nonprofit raters (e.g., Healthgrades, U.S. News, and Leapfrog)

charging sizable license fees to providers for using their ratings in advertisement efforts (Rau,

2013). To date, providers seeking to market federal ratings selectively have not faced any hostile

regulatory scrutiny. Indeed, the American Hospital Association has even encouraged members to

disseminate ratings from the Hospital Compare and Nursing Home Compare websites to

consumers, albeit with the caveat that they refrain from comparing themselves to their peers

(American Hospital Association, 2002). Other practices that have raised similar worries include

hospitals’ advertising their Emergency Department wait times via conspicuously sited billboards


and strategically directed television spots (Weiner, 2014). The spread of provider-initiated

marketing efforts has drawn minimal scrutiny from health services researchers, offering a

valuable opportunity to probe into whether and how these initiatives may have affected the

overall public policy goal of matching consumers to higher quality providers.

Role of Media in Dissemination of Report Cards

Dissemination by the CQI producer/sponsor may influence consumers indirectly through

media coverage of issues (Gerbner et al., 1982; Shanahan & Morgan, 1999; Gerbner et al., 2002)

related to comparative provider quality (e.g., articles covering press events sponsored by the

publication source, efforts to increase media reporters’ awareness of CQI and its importance). As

we discussed earlier, mass media campaigns have played a prominent role in the field of public

health, attested by a large and growing literature focused on the efficacy of modern

communication channels (Rogers & Storey, 1987; Snyder, 2007). Further, many media outlets

may independently cover issues or events directly or indirectly related to CQI (e.g., safety record

of local hospitals, comparative performance of regional providers on key conditions relevant to

public health, sentinel events like major surgical mishaps). These issues may be covered by print,

television, or radio sources, but also increasingly by social media (Facebook, Twitter) and issue-specific blogs affiliated with large media organizations. For most media-driven campaigns and “push”

strategies, crucial decisions regarding the drafting of messages, developing logic models of

behavior change, specifying target population, and selecting optimal channels for message

delivery depend, in part, on answering a pivotal question: how do media messages shape public

opinion and attitudes?


Agenda-Setting Function of Media

Following McCombs and Shaw (1972), “agenda setting” has been defined as “the transfer of

issue salience from the news media to the public agenda” (McCombs et al., 2014). Although the

term “salience” has been used in differing ways in communication theory, political science, and

cognitive psychology, in the agenda-setting literature it specifically refers to the ability of news

media to raise the importance of specific issues in public opinion (Kiousis, 2004). Scholarship on

agenda-setting has identified two distinct dimensions of media salience: prominence and valence

(Kiousis, 2004). “Prominence” indicates the importance assigned to the story by its contextual

features (such as placement in the title or text, location on the front page, or space devoted to the story),

often reflecting an active process of selection by the author. In the language of the agenda-setting

theory, the concept of prominence would be used to capture the “first level” of agenda-setting,

which is focused on “objects of attention” (e.g., personalities, issues etc.) (Winter & Eyal, 1981;

Behr & Iyengar, 1985; Watt et al., 1993). The other important dimension of salience is the

concept of “valence”, a measure of affective or emotional aspects of a news story that determines

its normative “framing” (positive, negative, or neutral) of the objects of the story (“attributes” of

objects or “second-level” of agenda setting) (McCombs et al., 1997; Lopez-Escobar et al., 1998). In the context of media coverage of CQI, we expect that the

prominence (issue agenda-setting) will act primarily to raise media consumers’ awareness of

CQI, whereas the valence dimension (attribute agenda-setting) will have its primary impact on consumer attitudes towards CQI (e.g., perceived importance of CQI in healthcare decision-making).
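In an empirical content analysis, these two dimensions of salience could be coded separately for each news item. The sketch below is a hypothetical coding scheme (the specific cues and weights are invented) that illustrates the distinction between prominence and valence.

```python
# Hypothetical coding of media salience for a news item about CQI.
# Prominence (first-level agenda setting): contextual cues of importance.
# Valence (second-level): normative framing of the report card.

def prominence(item):
    score = 0
    score += 2 if item["front_page"] else 0                       # placement
    score += 1 if "report card" in item["headline"].lower() else 0  # in title
    score += 1 if item["word_count"] > 500 else 0                 # space devoted
    return score  # 0 (buried) to 4 (highly prominent)

VALENCE = {"positive": 1, "neutral": 0, "negative": -1}

item = {"front_page": True,
        "headline": "New Report Card Ranks Local Hospitals",
        "word_count": 820,
        "framing": "positive"}

print(prominence(item), VALENCE[item["framing"]])
```

Prominence would be expected to drive awareness of CQI, while valence would shape attitudes towards it.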


Past Literature on Media Coverage of Report Cards

Somewhat surprisingly, media coverage of report cards has attracted very little attention from

empirical researchers. Indeed, we were able to identify just two studies that have examined the

issue to date. In the first study, published nearly twenty years ago, Mennemeyer et al. (1997)

explored the coverage of hospital CQI released by the Health Care Financing Administration

(HCFA) in the late 1980s. The authors used a well-known archival database of print newspapers to

study the content and framing of items focused on quality rankings of local hospitals, concluding

that media coverage of the releases was sufficiently sparse to yield null effects on hospital

market shares in their regression models. One interesting finding was that media discussion of

salient events at a hospital dramatically reduced its market share. In a more recent study, Higashi

et al. (2012) probed media coverage of the public release of unadjusted cancer survival rates of

local hospitals in five major Japanese newspapers that published a total of 13 news items

following the release. Although the authors did not comment on the intensity of news coverage,

their results appear to substantiate Mennemeyer et al.’s conclusions regarding the thinness of coverage of

report cards in the popular press.

Future Avenues of Research

Capturing media coverage raises a host of difficult measurement issues related to the

“conceptual scope” of media, the geographical “reach” of individual media channels, and the

paucity of data sources for major media channels. For instance, the interpersonal and consumer-

driven nature of dialogue on major social media platforms raises questions about whether it can

be counted as a true media organ (Hirsch & Silverstone, 2003; Kwak et al., 2010). Its hybrid

status as a major source of information about current events in the lives of millions and a personal


communication medium poses challenges to any empirical attempts to quantify its information

content (Mangold & Faulds, 2009). The advent of the Internet has further complicated the

measurement of consumer response to “local” media coverage (Althaus & Tewksbury, 2000;

Jeffres et al., 2012). Most print newspapers now have an online version making them instantly

accessible all over the world, raising difficult questions about the extent to which their content

can truly be considered “local”. Relatedly, despite significant expansion of work attempting to

quantify television coverage of health issues, the absence of transcripts of local television broadcasts

remains a major limitation (Long et al., 2005). Beyond these issues, the existing literature lacks a

systematic effort to develop a comprehensive set of content themes and valence frames

applicable to media articles on report cards. The pair of studies that investigated media coverage

of public reports (noted above) did not dwell extensively on which content themes received more

attention from the reporter, which specific valence frames were applied to the content, and how

such framing may have affected readers’ knowledge and behavior. A systematic analysis of the

aforementioned issues using the diverse spectrum of contemporary news media, with the overall

goal of assessing its impact on consumer propensity to use CQI or consumer matching with more

efficient providers, may significantly advance our knowledge of public reporting.

Consumers’ “pull” of CQI

Rather than passive responders to the organizational “push” of quality reports, consumers can be

viewed as active agents, constantly responding to ongoing needs that trigger efforts to pull

information from sources around them. Although most people still consult their family members

or friends or known health professionals for recommendations on choosing providers, nearly a

third say they would look for information online, in a newspaper or a magazine, or ask their

health plan for quality information (Kaiser Family Foundation, 2000). Due to the widespread


access to the Internet, online search for health information, in particular, has rapidly become a

focus of intense interest amongst health services researchers. The portrait that emerges from this

body of work illuminates the powerful role of the Internet: consumers, among other things, use

the Internet to self-diagnose their conditions (Fox & Duggan, 2013) prepare for clinical

encounters (Anderson et al., 2003; Otte-Trojel, 2014), self-treat minor ailments (Fox & Duggan,

2013), access their health records (Otte-Trojel, 2014), schedule appointments (Eysenbach &

Jadad, 2001), exchange emails with providers (Bhandari et al., 2014), chat with patients having

similar clinical conditions (Ziebland & Wyke, 2012), and search for provider quality information

(Kaiser Family Foundation, 2000).

Only a handful of studies have looked at the actual stepwise process by which consumers

retrieve and evaluate health information about CQI from the World Wide Web. An important

early study examined consumer search patterns using focus groups and related qualitative

techniques (Eysenbach and Kohler, 2002); a later study attempted to simulate a real-world

online search for CQI using terms expected to be used by consumers (Sick & Abraham, 2011).

Together, they provide valuable insights into the “mechanics” of the consumer pull. Most study

subjects were likely to use online search engines rather than medical or professional healthcare

sites, use search terms composed of single words rather than combinations, and use the first

results from the output to rephrase the search terms rather than examine later results. Credibility

assessment was perfunctory and rested on professional looking layout, scientific terms and

citations, ease of use, and familiarity with official sources; very few research participants

attempted to verify the actual source of information. Most online searches led to private

websites with anecdotal, unsolicited patient experience reviews, while government or community

websites that had quality comparisons along multiple clinical dimensions were harder to find. A


general implication of these studies is that for an average consumer, “findability” of CQI is low

while the credibility of information is hard to assess. From a policy standpoint, such findings tend to support

attempts by report producers to make their products more easily accessible and efforts to educate

consumers about the promise and the limitations of the Internet. These preliminary insights,

however, leave significant gaps in our understanding of how the general public searches for

provider quality information. For one, small sample sizes limit generalizability of findings; it

may be helpful to explore online experiences of a more representative sample of consumers,

preferably those who acknowledge looking for or using report cards. Also, a host of socio-

demographic (e.g., low income status, literacy, numeracy) and health-related factors (e.g.,

chronic debilitating illness, disability) may condition consumers’ approaches and success in

finding the desired information, and therefore warrant closer scrutiny than the current literature

permits.

Updating Effect of CQI Dissemination and Consumer Search

Mounting evidence supports the notion that the effect of information on consumer choices is

conditioned by consumers’ prior beliefs about the “state of the world” (Ackerberg, 2003;

Crawford and Shum, 2005). Increasingly, while examining the impact of provider quality

information on consumer beliefs and choices, researchers have begun to explore the impact of

“new” information in quality reports (Erdem and Keane, 1996; Mukamel et al., 2005; Mukamel

et al., 2007; Chernew et al., 2008; Jung et al., 2011). We borrow the analogy of such Bayesian

learning and apply it to a different context: to describe the effect of CQI dissemination on

consumer awareness/attitudes towards CQI, and consumers’ use of CQI. We expect that prior to

their exposure to the more systematic and comprehensive quality information contained in the

“formal” quality report cards, consumers’ awareness of any form of CQI is low and stems from


exposure to certain informal sources of provider quality information; these sources typically

include recommendations of friends and family members drawn from personal experience,

hearsay, and/or Internet sites that rate patient experience (e.g., WebMD). These experiences

generate a set of attitudes characterized by consumers assigning low importance to CQI in choice

of healthcare providers, deeming most physicians as providing roughly equal quality of

healthcare, and being reluctant to switch doctors based on healthcare quality. This set of

“priors”, in turn, yield consumers’ low odds of using quality information to inform their choices

of physicians. The organizational “push” of CQI towards consumers, along with consumers’

propensity to “pull” CQI, acts on these consumer “priors” to “update” consumers’ awareness of,

attitudes towards, and use of CQI, yielding higher awareness, more favorable attitudes towards,

and consequently, higher likelihood of use of report cards.

Furthermore, we hypothesize that the updating effect of organizational push and consumer

pull of CQI will vary based on the consumers’ state of awareness (or lack thereof) at baseline.

Specifically, dissemination will increase the likelihood of consumers becoming aware of CQI if

they are unaware of CQI prior to dissemination (“gaining awareness”) and, conversely, decrease

the likelihood of becoming unaware of CQI (“losing awareness”) if they are already aware of

CQI. Similarly, dissemination of CQI will increase the likelihood of “starting” to perceive

quality reports as important in choosing providers, “starting” to acknowledge quality differentials

among doctors in the region, “becoming” willing to switch doctors on grounds of provider

quality differences, and “starting” use of quality reports among consumers who are unaware

of CQI. Conversely, among those already exposed to CQI, dissemination will decrease the

likelihood of “losing” belief in its importance, “losing” perception of quality differentials among


doctors in the region, “losing” willingness to switch providers based on quality differentials, and

“stopping” use of CQI.
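The Bayesian-learning analogy above can be made concrete with a toy example of our own (not part of the dissertation’s analysis): a consumer’s belief that CQI is useful is modeled as a conjugate Beta-Bernoulli prior that is updated by exposures to disseminated reports. The prior parameters and exposure counts below are purely illustrative assumptions.

```python
def beta_update(alpha, beta, positives, negatives):
    """Conjugate Beta-Bernoulli update: convincing exposures to CQI raise
    alpha, unconvincing exposures raise beta."""
    return alpha + positives, beta + negatives

def mean_belief(alpha, beta):
    """Posterior mean probability that CQI is useful for choosing providers."""
    return alpha / (alpha + beta)

# A skeptical prior: the consumer rates CQI useful with probability 0.2.
alpha0, beta0 = 2.0, 8.0
prior = mean_belief(alpha0, beta0)

# Dissemination then delivers six convincing and one unconvincing exposure.
alpha1, beta1 = beta_update(alpha0, beta0, 6, 1)
posterior = mean_belief(alpha1, beta1)   # rises from 0.2 to 8/17
```

The example mirrors the text’s asymmetry: the same flow of exposures moves an unaware (skeptical-prior) consumer upward sharply, while a consumer whose prior already favors CQI is merely reinforced.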


Chapter 3


Research Questions

1. How much formal comparative quality information is available to consumers? How

much of it is applicable to their clinical conditions? To what extent is the information

credible?

2. Do availability, applicability, and credibility of CQI vary across AF4Q regions?

3. Do greater availability, applicability, and credibility of CQI lead to higher awareness

and use or to more favorable attitudes towards it?

4. What strategies do AF4Q alliances use to disseminate quality report cards?

5. Do AF4Q sites vary in type and intensity of report card marketing efforts at a point in

time and over time?

6. Does more intense report card dissemination generate gains in consumer awareness and

use of CQI or produce more favorable consumer attitudes towards CQI?

7. How and to what extent do local print media cover quality report cards?

8. How does media coverage of public reporting affect consumer awareness and use of

CQI?

9. How does “framing” of report card media coverage affect media’s impact on consumer

awareness, use, and attitudes towards CQI?


Chapter 4


Methods

Study Design

Our study consisted of two parts: a descriptive component that characterized regional

availability, applicability, and credibility of CQI, alliance strategies to disseminate quality report

cards (describing the spectrum of CQI dissemination approaches), and print media coverage of

CQI during the two study periods; and a quantitative component, a longitudinal (two-period

panel) analysis using fixed-effects regression methods to explore the relationship between these

key predictors and a set of consumer outcomes, including consumer awareness of CQI, attitudes

towards CQI, and actual use of CQI in health care decision-making.
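A minimal sketch (with simulated data, not the AF4Q survey) of why the two-period panel design works: with two waves, the within (fixed-effects) estimator is numerically identical to a first-difference regression, so any time-invariant consumer trait drops out. The variable names and the continuous "awareness index" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                  # consumers observed in both survey waves
beta_true = 0.8          # hypothetical effect of dissemination on an awareness index

# Two-period panel: the consumer fixed effect mu_i is constant across waves.
mu = rng.normal(size=n)
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y1 = mu + beta_true * x1 + 0.1 * rng.normal(size=n)
y2 = mu + beta_true * x2 + 0.1 * rng.normal(size=n)

# With T = 2, the fixed-effects estimator equals the first-difference
# estimator: mu_i cancels out of (y2 - y1).
dx, dy = x2 - x1, y2 - y1
beta_hat = float(dx @ dy) / float(dx @ dx)
```

Differencing removes stable consumer characteristics (education, health literacy, region of residence at baseline), which is exactly why the design identifies the effect of changes in dissemination between the two periods.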

Data Source(s)

The AF4Q Community Quality Reporting Tracking Database (AF4QTD) regularly tracks and

records the number of quality reports released to the public in AF4Q communities by a variety of

public and private organizations including health plans, Medicaid, state government, the federal

government, and private non-profit organizations. AF4Q research staff constantly updates this

information by reviewing websites and collecting additional data by periodically interviewing

key informants in the AF4Q alliances about their public reporting activities. We collected

information on a number of distinct domains that included name and type of sponsoring

organization (government, coalition, hospital association, etc.), name of report, geographic

coverage area (local, state, or national), reporting unit (physician, hospital, health plan), source(s)

of scores (if reprinted from another public report, like Leapfrog or Hospital Compare) or


measures (CMS, AHRQ, AQA, HEDIS, H-CAHPS), source of data (administrative, medical

record, or survey), number of providers included in the report, year of publication, website URL

link, recipient type (health plan members, general public, physicians, etc.), form of distribution

(web-only or hard copy), and year of data collection.

Site tracking reports include information on a variety of topics related to public reporting

activities of the alliances, including number of iterations of web-based quality reports issued,

public reporting dissemination partners, budget, public release plans, alliance website traffic

information, etc. Our site tracking databases culled information from project-related documents

including community funding proposals, websites of alliance or community partners,

strategic planning papers, agendas and minutes of staff meetings, alliance feedback reports to the

AF4Q National Program Office and the Robert Wood Johnson Foundation, and media stories on

alliance public reporting activities.

Alliance public reporting summaries and public reporting timelines provide regularly

updated information on all the alliance activities related to generating, posting, and disseminating

quality report cards, along with a detailed record of important dates pertaining to such activities.

Key informant interview transcripts record insights gained from an intensive process of in-

depth, open-ended interviewing with alliance personnel chosen for their extensive knowledge of

AF4Q program activities (both in staff and leadership positions). These conversations were held

during periodic site visits by the AF4Q evaluation team and covered a wide range of topics,

including participants’ views of the alliance’s progress and barriers in each of the AF4Q program

areas. The data from the audio recordings was transcribed and saved in text files, which were

thoroughly read and assigned a set of content codes in accordance with pre-established coding


guidelines and definitions. The coded data were then entered into Atlas.ti, a qualitative analysis

software package that allows sorting and querying of data by specific content codes.

Access World News (NewsBank) and LexisNexis Academic are two large newspaper

databases that hold a systematic collection of current and archival newspaper articles in regional,

national and international news sources, with instant retrieval systems by location and time

frame. These databases allow searches by keywords and region (for Access World News this is

usually a state or a city/town within the United States, whereas for LexisNexis the smallest region

over which the search may be carried out is a state) of all newspapers published in the region and

contain options for organizing the search results by relevance or time of publication. Additional

functionalities allow narrowing of search to specific types of media (e.g., newspapers, blogs,

magazines, newswires, television broadcast transcripts, videos, etc.), specific publication (e.g.,

New York Times), subject, and language. The full text of the newspaper articles is displayed

with byline, dateline, length of article, and section of newspaper (front page, sports section, etc.)

that carried the print article. Both databases allow saving of search results and generation of print

versions of Word or PDF documents for the saved results.

The Aligning Forces for Quality Consumer Survey (AF4QCS) is a random-digit-dial (RDD)

survey initially conducted between June 2007 and August 2008 for chronically ill adults (18 or

older) in the 14 AF4Q regions. To be eligible for the survey, the respondents had to have at least

one of the following five chronic conditions: diabetes, hypertension, asthma, heart disease, and

depression. The same respondents were resurveyed in the second wave between July 2011 and

November 2012, along with additional RDD respondents to account for attrition and

demographic change. The first round of the survey yielded a response rate of 27.6% by the American

Association of Public Opinion Research (AAPOR) method and 45.8% based on the Council of


American Survey Research Organizations (CASRO) method. The panel response rate in the

second wave was 63.3%, yielding an overall response rate of 39.7% by the AAPOR and 42.1% by the

CASRO method. While our response rates are comparable to most other large national telephone

surveys and reflect a continuing trend of declining response rates over the last two decades, we

attempted to validate the demographic profile of our respondents by benchmarking it against

face-to-face interview surveys that are considered a “gold standard” with respect to survey

methodology. To do this, we compared the AF4QCS respondents to the 2008 and 2011 National

Health Interview Survey respondents (which has a nearly 90% response rate) and found no

significant differences in the demographic composition and prevalence of chronic illness

between the two surveys.

Commercial health plan enrollment data from HealthLeaders InterStudy Health Plan

Enrollment Database (InterStudy) was used to compute the percentage of local population that

had access to health-plan sponsored provider quality information; this dataset has been widely

used in prior literature to calculate commercial health plan penetration rates (Adams and Herring,

2008). Following previous studies, the Dartmouth data were used to construct county-level

estimates of physician supply per 1000 residents (Lewis et al., 2013).

Study Sample

Our analytic sample consisted of 4235 chronically ill adults (18 or older) in the 14 AF4Q

regions who acknowledged having consulted a physician at least once in the past 24 months for

treatment of one or more of the following conditions: diabetes, hypertension, asthma, heart

disease, and depression.


Timeline of Measurement

The timeline for measurement of dissemination (independent variable) and consumer

outcomes (dependent variable) is illustrated in Figure 4-1. Dissemination efforts of alliances as

well as media coverage of CQI were measured for the period of one year immediately preceding

the administration of a round of the consumer survey, i.e., June 1, 2006 to June 1, 2007 (first period)

and June 1, 2010 to June 1, 2011 (second period). This was done to ensure all consumers who

were administered the survey, which was staggered over a period of roughly 12 months at both

rounds, were exposed to the factors measured by our key independent variables. The consumer

outcomes are drawn from AF4QCS, which was administered at two time periods following these

time frames, as described earlier in the data sources section.

Figure 4-1 Timeline of Measurement of Independent (Blue Double Arrows) and Dependent Variables (Red Double Arrows)

Dependent Variables

Our principal dependent variable, which captures consumer awareness of comparative

quality information, is generated from the following two survey items: “Did you see any

information comparing the quality among different doctors in the past 12 months?” and “Did



you see any information comparing the quality among different hospitals in the past 12

months?” The resulting binary variable is assigned a value of 1 if the respondent replied yes to

any or both questions and 0 if the reply was no to both items.

We captured three distinct consumer attitudes towards CQI with different sets of survey

items. First, we asked respondents about the importance they assigned to CQI (perceived

importance of CQI in choosing doctors) if they had to choose a doctor to treat their condition

and recorded their responses on a Likert scale. The three items used to measure this

variable were prefaced by a common query “The next time you choose a doctor to treat your

condition(s), how important might you consider” followed by, in turn, (1) a report that shows

which doctors follow recommended approaches to treat your chronic condition(s), (2) a report

that shows, for people with chronic conditions similar to yours, the outcomes for

patients treated by different doctors, and (3) a report that compares how satisfied other patients

are with their doctor or medical group. If the respondent characterized any of the three types of

reports as “very important” or “important” in choosing doctors, we coded the variable as 1;

otherwise, we coded it as 0.

Second, we examined the extent to which consumers perceive providers in their community

to be different with regard to health care quality (acknowledgement of provider quality

differentials) by asking them to agree or disagree with the following categorical statement:

“Doctors in my community are all pretty much the same in terms of the quality of the care they

provide”. If consumers agreed or strongly agreed with the statement, we coded it as

0, and coded it as 1 if they strongly disagreed or disagreed.

Finally, we assessed consumers’ willingness to switch physicians based on differences in

quality by asking them to react to the following statement of intent: I would consider going to a


different doctor than the one I normally see if the new physician's quality was higher and my

costs were about the same. We coded this variable as 1 if the respondent agreed or strongly agreed

with the statement and 0 otherwise.

Our survey allowed us to codify two distinct types of use of report cards by consumers: to

make decisions about providers (“Did you personally use the information you saw comparing

quality among doctors in making any decisions about doctors?” and “Did you personally use the

information you saw comparing quality among hospitals in making any decisions about

hospitals?”; yes to any=1, no to both=0) and to discuss the report with their doctor (“Did you talk

with your doctor about the report?”; Yes=1, No=0).
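The binary codings described in this section can be sketched as follows; the function names and response strings are our own illustrative stand-ins for the AF4QCS items, not the survey’s actual field names or response categories.

```python
def code_any_yes(*responses):
    """Binary outcome: 1 if the respondent answered yes to any item, else 0."""
    return int(any(r == "yes" for r in responses))

def code_importance(*ratings):
    """1 if any of the three report types was rated important or very important."""
    return int(any(r in ("important", "very important") for r in ratings))

# Hypothetical respondent: saw doctor CQI but not hospital CQI, used neither.
awareness = code_any_yes("yes", "no")    # doctor item, hospital item -> 1
use = code_any_yes("no", "no")           # used neither type of report -> 0
importance = code_importance("not important", "very important", "not important")  # -> 1
```

The same any-yes rule serves both the awareness and the use variables, since each is built from a doctor item and a hospital item collapsed into one indicator.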

Independent Variables

CQI Availability

Availability of CQI was measured for each consumer by the number of publicly available

physician and hospital quality reports in the alliance region in which the consumer resided from

June 1, 2006 to June 1, 2007 (first period) and from June 1, 2010 to June 1, 2011 (second

period). Notably, reports varied considerably in completeness of coverage of physicians or

hospitals that supply their services to local residents in the specific reporting region, and whether

information was provided in aggregated form (i.e., for a group of physicians) or for individual

physicians. Following the approach of an earlier study (Scanlon et al., 2015), we excluded

reports that had information on a very narrow or small group of physicians but included national

(i.e., reports published by national public and private organizations that are available in all

regions), local, and regional reports that were available to the general public (including those

produced by health plans) without a secure log-in or password.


CQI Applicability

Applicability of CQI was measured for each consumer by counting regional publicly

available physician and hospital quality reports that had at least one measure related to that

consumer’s chronic condition(s). For instance, if a hypothetical region had just two report cards

available, the first having (at least one) measure(s) of diabetes and hypertension and the other having

(at least one) measure for elective hip surgery, a consumer living in that region was assigned an

applicability score of 1 if he had diabetes, 1 if he had hypertension, 2 if he had both, and 0 if he

had neither. Note that availability of reports would be 2 for each consumer irrespective of their

clinical condition. Hence, our measure of availability varied only at the alliance-region level, while

applicability varied based on both the kind of reports available and the type of clinical condition

from which the consumer suffered.
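As we read the worked example, the applicability score counts report-condition pairs: a consumer with both diabetes and hypertension scores 2 even though a single report covers both conditions. A minimal sketch of that reading, reusing the text’s own hypothetical two-report region:

```python
def applicability_score(report_measures, consumer_conditions):
    """For each of the consumer's chronic conditions, count the reports
    carrying at least one relevant measure, then sum across conditions
    (i.e., count report-condition pairs)."""
    return sum(
        len(measures & consumer_conditions)
        for measures in report_measures
    )

# The hypothetical two-report region from the text.
reports = [
    {"diabetes", "hypertension"},   # report 1
    {"elective hip surgery"},       # report 2
]

assert applicability_score(reports, {"diabetes"}) == 1
assert applicability_score(reports, {"hypertension"}) == 1
assert applicability_score(reports, {"diabetes", "hypertension"}) == 2
assert applicability_score(reports, set()) == 0
# Availability, by contrast, is len(reports) == 2 for every consumer here.
```

The set intersection makes the contrast with availability explicit: availability depends only on the region’s report list, while applicability also depends on the consumer’s conditions.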

CQI Credibility

Credibility of CQI was measured for each consumer by counting publicly available physician

and hospital quality reports in the consumer’s alliance region that had all of the following three

attributes: (1) were produced/sponsored by non-profit agencies and/or governmental entities (as

opposed to health plans), (2) were constructed from medical records data or patient experience

surveys (as opposed to claims data or data from provider surveys), and (3) their quality

measures were endorsed by reputable national organizations (e.g., the National Quality Forum). This

measurement strategy is inspired by Christianson et al.’s (2010) categorization of the credibility of

report cards, based on empirical evidence on consumer and provider trust in sources of provider

quality information. Provider trust may be relevant to consumers’ use of CQI, since many consult

their regular physicians for guidance on choice of other providers (e.g., specialists) and providers

often provide referrals based on quality information in public reports.
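The three-attribute screen reduces to a simple conjunction filter over the report inventory; the field names and example records below are hypothetical stand-ins, not drawn from the actual AF4QTD schema.

```python
def credibility_count(reports):
    """Count reports meeting all three credibility attributes at once."""
    return sum(
        1 for r in reports
        if r["sponsor"] in ("non-profit", "government")         # attribute 1
        and r["data"] in ("medical record", "patient survey")   # attribute 2
        and r["nqf_endorsed"]                                   # attribute 3
    )

reports = [
    {"sponsor": "government", "data": "medical record", "nqf_endorsed": True},
    {"sponsor": "health plan", "data": "medical record", "nqf_endorsed": True},
    {"sponsor": "non-profit", "data": "claims", "nqf_endorsed": True},
]
# Only the first record satisfies all three criteria.
```

Requiring the conjunction (rather than, say, any two of three attributes) makes credibility a strictly harder bar than availability, which is why it can vary even among regions with identical report counts.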


Alliance Proactive Dissemination

We explored the role of producer-driven dissemination strategies by distilling the broad

spectrum of alliance approaches into an ordinal “Alliance Proactive Dissemination Score” for

the two study periods (June 2006–June 2007 and June 2010–June 2011) using a two-step

process. In the first step, three data sources (public reporting summaries, public reporting

timelines, and KII transcripts) were thoroughly reviewed to identify major strategies used by

alliances to disseminate their public reports. Interview transcripts were scanned through querying

software to identify excerpts that were tagged with the code “dissemination”; the resulting output

was printed out and manually read to isolate specific content that provided information about

CQI dissemination strategies. Public reporting timelines were consulted for important landmark

events (e.g., online posting of report) and public reporting summaries were probed to identify

and crosscheck information gained from the other sources. An example of the review process is

provided in Table A-1, which illustrates the data analyses performed on public reporting summaries

and KII transcripts for two alliances. The second column contains excerpts culled from text in PR

summaries/KII transcripts that describe alliance dissemination activities. The third column

indicates the potential categories (drawn from review of summary excerpts and other supporting

data) along which dissemination strategies were classified (“categories” of alliance

dissemination).

We identified eight distinct categories at the culmination of this process and, in the second

step, assigned a binary score to each individual alliance based on whether a given strategy was

adopted during the time period of interest. An overall score was calculated for each alliance by

summing scores for each individual category. These eight categories were, respectively: whether

alliance posted its quality report on a website, whether the alliance quality report was published in a

non-English language, whether the online report was updated at least once within the period of

measurement, whether alliance published report in consumer-focused magazines, whether

alliance issued press releases to media outlets about the quality report, whether alliance

collaborated with community-based organizations/stakeholders to disseminate report, whether

alliance hired a special public relations/communication expert to aid dissemination, and, lastly,

whether alliance did original consumer research to aid its marketing/dissemination strategies.
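The scoring described above reduces to a sum of eight 0/1 indicators, giving each alliance a score between 0 and 8 per period. A sketch, with category keys paraphrased from the list above (the key names themselves are our own shorthand):

```python
# The eight binary dissemination categories, paraphrased from the text.
CATEGORIES = [
    "posted_report_on_website",
    "published_in_non_english_language",
    "updated_online_report_within_period",
    "published_in_consumer_magazines",
    "issued_press_releases",
    "collaborated_with_community_organizations",
    "hired_pr_communication_expert",
    "did_original_consumer_research",
]

def dissemination_score(alliance_flags):
    """Alliance Proactive Dissemination Score: sum of eight 0/1 indicators."""
    return sum(int(alliance_flags.get(c, False)) for c in CATEGORIES)

example = {
    "posted_report_on_website": True,
    "updated_online_report_within_period": True,
    "issued_press_releases": True,
}
score = dissemination_score(example)   # 3 of 8 strategies adopted
```

Treating each strategy as a binary flag keeps the score ordinal: an alliance scoring 6 adopted more strategies than one scoring 3, without claiming that any single strategy is worth more than another.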

Media Coverage of CQI

This is one of the first studies to examine media coverage of report cards. At the outset, we

faced an important constraint: data on local non-print media sources (television, radio) was

fragmentary or unavailable. Therefore, we chose to focus on print media. Three distinct types of

regional print media coverage scores were generated: media coverage of alliance issued public

report (alliance-sponsored CQI), media coverage of public reports issued by Centers for

Medicare & Medicaid Services (CMS-sponsored CQI), and media coverage of public reports

issued by an agency other than an alliance or CMS, e.g., the Leapfrog Group (“general” or non-

alliance, non-CMS CQI). These choices implemented an important study aim: to explore

possible differences in intensity and nature of press coverage of distinct types of report cards. The

categorization echoes our expectation that certain types of CQI may not only receive coverage of

differing intensity but may also have distinct impacts on consumers. For instance, CMS-

sponsored websites (such as Hospital Compare and Nursing Home Compare) are possibly the

most well-known formal sources of CQI and provide information on virtually all

hospitals/nursing homes in the nation. It is plausible that press coverage of Hospital Compare

may have a stronger impact on consumer use than similar coverage of a less known CQI source.

Similarly, we expected alliance-sponsored reports to be extensively covered in local media


owing to special efforts made by alliances to comply with the AF4Q mandate to actively

disseminate.

In addition to measuring coverage of report cards, we also evaluated coverage of patient

safety issues in the print press. This strategy was motivated by the notion that local media

coverage of patient safety issues is likely to be strongly correlated with consumers’ likelihood of

searching for quality information as well as the regional “supply” of CQI. In other words, if

consumers are more aware of safety problems with local healthcare providers (owing to media

discussion) they may be more inclined to use CQI; CQI producers, in turn, may be more

motivated to provide quality differentials in areas with an especially high incidence of events

compromising patient safety.

Selection of Local Print Media

We used Access World News as our primary database and LexisNexis for some regions, such

as Western New York, where search results were not available owing to technical issues (e.g.,

output failed to display on repeated attempts). Each AF4Q county was linked to its towns/cities

using updated Census Bureau information. The resulting list of cities/places was used to conduct

searches for media articles published within the “catchment area” of specific alliances. Table A-

2 displays a listing of local newspapers for all 14 alliance regions. The list covers all the

newspapers published in the alliance region for which archival data was available and included

106 local newspapers in 2007 and 114 in 2011. Owing to varying size of AF4Q regions and

differing historical newspaper density, there is substantial variation in the number of newspapers

published in a given area. For instance, the Humboldt County alliance, despite covering only one

county, had three newspapers, whereas a large city like Memphis had just two. News sources that


were web-only, blogs, magazines, and transcripts of radio/TV stations were excluded from the

search.

Selection of Search Keywords

In order to avoid missing relevant articles, we tested alternative keywords for general

searches that were not linked to a specific website name. As an example, we tested two keywords

(“Quality AND Physicians” and “Ranking AND Physicians”) relevant to general CQI for Maine

for the period June 1, 2010–June 1, 2011. The search “Quality AND Physicians” yielded 7

relevant items, whereas “Ranking AND Physicians” yielded just 3 and was, therefore, rejected in

favor of the former. Table A-3 provides the list of all four types of media coverage with their

corresponding keywords. Note that for some types of media coverage multiple keywords were

used to avoid the possibility of missing articles relevant to the query. For instance, general CQI

may consist of quality report cards about hospitals or individual physicians. Therefore, it was

deemed appropriate to include keywords that could extract media articles focusing on both types

of CQI.

Selection, Content-Coding, and Weighting of Articles

Each print media article retrieved from the search was selected for relevance, coded for

content, and then assigned weights that captured its likelihood of impact on consumer outcomes.

This three-step process is illustrated in Figures 4-2 and 4-3.


Figure 4-2 Flowchart Depicting Selection For Relevance, Content Coding, And Weighting Of Media Articles On CQI

[Flowchart summary: (1) keyword search for CQI (alliance website name; “Quality AND Physicians”; “Quality AND Hospitals”; Hospital Compare, Nursing Home Compare, HCAHPS, Home Health Compare); (2) shortlist the initial set of results and select articles for full-text review by scanning the title and abstract, excluding those not related to CQI; (3) apply the content coding scheme, assigning one or more categories to the content (discussion of the importance of health quality transparency; discussion of variation in quality across health providers; web linkage to a CQI source; direct comparisons between providers); (4) weight each article for prominence based on three elements (location of keyword in title, location of article, and word length) and assign valence weights to each eligible coded content category.]


Figure 4-3 Flowchart Depicting Selection For Relevance, Content Coding, And Weighting Of Media Articles On Patient Safety

[Flowchart summary: (1) keyword search for patient safety (“Patients AND Safety”; “Medical Errors”; “Medical Malpractice”); (2) shortlist the initial set of results and select articles for full-text review by scanning the title and abstract, excluding those not related to patient safety; (3) apply the content coding scheme, assigning article content to one or more categories (discussion of patient safety practices of healthcare providers; sentinel events); (4) weight each article for prominence based on three elements (location of keyword in title, location of article, and word length) and assign valence weights to each eligible coded content category.]


Selection for Relevance and Full Text Review

The first step consisted of selecting articles relevant to the broad theme of CQI using the

corresponding keywords. The process of selection was guided by an algorithm that laid out a

stepwise approach with a sequential set of decision rules (Figure A-1). In determining relevance

of an article, a conservative approach was taken: if there was any doubt, the article

was included in the shortlist for a full text review. A similar process was performed for articles

on patient safety.

Applying Content Codes

The content of each CQI-focused article was read thoroughly and assigned one or more of

four distinct content categories: discussion of health quality transparency/disclosure, discussion

of variation in quality across health providers, a web link for a CQI source, and whether the story

provides a direct comparison between providers in terms of quality, cost, or efficiency of

healthcare delivery. Code selection was guided by a process of expert review and validation after

extensive discussion among members of the dissertation committee, and reflected an emphasis on major

themes of policy discussion with respect to provider quality transparency. Coding assignment

was guided by clear definitions of each coding category with accompanying description of key

terms for each definition (Table A-4). Following a parallel process, articles related to patient

safety were assigned one or more of two content categories: discussion of issues related to

patient safety in healthcare delivery and discussion of “sentinel” events (egregious medical errors

by healthcare providers) (Table A-5).

Weighting the Coded Articles

While the coding scheme provides a means for capturing the major themes related to CQI

and patient safety that are expected to affect consumers’ behavior, the impact of media coverage may


also depend on the way a news item “frames” the issues discussed. Drawing from a theoretical

framework developed by communication theorists for describing this agenda-setting function of

the media (and described above in the conceptual framework section), we used a two-pronged

scheme for weighting of articles that reflects their likelihood of capturing consumers’ attention

and molding attitudes. To do this, we separately assigned valence weights and prominence weights

to each content-coded article.

“Prominence” of a media item was captured by assigning weights to three elements that

determined the conspicuousness of the story’s location within the newspaper: whether the text of

the article title included the keyword (1 if yes, 0.5 if no), location of article in the paper (1 if on

front page, 0.5 otherwise), and space devoted to the story (1 if word length exceeded 500, else 0.5).

“Valence” refers to the normative frame applied by the author to the topic. Valence weights were

assigned to individual content codes. The weighting scheme was motivated by the overall notion

that a news story framed to offer an exclusively positive view of importance or desirability of

provider quality transparency and/or of the importance of consumers being aware of provider

quality differentials would plausibly have a greater impact on consumers’ odds of being aware of

and using CQI. Conversely, an exclusively negative view of the functioning of the healthcare system

or providers (e.g., compromised patient safety or egregious medical errors) may spur more

consumers to use CQI or be vigilant about provider quality differentials. Note that some coding

categories are unsuitable for assignment of valence weights because they describe a factual

scenario or an inherently negative event. For instance, we did not assign any valence weight to

discussion of web-links for CQI sources, or to text that made quality comparisons between local

providers. Similarly, we did not assign valence weight to the code for sentinel events since these

events are by definition negative. To assign valence weights to each eligible content code, we


developed stepwise algorithms displayed in Figure A-2 (discussion of quality transparency),

Figure A-3 (discussion of provider quality variation), and Figure A-4 (discussion of patient

safety practices of providers). Each algorithm embodies a sequential set of decision rules at the

end of which each eligible code can be assigned either a positive (weight=1), a negative

(weight=0), or a neutral valence (weight=0.5). Table A-6 provides a few illustrative examples of

the text of media articles to which the content coding and valence weighting scheme has been

applied.
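The weighting rules above can be sketched in code. The snippet below is an illustrative reconstruction of the prominence elements and the three valence outcomes (positive = 1, neutral = 0.5, negative = 0); the function names and the string labels for valence outcomes are our own, not taken from the dissertation's actual coding materials.

```python
# Illustrative sketch of the prominence and valence weighting rules (hypothetical names).

def prominence_weights(keyword_in_title, front_page, word_count):
    """Three prominence elements, each weighted 1 if prominent and 0.5 otherwise."""
    return [
        1.0 if keyword_in_title else 0.5,  # keyword appears in the article title
        1.0 if front_page else 0.5,        # article located on the front page
        1.0 if word_count > 500 else 0.5,  # space devoted to the story
    ]

# Valence outcome of the decision-rule algorithms, mapped to a numeric weight.
VALENCE_WEIGHT = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}

def valence_weight(frame):
    """Weight for one valence-eligible content code, given its assigned frame."""
    return VALENCE_WEIGHT[frame]
```

For example, a front-page article of 620 words whose title lacks the keyword would receive prominence weights [0.5, 1.0, 1.0].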

Generation of Media Coverage Scores

Media coverage scores were estimated separately for alliance-sponsored CQI, CMS-sponsored

CQI, general CQI, and patient safety articles in two steps. In the first step, normalized weighted

and unweighted scores were calculated for each article. Table A-7 gives three examples of such a

hypothetical calculation. In the first example, the top two rows show an article that has been

coded for content related to CQI (article 1) by assigning all four code categories (indicated by

letters A, B, C, and D; see key at the bottom of table) to its text, where each content code

receives a score of 1. Without valence or prominence weights, the unweighted normalized score

(top row) will be calculated by summing up the four scores and dividing by the total number of

possible content codes (i.e., 4), resulting in a score of 1. On the other hand, when we assign

valence weights to the two eligible content codes (indicated by A and B; see key at the bottom of

table) and three prominence weights (indicated by letters G, H, I; see key at the bottom of table)

to the article, we get a normalized weighted score of 1 (i.e., a total score of 7 divided by the

highest possible score of 7). Note that the scores for valence-weight-ineligible codes (C and D;

see key at the bottom of the table) enter the final calculation without being down-weighted. In the second

step, a cumulative media coverage score is computed for each alliance by summing up


normalized scores across individual articles within specific media coverage categories. Within

each alliance, these scores are calculated separately for each of the three types of report cards

and patient safety.
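The two-step arithmetic can be illustrated with a short sketch that reproduces the worked example from Table A-7 (four content codes A-D, of which A and B are valence-eligible, plus three prominence weights). Function and variable names here are ours, for illustration only.

```python
# Hypothetical sketch of the two-step media coverage score calculation.

VALENCE_ELIGIBLE = {"A", "B"}  # codes C and D enter without valence weighting

def article_scores(codes, valence, prominence):
    """codes: {code: 1 if assigned, else 0}; valence: weights for eligible codes;
    prominence: three weights in {0.5, 1.0}. Returns (unweighted, weighted)."""
    # Step 1a: unweighted score = assigned codes / number of possible codes
    unweighted = sum(codes.values()) / len(codes)
    # Step 1b: weighted score = valence-weighted code scores plus prominence
    # weights, divided by the highest possible total (here 4 + 3 = 7)
    total = sum(v * (valence[c] if c in VALENCE_ELIGIBLE else 1.0)
                for c, v in codes.items()) + sum(prominence)
    weighted = total / (len(codes) + len(prominence))
    return unweighted, weighted

def alliance_score(article_weighted_scores):
    """Step 2: cumulative coverage score = sum of normalized article scores."""
    return sum(article_weighted_scores)
```

For article 1 of Table A-7 (all four codes assigned, valence weight 1 for A and B, all three prominence weights equal to 1) this yields an unweighted score of 1.0 and a weighted score of 7/7 = 1.0, matching the text above.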

Given our coding and weighting scheme, a score of 1 for a CQI focused article can be

interpreted as an article published in a regional newspaper on the front page whose title contains

corresponding keyword(s) or a close analogue that indicates provider quality comparison, and

which contains discussion related to all four content areas (transparency of quality or cost of

services of health care providers, variation in healthcare quality across regions or demographic

groups, comparisons of providers in terms of quality, cost or efficiency, and web links to quality

reports), and underscores the importance of informing consumers about provider quality and

regional variation in healthcare quality without expressing any skepticism towards the utility of

CQI or doubts that it may confuse consumers. Similarly, a score of 1 for a press article focused

on patient safety can be interpreted as an article published in a regional newspaper on the front

page whose title contains corresponding keyword(s) or a close analogue that indicates patient

safety and which contains discussion related to two content areas (discussion of patient safety

practices of healthcare providers and sentinel events) and reflects negatively on patient safety

practices of providers. For brevity, in the following sections we will refer to such an article as an

“idealized” news article.

Inter-rater reliability (IRR) testing

Three raters, including the principal author and two undergraduate students, were assigned to

complete the process of article selection, content coding, and weighting. The assignment of

individual alliances among the three raters is shown in Table A-8. The rating process and the

inter-rater agreement analyses were completed in two steps. In the first step, the two student


raters were extensively trained in selection of articles, code application, and weight assignment.

The training included extensive discussions of the selection, code, and weight assignment

algorithms, discussion of definitions to clarify details and elucidate key terms, and test selection,

coding, and weighting of output from a selected set of keywords. Once the raters achieved a

high threshold of inter-rater agreement with the author and with each other (roughly 80%), a

second process of random audit was put into place. In this process, the principal author

selected specific alliance-keyword combinations (from among assigned alliances and illustrative

of each of the major types of media coverage variables) which the raters used to search and

select relevant articles and then apply content codes and weights. For each selected keyword-

alliance pair, inter-rater agreement was calculated between the rater and the author separately for

the selection, coding, and weighting stages. The process of calculation of inter-rater agreement is

presented in Table 4-1.

The actual inter-rater agreements achieved are shown in Table A-9. For most keyword-

alliance combinations, a high degree of agreement was reached for the initial two stages of

selection and code assignment, possibly because these steps involved a lesser degree of

subjectivity in the relevant definitions and terms than the step involving valence weighting. In all

cases where agreement was in excess of 85%, the remaining disagreement was adjudicated by

discussion between author and rater and final assignment proceeded by mutual consensus. In

cases where agreement fell below 85% (one case for coding and five cases for weight assignment),

the author reviewed the output de novo and reassigned codes and weights.

Finally, it should be noted that inter-rater agreement was not calculated for prominence

weights since the process of assignment of these weights was deemed to be objective, involving

straightforward entry of binary weights for whether the article was on the front page or elsewhere,


whether the article’s length exceeded 500 words, and whether the keyword appeared in

the article title.

Table 4-1 Calculation Of Inter-Rater Agreement

Stage | Numerator (A) | Denominator (B) | Inter-Rater Agreement

Selection | Number of articles assigned identical status (selection or rejection) by author and rater | Total number of articles | (100*A) / B

Coding | Number of codes applied identically by author and rater | Total number of codes applied | (100*A) / B

Valence Weighting | Number of weights assigned identically by author and rater | Total number of weights assigned | (100*A) / B
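Every row of Table 4-1 reduces to the same percent-agreement formula, which can be sketched as follows (names are illustrative):

```python
def percent_agreement(author, rater):
    """Inter-rater agreement: 100 * (identical decisions) / (total decisions).
    `author` and `rater` are equal-length lists of decisions (selections,
    codes, or valence weights) made on the same set of articles."""
    if len(author) != len(rater):
        raise ValueError("both raters must judge the same set of items")
    matches = sum(a == b for a, b in zip(author, rater))
    return 100.0 * matches / len(author)
```

For example, if the author and a rater agree on three of four selection decisions, agreement is 75%.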

Analytic strategy

Model Specification and Identification Strategy

We used a fixed-effects linear regression (i.e., a linear probability model) to evaluate the impact

of our key variables on consumer outcomes. The fixed effects model removes two distinct

sources of confounding variation: permanent differences between individuals and common trends

across individuals over time that may be correlated with key predictors (e.g., CQI dissemination)

and consumer awareness/attitudes towards CQI. The effect of each independent variable is

therefore identified off the residual variation, i.e., variation within alliances (for variables that

vary only across regions and not individuals) or within individuals over time, which can be

treated as plausibly exogenous to consumer outcomes. This assumption breaks down if the

within-alliance (or within-individual) variation is correlated with factors that affect consumer


outcomes, in which case such factors may have to be explicitly controlled for in the specification

to avoid omitted variable bias.

In all our specifications, we included a set of covariates related to consumer’s socio-

demographic characteristics, health status, healthcare access, and health care utilization as

controls to account for such a possibility. Specifically, we controlled for family income, having

college education, employment status, health insurance status, self-ratings of health, patient

activation score, type of chronic condition, per-capita physician density by county, a measure of

overall satisfaction with healthcare received in last 12 months, and percent population of alliance

region that had access to a quality report card issued by a health plan. We used the linear

probability model (LPM) instead of logistic or probit regressions in our primary specification

(even though our outcomes are binary) as LPM generates parameter estimates that are directly

and conveniently interpreted as mean marginal effects of covariates on outcome whereas

coefficients for logistic regression have a more complicated log odds ratio interpretation. As the

error terms for individuals within an alliance region are likely to be correlated (violating the

assumption of independence and identical distribution of error terms), the standard errors were

calculated after clustering within alliances.

We estimated the following model (equation 1):

Yitz = β1AVitz + β2APPitz + β3CRitz + β4Ditz + β5AQIitz + β6GQIitz + β7CMSQIitz + β8GMitz + Xitzγ + ɸt + ¥i + δitz   (1)

Where,

i = Index for consumer

t = Index for period of consumer survey administration (period 1 and 2)

z = Index for alliance region

Y= consumer awareness of CQI, consumers’ perceived importance of CQI, consumers’ acknowledgement of

quality differentials among physicians in the community, consumers’ propensity to switch doctors due to concerns

about quality of care, and consumers’ use of CQI

AV= Number of regional available reports

APP= Number of regional applicable reports

CR= Number of regional credible reports


D= Proactive dissemination score

AQI= Media Coverage Score for Alliance-sponsored CQI

CMSQI= Media Coverage Score for CMS-sponsored CQI

GQI= Media Coverage Score for non-alliance non-CMS CQI

GM = Media Coverage Score for patient safety

X = Vector of consumers’ socio-demographic, health-related, health care use, and health care access related covariates

δ = error terms

ɸt = time fixed effects

¥i = individual-level fixed effects
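A minimal numerical sketch of this identification strategy (our own simulation, not the dissertation's estimation code): with two survey rounds, demeaning within individuals is equivalent to first-differencing, which removes the individual fixed effects ¥i, while a constant in the differenced equation absorbs the time effect ɸt. A single regressor stands in for the full covariate set, and clustered standard errors are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                                  # individuals, each observed in two periods
alpha = rng.normal(size=n)                # individual fixed effects (the ¥i)
beta, phi = 0.5, 0.2                      # true slope and period-2 time effect
x1 = rng.normal(size=n)                   # regressor in period 1
x2 = x1 + rng.normal(size=n)              # regressor in period 2
y1 = alpha + beta * x1 + rng.normal(scale=0.1, size=n)
y2 = alpha + phi + beta * x2 + rng.normal(scale=0.1, size=n)

# First-differencing removes alpha; the constant absorbs the time effect phi.
dy, dx = y2 - y1, x2 - x1
X = np.column_stack([np.ones(n), dx])
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
phi_hat, beta_hat = coef                  # recover phi and beta up to sampling noise
```

The within (or first-difference) estimator recovers β even though the fixed effects alpha are correlated with nothing observed here; in the dissertation's setting the same logic applies with the full covariate vector and alliance-clustered standard errors.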

Given the above specifications, we interpret the respective coefficients on the key

independent variables as follows (Table 4-2):

Table 4-2 Interpretation Of Key Coefficients Of Main Model

Coefficient Interpretation

β1 Change in probability of outcome due to regional availability of one more report card

β2 Change in probability of outcome due to regional availability of one more applicable report card

β3 Change in probability of outcome due to regional availability of one more credible report card

β4 Change in probability of outcome due to a one-unit increase in regional alliance dissemination score

β5 Change in probability of outcome due to a one-unit increase in regional media coverage score for alliance-sponsored CQI

β6 Change in probability of outcome due to a one-unit increase in regional media coverage score for general CQI

β7 Change in probability of outcome due to a one-unit increase in regional media coverage score for CMS-sponsored CQI

β8 Change in probability of outcome due to a one-unit increase in regional media coverage score for patient safety issues

Subsample analyses

In our subsample analyses, we tested our conjecture that the impact of our key dissemination

variables may differ between those who were and were not exposed to CQI prior to

dissemination. To test this hypothesis, we ran our main analyses on subsamples of respondents


who reported being aware of CQI in 2008 and those who did not. These specifications are

modeled in equations 2 and 3 below:

(Yitz | Awareness in 2008 = 0) = β1AVitz + β2APPitz + β3CRitz + β4Ditz + β5AQIitz + β6GQIitz + β7CMSQIitz + β8GMitz + Xitzγ + ɸt + ¥i + δitz   (2)

(Yitz | Awareness in 2008 = 1) = β1AVitz + β2APPitz + β3CRitz + β4Ditz + β5AQIitz + β6GQIitz + β7CMSQIitz + β8GMitz + Xitzγ + ɸt + ¥i + δitz   (3)

Sensitivity of Estimates to Coding Alliance Dissemination Strategies as Dummy Variables

Our main variable for alliance dissemination assumes a homogeneous impact of each

dissemination strategy on consumer outcomes. We used an alternative strategy to code alliance

dissemination, entering our binary measures for each strategy as dummy variables in the

empirical specification to elicit potentially important variations in the impact of individual

approaches on our key outcomes.

Effects on Consumer Awareness and Use of Physician Comparative Quality Information

Whether consumer response to CQI dissemination differs with respect to awareness and use of

a specific type of report cards (physician versus hospital) may be of independent policy interest.

Empirical information on physician quality information (PQI) is especially scarce in the literature

despite growing policy interest. Our data provide an opportunity to tease out the effect separately

for PQI. Recall that our main results measure the impact of dissemination on

consumer behavior with respect to any type of report (whether hospital or physician). We ran

our primary specifications coding consumer awareness and use of PQI as the main outcomes and

contrast our estimates with the main results.


Sensitivity to Variation in Local Newspaper Density

An important source of measurement error lies in the process of assignment of local news

media to individual residents of alliance region. The AF4Q alliances vary substantially on both

geographic and demographic dimensions. For instance, we have alliances consisting of whole

states on the one hand (Maine, Wisconsin etc.) and single or few counties (e.g., Humboldt

County, CA and South Central PA) on the other. For people residing in small areas such as

Humboldt County, there may be few print media sources for which archival data is available in the

database, either because the newspaper has a very small readership or the area is primarily

served by newspapers published in neighboring metropolitan areas (e.g., the San Francisco

Chronicle for Humboldt County residents). In our primary specification, Humboldt County, CA

and South Central PA (which do not have a major metropolitan area) were assigned media

coverage scores estimated from searching media articles published by the local small

newspaper(s). However, for such sites, we performed a sensitivity check of our main estimates

by adding media articles from newspapers published in the nearest big metropolitan areas

(as defined by the Census Bureau). We used newspapers published in San Francisco for

Humboldt County and the Baltimore area for South Central PA.

Sensitivity of Primary Estimates to Weighting

Recall that our weighting strategy was informed by our conjecture that news articles that

were “framed” as even slightly negative towards the utility of quality report cards were unlikely

to lead to improvement in consumer awareness and attitudes towards CQI, whereas we expected

articles that discussed CQI in wholly favorable terms to have a larger consumer impact.

Moreover, we thought that “prominent” items such as those located conspicuously within a

newspaper (front page) would more likely be noticed than others buried deep. We tested this idea


by repeating our main analysis with unweighted media coverage scores and compared our

estimates to those of our main specification.

Sensitivity of Estimates to Alternative Measurement of Consumer Attitudes

One may worry that our estimates of the impact of CQI dissemination on consumer attitudes may

be sensitive to our particular measurement strategy. Recall that to allow entry into our linear

probability models, we chose to dichotomize the Likert scale responses to consumer attitude

questions at the middle of the four response options. We tested robustness of our estimates to an

alternative dichotomizing threshold by generating binary variables in which the strongest

response was coded as 1 whereas the other three responses were coded as 0. To the extent that

dichotomizing Likert scale response options entails some loss of information, this alternative

measurement strategy may partially address that concern.
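The two dichotomization rules can be sketched concretely; the 1-4 coding with 4 as the strongest response is our illustrative assumption, not the survey's documented coding.

```python
def dichotomize(response, strongest_only=False):
    """Collapse a 4-point Likert response to binary (illustrative coding).
    Primary scheme: split at the middle of the scale (3 or 4 -> 1, 1 or 2 -> 0).
    Alternative scheme: only the strongest response (4) -> 1, all others -> 0."""
    if response not in (1, 2, 3, 4):
        raise ValueError("expected a response on the 1-4 scale")
    return int(response == 4) if strongest_only else int(response >= 3)
```

Under the primary scheme a moderate agreement (3) is coded 1; under the alternative threshold the same response is coded 0, so the alternative measure isolates only the strongest attitudes.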


Chapter 5


Results

Summary Statistics

Table 5-1 provides a general description of our analytic sample. Most respondents were

white, elderly or near-elderly women who had either public or private insurance, had a mean

annual income of about $45,000, and were at least college educated. Nearly half reported being

employed, although this fraction dropped slightly in the second round of the survey, consistent

with the more elderly age profile in 2012, which suggests that some respondents may have

retired. Hypertension was by far the most common chronic illness reported, and the average self-

reported health remained stable across survey rounds. The patient activation measure increased

slightly for the panel as a whole, perhaps indicating an improved focus on disease self-

management owing to increasing experience with chronic illness. Perceived satisfaction with

healthcare was high and remained stable across survey periods.

Descriptive Summary of Awareness, Use, and Consumer Attitudes towards CQI

Past research has highlighted poor awareness of CQI among the general public and our data

provide additional support for this finding in the chronically ill population. Table 5-1 shows that

barely one-third of our respondents were aware of CQI at baseline, with awareness improving

minimally over four years from 2008 to 2012 (31% to 33%), a period which saw considerable

policy focus on making CQI widely available to consumers. Use of CQI in choosing providers

and discussing report contents with physicians showed similar trends, from even lower baseline

levels (9% to 11% and 4% to 5%, respectively). Given that this population is a prime target for

quality report cards, these trends are of concern. Consumer attitudes towards CQI provide a more

optimistic picture, with a majority of respondents acknowledging the importance of CQI in


Table 5-1 Descriptive Statistics (N=4235) #

Variable 2008∑

2012€

Mean (SD) Mean (SD)

Aware Of Physician Or Hospital CQI, Last 12 Months 0.31 0.33

Perceive CQI As Important In Choosing Doctors 0.91 0.88

Disagree That All Doctors Are Same In Quality Of Care 0.66 0.59

Willing To Switch Doctors Based On Quality Of Care 0.60 0.57

Used CQI For Choosing Physicians, Last 12 Months 0.09 0.11

Discussed CQI With Doctor, Last 12 Months 0.04 0.05

Population With Access To Health Plan Report‡ (%) 21.55 (13.02) 40.42 (23.49)

Number Of Reports Available Ω 6.42 (2.65) 8.99 (2.89)

Number Of Applicable Reports Available¥ Ω 1.17 (2.23) 2.23 (3.16)

Number Of Credible Reports Available¥ Ω 0.17 (0.37) 2.74 (0.86)

Alliance Dissemination Score (Range 1-8) Ω 0.68 (1.14) 4.82 (1.08)

Media Coverage Of Alliance CQI Ω 0.58 (1.97) 1.22 (1.71)

Media Coverage Of CMS CQI Ω 0.43 (0.85) 0.86 (0.83)

Media Coverage Of “Other” CQI Ω 10.05 (5.94) 7.95 (3.07)

Media Coverage Of Patient Safety Issues Ω 18.06 (20.12) 14.43 (14.16)

Age

18-40 0.10 0.07

41-50 0.17 0.13

51-65 0.42 0.40

>66 0.31 0.40

Race/Ethnicity

White 0.64 0.63

Black 0.24 0.24

Hispanic 0.05 0.04

Other 0.02 0.04

Family Income¶ (In 1000 $) 45.40 (29.19) 43.15 (28.93)

Education (College Or More) 0.63 0.61

Female 0.68 0.67

Employed 0.49 0.41

Health Insurance Status

Uninsured 0.07 0.07

Private Insurance 0.41 0.32

Public Insurance 0.52 0.61

Self-Rated Health Status (Range 1-5) 2.96 (0.96) 2.94 (0.94)

PAM∫ Score (Range 1-100) 65.70 (15.43) 68.81 (15.51)

Type Of Chronic Condition

Diabetes 0.29 0.31

Hypertension 0.66 0.59

Heart Disease 0.16 0.16

Asthma 0.17 0.15

Depression 0.27 0.23

Number Of Physicians Per 1000 residents, By County 3.16 (1.63) 3.29 (1.76)

Overall Satisfaction With Health Care Received (Range 1-10) 8.33 (1.68) 8.46 (1.67)

Number of unique persons

# Figures are not weighted for representativeness

¶ Income calculated as a continuous measure by replacing income categories with mean for that category

∑ Refers to the period extending from June 1, 2007 to June 1, 2008

€ Refers to the period extending from June 1, 2011 to June 1, 2012

Ω The CQI dissemination variables were measured over the period extending from June 1, 2006 to June 1, 2007 and from June 1, 2010 to

June 1, 2011 in order to capture dissemination that occurred prior to the start of each round of consumer survey. See text for details.

¥ Denotes the average number of reports per resident that have measures relevant to their chronic condition

∫ Patient activation measure


choosing doctors (91%), agreeing that there are significant quality differentials among local

providers (66%), and reporting willingness to switch providers over concerns about quality of

care (60%). These attitudes, however, show little movement or even a slight decline over the

period between survey rounds. Breaking down these variables by alliance reveals considerable

variation across alliances and within alliances over time (Table 5-2). To take one example,

baseline use of CQI ranges from a low of 6% for Humboldt County to a noticeably higher 12%

for Detroit, against an average of 9%. Changes between periods are fairly variable as well, with

Cleveland going from 10% to 16% between 2008 and 2012 while Wisconsin dropped from 9%

to just 5%. These variations appear to point to a significant dispersion in determinants of

consumer outcomes across geographic regions.

Availability, Applicability, and Credibility of Report Cards

The exact level of awareness and use necessary to improve matching of patients with higher

quality providers is unclear and a potentially important target of future research. Despite this

uncertainty, it may be prudent to inform future investments in efforts to improve utilization of

report cards with a critical inquiry into the effectiveness of current initiatives that are designed to

“push” CQI towards the public. To this end, Table 5-1 provides a useful summary of current

dissemination efforts. Both the total number of reports (6.42 to 8.99) and the number of applicable reports

(1.17 to 2.23) showed modest improvement between 2007 and 2011. However, the large and

persistent gap between the total information available and information that is applicable to

consumers (in this case, chronically ill individuals) suggests that a significant fraction of

information being disseminated may not be directly applicable to the target population and, as

such, highlights a potentially important avenue of improvement.


Table 5-2 Awareness Of, Attitudes Towards, and Use of CQI, By Alliance

Alliance | Awareness of CQI | Perceived Importance of CQI | Agreement That Doctors Differ in Quality | Willingness to Switch Doctors Based on CQI | Use of CQI in Making Decisions About Providers | Use of CQI in Discussing With Provider

(Each column reports the 2008 and 2012 means, in that order)

All Alliances Combined (n=4235) 0.31 0.33 0.91 0.88 0.66 0.59 0.60 0.57 0.09 0.11 0.04 0.05

By Alliance

Cincinnati, OH (n=360) 0.36 0.39 0.90 0.89 0.70 0.59 0.53 0.49 0.10 0.11 0.05 0.06

Cleveland, OH (n=360) 0.39 0.42 0.95 0.90 0.61 0.56 0.60 0.56 0.10 0.16 0.03 0.10

Detroit, MI (n=334) 0.41 0.46 0.92 0.88 0.69 0.58 0.58 0.58 0.12 0.13 0.05 0.06

Humboldt County, CA (n=204) 0.19 0.21 0.88 0.87 0.69 0.66 0.67 0.62 0.06 0.08 0.03 0.01

Kansas City, MO (n=367) 0.33 0.31 0.89 0.89 0.66 0.55 0.60 0.56 0.09 0.09 0.04 0.02

Maine (n=266) 0.20 0.25 0.90 0.89 0.63 0.57 0.56 0.56 0.04 0.08 0.03 0.04

Memphis, TN (n=300) 0.27 0.32 0.94 0.91 0.62 0.57 0.56 0.49 0.09 0.15 0.06 0.08

Minnesota (n=308) 0.36 0.32 0.92 0.84 0.64 0.54 0.66 0.62 0.11 0.09 0.03 0.06

Puget Sound, WA (n=328) 0.34 0.35 0.90 0.87 0.70 0.64 0.67 0.66 0.11 0.09 0.04 0.04

South Central PA (n=233) 0.27 0.30 0.92 0.89 0.68 0.56 0.65 0.54 0.06 0.08 0.02 0.03

West Michigan (n=219) 0.26 0.29 0.93 0.85 0.67 0.60 0.52 0.53 0.03 0.09 0.03 0.05

Western New York (n=341) 0.31 0.34 0.91 0.91 0.62 0.62 0.56 0.56 0.11 0.11 0.03 0.04

Willamette Valley, OR (n=379) 0.31 0.30 0.90 0.85 0.66 0.60 0.60 0.58 0.07 0.11 0.04 0.04

Wisconsin (n=236) 0.30 0.32 0.89 0.88 0.67 0.58 0.61 0.59 0.09 0.05 0.03 0.01

Sample size for each group is in parentheses


Table 5-3 offers insights into regional variation in report card availability, applicability, and credibility across AF4Q regions and over time. It is encouraging to note the across-the-board increases in availability (with the exception of Maine, which remains unchanged) and applicability over time, although, as noted earlier, these changes are modest. The largest improvements in availability came in regions that already had a high number of reports at baseline (an increase of 4 each for Minnesota and Humboldt County, CA). Increases in applicability range from 0.2 for Wisconsin to 2.1 for Minnesota. Table B-1 breaks down the overall availability of report cards by the type of chronic illness measures contained in the report and offers a useful synopsis of the kinds of clinical conditions that early and later public reporting efforts focused on. At baseline, most of the available CQI featured quality measures for heart disease and/or diabetes, with few report cards offering information on asthma, depression, or hypertension. All alliance regions had at least one report with a heart disease measure, and 6 had a diabetes measure. Conversely, only three alliances had a quality measure for hypertension or depression in 2007. Over time, we see uniform increases in diabetes and heart disease measures across all alliance regions, but the availability of clinical measures for depression (4 regions in 2011) and hypertension (6 regions in 2011) remains limited despite some modest increases. While only 5 regions offered reports with asthma measures in 2007, almost all had at least one such measure in 2011.

Table 5-3 Availability, Applicability, and Credibility of Quality Reports, By Alliance (each cell: 2007 / 2011)

| Alliance | Available Reports | Applicable Reports | Credible Reports |
| --- | --- | --- | --- |
| Cincinnati, OH | 6 / 9 | 0.7 / 1.9 | 0 / 3 |
| Cleveland, OH | 5 / 8 | 0.6 / 2.2 | 0 / 2 |
| Detroit, MI | 7 / 9 | 1.6 / 3.4 | 0 / 2 |
| Humboldt County, CA | 9 / 13 | 1.4 / 1.8 | 1 / 3 |
| Kansas City, MO | 3 / 5 | 0.3 / 1.2 | 0 / 3 |
| Maine | 7 / 7 | 1 / 2 | 1 / 3 |
| Memphis, TN | 3 / 7 | 0.2 / 1.4 | 0 / 3 |
| Minnesota | 12 / 16 | 4.6 / 6.7 | 0 / 5 |
| Puget Sound, WA | 3 / 5 | 0.3 / 1.2 | 0 / 2 |
| South Central PA | 6 / 9 | 0.8 / 1.7 | 0 / 4 |
| West Michigan | 6 / 9 | 2 / 2.5 | 0 / 2 |
| Western New York | 10 / 12 | 1.4 / 1.9 | 0 / 2 |
| Willamette Valley, OR | 6 / 9 | 0.5 / 1.4 | 0 / 2 |
| Wisconsin | 9 / 10 | 1.8 / 2 | 1 / 3 |

The number of credible reports, on average, jumped from just 0.17 in 2007 to 2.74 in 2011 (Table 5-1). Increases in the credibility of reports over time reflect possible responses to the three major critiques leveled at the provider quality transparency movement since its inception. First, many researchers, policymakers, and healthcare providers have expressed concern about the accuracy of clinical quality measures generated from insurance claims data, pointing to medical records as a potentially more reliable source for constructing comparative metrics. In Table B-2 (which provides a breakdown of the number of credible reports by alliance region along with data on the individual dimensions of our measure of credibility), we see virtually uniform increases in report cards with clinical measures generated from medical record data (13 in 2007 to 52 in 2011). Second, Table B-2 also provides evidence of significant increases over time in report cards produced by non-profit agencies or government entities (81 to 117), suggesting that a large share of current CQI comes from sources the general public is more likely to deem reliable than other CQI-producing organizations such as for-profit health plans or health care organizations. Third, we document a growing tendency among CQI producers to incorporate measures that carry the imprimatur of national institutions with expertise in quality measurement, such as the National Quality Forum (NQF) (53 to 82). The rapid profusion of clinical quality measures has raised concerns about whether provider comparisons based on metrics calculated in differing ways by different organizations are strictly comparable, and has spurred efforts to standardize clinical measures by subjecting them to uniform vetting standards developed by organizations such as NQF. Our findings suggest these concerns have started to be heeded by organizations that produce and distribute report cards.
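As an illustration of how the credibility dimensions described above might be combined, the following is a minimal sketch assuming a simple count of dimensions met; the study's actual aggregation rule is not restated here, and the function and argument names are shorthand labels, not the study's variable names.

```python
# Hypothetical sketch: count how many credibility dimensions a report meets.
# The three dimensions follow the text: medical-record-based clinical
# measures, a nonprofit or government producer, and NQF-endorsed metrics.
def credibility_dimensions(medical_record_data: bool,
                           nonprofit_or_government: bool,
                           nqf_endorsed: bool) -> int:
    """Number of credibility dimensions met (0-3)."""
    return sum([medical_record_data, nonprofit_or_government, nqf_endorsed])

# A report built from medical records by a government entity, without
# NQF-endorsed measures, meets two of the three dimensions.
print(credibility_dimensions(True, True, False))  # 2
```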

How Do Organizations Disseminate Report Cards? AF4Q Alliances as a Case Study

Ours is also the first study to provide empirical information on the strategies employed by producers of CQI to ensure that it reaches its intended beneficiaries. Using AF4Q alliances as a case study, we find a significant increase in the “intensity” of dissemination between 2007 and 2011. Specifically, AF4Q alliances were using, on average, 4 more strategies to target their reports towards consumers in 2011 than in 2007 (0.67 to 4.82) (Table 5-1). It is unclear whether this finding is typical of other prominent organizations that produce and distribute report cards.

Table 5-4 offers a comprehensive description of the strategies used by AF4Q alliances in disseminating their report cards to consumers. Reading from the left, each column provides a binary indicator of whether a given alliance employed a specific strategy during a specific period (either 2007 or 2011), while the final column on the right sums the total number of strategies used by the alliance in that period. Two broad patterns are immediately evident: first, for all alliances, the “intensity” of dissemination (as measured by the total number of strategies employed) increased between 2007 and 2011; and second, only four alliances (Detroit, Maine, Minnesota, and Wisconsin) had active report generation and dissemination efforts in 2007, whereas by mid-2011 every alliance was creating and disseminating CQI in some form. The four alliances that had active report cards in 2007 relied on preexisting organizational infrastructures and stakeholder relationships to produce CQI prior to their entry into the AF4Q program, and therefore had a “head start” over others who had to “start from scratch” in building AF4Q alliances after the program’s inception in 2007. Western New York was the only alliance that did not report provider quality data to the public until early 2011: the major health plans were under heavy pressure from the State Attorney General to commission a certification review from the National Committee for Quality Assurance (NCQA), and consequently declined to release claims data to the alliance without a legal determination by the Attorney General’s office, which came late in 2010.

Table 5-4 Alliance Proactive Dissemination Scores (each cell: 2007 / 2011)

| Alliance | Posted quality report on website | Report posted/printed in non-English language | Online report updated at least once within measurement period | Issued press releases and/or sponsored radio/TV spots | Collaborated with community-based organizations/stakeholders | Published report in consumer-focused magazines | Hired public relations/communications expert | Did consumer research to aid dissemination | Cumulative score¶ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cincinnati, OH | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 6 |
| Cleveland, OH | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 4 |
| Detroit, MI | 1 / 1 | 0 / 0 | 1 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 0 | 2 / 5 |
| Humboldt County, CA | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 4 |
| Kansas City, MO | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 1 | 0 / 4 |
| Maine | 1 / 1 | 0 / 0 | 1 / 1 | 0 / 1 | 1 / 1 | 0 / 0 | 0 / 1 | 0 / 0 | 3 / 5 |
| Memphis, TN | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 7 |
| Minnesota | 1 / 1 | 0 / 0 | 1 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 0 | 1 / 1 | 3 / 5 |
| Puget Sound, WA | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 6 |
| South Central PA | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 1 | 0 / 5 |
| West Michigan | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 3 |
| Western New York | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 1 | 0 / 3 |
| Willamette Valley, OR | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 1 | 0 / 0 | 0 / 5 |
| Wisconsin | 1 / 1 | 0 / 0 | 1 / 1 | 0 / 1 | 0 / 1 | 0 / 0 | 0 / 0 | 0 / 1 | 2 / 5 |

¶ Computed by summing scores for individual categories
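The cumulative score in Table 5-4 is simply the sum of eight equally weighted binary strategy indicators. A minimal sketch of that computation follows; the strategy names are shorthand labels for the Table 5-4 columns, not the study's variable names, and the example row uses Memphis's 2011 values.

```python
# Shorthand labels for the eight dissemination strategies in Table 5-4.
STRATEGIES = [
    "posted_report_online",
    "non_english_version",
    "updated_within_period",
    "press_release_or_media_spots",
    "community_collaboration",
    "consumer_magazine",
    "hired_pr_expert",
    "consumer_research",
]

def dissemination_score(flags: dict) -> int:
    """Cumulative score: sum of binary strategy indicators (0-8)."""
    return sum(int(flags.get(s, 0)) for s in STRATEGIES)

# Memphis, TN in 2011 employed every strategy except hiring a PR expert.
memphis_2011 = {
    "posted_report_online": 1,
    "non_english_version": 1,
    "updated_within_period": 1,
    "press_release_or_media_spots": 1,
    "community_collaboration": 1,
    "consumer_magazine": 1,
    "hired_pr_expert": 0,
    "consumer_research": 1,
}
print(dissemination_score(memphis_2011))  # 7, matching Table 5-4
```

Because every strategy carries equal weight, the score treats, say, a website posting and a full consumer-research program as interchangeable, a limitation discussed in the main analyses below.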

Online Posting of Report Cards

All alliances that produced CQI posted it online almost immediately. Online posting was aided by the active websites alliances maintained to foster consumer engagement and stakeholder collaboration as part of the overall community-based quality improvement initiative. Alliances varied in their approaches to formatting and presenting public reporting content to consumers, as well as in providing accessible links to comparative provider quality information from other sources, such as the Centers for Medicare and Medicaid Services.

Posting/Printing CQI in Non-English Languages

One factor limiting the impact of CQI on consumer decision making is the poor health literacy and English language proficiency of many in disadvantaged communities (e.g., recent immigrants). The realization that disparate utilization of CQI may exacerbate existing disparities in health and healthcare use has spurred calls to provide information in native, non-English languages such as Spanish. Alliances, however, faced a slew of barriers in translating their online or hardcopy quality reports into languages other than English, including a lack of technical and logistical support and financial constraints. Memphis was the only alliance that translated part of its quality report into Spanish, distributing the information as a brochure to raise awareness and use among the sizable Hispanic population in Memphis and Shelby County.

Updating Information in Report Cards

A recurring concern with generating and publishing quality metrics has been the rate at

which many clinical guidelines become obsolete or outdated in a rapidly changing field of

clinical medicine. Some researchers have advocated a review of existing guidelines/measures

every few years to ensure compatibility with current evidence (Shekelle et al., 2001). Moreover, if providers respond to quality information with rapid improvements in the process of care delivery, capturing and disseminating information that reflects data collected in previous periods risks providing a misleading picture of quality differentials to consumers and may lead to unintended patient allocation patterns among providers. On the other hand, the process of generating and disseminating quality information needs a sufficiently long reference period to be statistically valid, and is financially expensive and logistically time-consuming. These dueling concerns are echoed in AF4Q alliances’ approaches to updating their online report cards. Most alliances that had an

online report had produced an update in 2011, typically taking the form of either refreshing the

existing measures with fresh information or adding new metrics. Some alliances (e.g., Humboldt

County) had their data vetted by providers, in part to ensure that the information reflected current

practices, but the vetting process delayed posting their updates because the physicians objected

that the metrics did not provide a fair picture of their care improvements. Despite these

challenges, 11 of 13 alliances that had an online report were able to produce at least one update

in 2011.


Media Press Releases and Advertisements

Using media attention to get the word out to consumers is a bedrock marketing strategy in numerous sectors of the economy, in part due to the unparalleled reach of media outlets such as print, television, and, increasingly, social media platforms. It is therefore unsurprising that all alliances used press releases to local media outlets to advertise their online reports and websites. The specific alliance strategies, however, varied considerably, with many seeking to adapt to the local context in shaping their approaches. Some alliances, such as Memphis, had contractual relationships with local media organs and even had media representation on their boards, while others (e.g., Detroit) had key decision makers on their boards with strong professional ties to media organizations. Most alliances planned their press releases around the launch of their websites or significant revamps or data updates. Many supplemented their print media strategies with television and/or radio spots highlighting their websites (e.g., Memphis, Detroit). A few alliances used social media strategies that involved coordinating messages about reports with prominent local health blogs (Humboldt County).

Collaboration with Key Stakeholders/Community-Based Organizations to Disseminate CQI

An important aspiration of community-based multi-stakeholder alliances is to mute the

competitive impulses that have diverse stakeholders working against each other in pursuit of

narrower organization-specific objectives, and instead foster collaboration to meet key

community-wide objectives. This may be especially important in reporting provider quality data

which typically requires a central entity that can collect data from multiple competing health

plans and providers, and build and sustain sufficient buy-in from key stakeholders so as to tame

their competing incentives to withhold data from potential competitors. All alliances with active online reporting programs sought to parlay relationships with key stakeholders on their boards to maximize the likelihood of consumers’ awareness and use of their respective report cards. Here,

as elsewhere, the specific approaches taken by alliances reflect their unique contexts and

stakeholder compositions. Many alliances exploited existing relationships with their employer

stakeholders to reach employees with information about their report cards by using mailed

newsletters (“stuffers”), distributing flyers through employee wellness programs, and soliciting

employee input during open enrollment events. Others looked to community-based organizations

such as libraries, churches, and consumer advocacy groups to get the word out to underserved

communities in their regions, often by convening community outreach events (“health fairs”)

that formed part of their formal consumer engagement efforts (Memphis). Some even solicited

support of provider groups to increase awareness of their website by putting online links on

prescription pads and business cards (Maine).

Publishing Reports in Consumer-Focused Magazines

A more direct way of reaching consumers may be to incorporate provider quality information into consumer-focused magazines with a well-established readership (e.g., Consumer Reports). Memphis was the only AF4Q alliance to try this approach, launching its own regional magazine (“Consumers Decide”) focused on health and healthcare issues in 2010, which allowed it to aggressively disseminate its patient experience survey results to patrons in libraries, physician offices, barber shops, health fairs, churches, and other community social gatherings. Since the new magazine did not have a preexisting readership, Memphis chose to distribute nearly 10,000 copies free at community drop-off points.


Hiring Public Relations/Communications Expert to Aid Dissemination Efforts

Since dissemination of report cards to consumers was a key goal of public reporting efforts and integral to the overall vision of the AF4Q program, some alliances (Maine, Cincinnati, Detroit, Willamette Valley, and Puget Sound) chose to hire individuals with significant expertise in public communication or long-standing experience in public relations to guide their CQI dissemination initiatives. Often, alliances leveraged their existing relationships within the local community to choose personnel suited to their specific needs. For instance, Cincinnati, which is home to Procter & Gamble headquarters and is often branded as the “Mecca” of consumer marketing, asked a retiring P&G executive to join its board and lead its consumer marketing plans. Similarly, Detroit prioritized cultivating relationships with media organs (in part to enhance the media footprint of its report card) and therefore chose personnel with expertise in media relations, whereas Maine brought in a communications expert to completely revamp its website for consumers.

Research on Consumer Decision-Making to Aid Dissemination

Are consumers able to comprehend and assimilate the information in report cards to make informed provider choices? Drawing on a significant body of theoretical and empirical work on how consumers process and use information in decision-making, two decades of empirical literature has laid bare some of the cognitive biases operative when consumers integrate information that varies along multiple dimensions into discrete choices (Oskamp, 1965; Slovic, 1982; Hibbard et al., 1997; Hibbard, 2008). One strong conclusion from this work is that while consumers may want more information along distinct dimensions (as is standard in many reports that are condition-specific and dense with numerical comparisons), they are able to process only a few variables at one time. This realization has spurred many producers of CQI to format and present provider quality information in ways that foster more accurate judgments about quality differentials (e.g., star rating systems instead of graphical comparisons along multiple dimensions, or composite measures that incorporate information about multiple clinical processes into a single weighted sum). About half of the AF4Q alliances (Cincinnati, Kansas City, Memphis, Minnesota, Puget Sound, South Central PA, and Wisconsin) invested in consumer decision-making research to inform their strategies for both consumer engagement and dissemination of their report cards. Although alliance approaches varied somewhat in their details, some common patterns emerged. Many alliances (e.g., Memphis, Cincinnati, Minnesota, Puget Sound) organized and funded consumer focus groups centered on themes echoing problems in the cognitive processing of data-rich information: perception of provider differentials and provider choice on the margins of cost and quality, ease of interpretation of comparative data, preferences for the display of information (graphs vs. dots and stars), and feedback from consumers who had utilized website reports. Others placed more emphasis on staffing their in-house consumer advisory committees with a large group of consumers or consumer advocates and soliciting their input in decisions about the online posting of reports (e.g., Kansas City), or on hiring technical assistance firms to survey website users about their experience navigating the information (e.g., Wisconsin).

Print Media Coverage of CQI

Print media coverage is an important channel by which CQI may reach consumers, and arguably a more influential one than direct dissemination (Mennemeyer et al., 1997). To our knowledge, no prior study offers meaningful information on press coverage of report cards. In terms of just the raw number of articles, local media coverage of alliance-sponsored and CMS-sponsored report cards was surprisingly sparse. Tables B-3 through B-5 provide a breakdown of the number of items published in local print media by specific keyword(s) and report card category for the two study periods, with the third column in each period panel indicating the number of press items assigned at least one content code related to CQI. For instance, only two of the four alliances that had an online report card in 2007 had a press article about their reports in regional newspapers, with Minnesota leading with the highest number of news items (11). However, coverage did pick up in 2011, when most alliances had at least one article about their reports appearing in the local press. Again, Minnesota led the pack, with 10 articles discussing its report card in the Minnesota news media in 2011. Similarly, press coverage of CMS-sponsored report cards (e.g., Hospital Compare) was meager overall, and did not exceed 4 press items for any alliance in either period. Media coverage of non-alliance, non-CMS report cards was more robust, perhaps because this category collectively represented all reports published by state and local entities in a given region. Detroit and Puget Sound saw the largest numbers of articles discussing this (“other”) category of report cards, both in 2007 and 2011.

Turning now to final scores (weighted by valence and prominence), we find that, between 2007 and 2011, print media coverage of report cards generated and/or distributed by AF4Q alliances and CMS almost doubled, albeit from a low base (0.58 to 1.22 and 0.43 to 0.86 “idealized” news articles, respectively) (Table 5-1). On the other hand, press coverage of CQI developed by organizations other than AF4Q alliances or CMS (e.g., Leapfrog, the Pennsylvania Health Care Cost Containment Council) was higher at baseline (10.05) but declined noticeably (to 7.95) over the study period. This may partly reflect the more proactive dissemination efforts and greater resources spent by alliances in publicizing their report cards, and the higher brand recognition of CMS websites such as Hospital Compare.

In contrast to the uniform increases over time in availability, applicability, credibility, and alliance-sponsored dissemination across regions, regional media coverage lacked any consistent pattern (Table 5-5). Some regions, such as Minnesota, saw drops in press coverage of all types of report cards, while others, such as Detroit, MI, saw improvements in coverage of one type (alliance-sponsored reports) alongside steep declines in coverage of other types (non-alliance, non-CMS reports). These patterns are not by themselves surprising, since a significant proportion of the media “footprint” may be driven by hospital-led efforts to advertise their relative quality standings to consumers, and these efforts may vary based on regional quality scores and other incentives of local hospitals. In fact, instead of investing resources in publicizing its Hospital Compare website, CMS has encouraged hospitals to publicize the website, albeit while strongly discouraging them from making quality comparisons with their peers in their publicity efforts.

Print Media Coverage of Patient Safety Practices of Providers

Our study offers the first empirical estimate of regional variations in media coverage of

provider practices that bear directly on patient safety and, as such, makes an important

contribution to the literature focused on provider quality measurement and consumer sensitivity

to quality differentials. A significant proportion of process and outcome quality measures used in

contemporary report cards focus on patient safety (“never events” such as wrong limb surgery,

infection of central lines, drug administration errors), not least because such events have an

intuitively large salience in patients’ minds as unusually clear indicators of provider quality.

Moreover, media publicity of “sentinel” events may drive market share of healthcare

Page 89: AWARENESS AND USE OF COMPARATIVE PROVIDER QUALITY

77

organizations in ways that a clinical condition-specific set of comparative metrics may never be

able to match (Mennemeyer et al, 1997). We find an overall decline in number of news items

discussing patient safety issues between 2007 and 2011, from 18.06 to 14.43 (Table 5-1).

Table 5-5 Media Coverage Scores (each cell: 2007¥ / 2011∫)

| Alliance | Alliance-Sponsored€ CQI Media Coverage¶ | CMS-Sponsored CQI Media Coverage | “Other” CQI Media Coverage£ | Media Coverage of Patient Safety¶ |
| --- | --- | --- | --- | --- |
| Cincinnati, OH | 0 / 0 | 0 / 0 | 8.9 / 4.9 | 9.8 / 4.4 |
| Cleveland, OH | 0 / 0 | 1.4 / 1.5 | 2.5 / 4.3 | 7.1 / 1.1 |
| Detroit, MI | 0 / 1.1 | 0 / 0 | 20.2 / 10.2 | 5.9 / 6.7 |
| Humboldt County, CA | 0 / 0 | 0 / 0 | 1.4 / 2.4 | 0 / 4 |
| Kansas City, MO | 0 / 0 | 0 / 2 | 14.3 / 3.1 | 15.4 / 6.5 |
| Maine | 0 / 4.4 | 0 / 1.6 | 4.8 / 12 | 12.6 / 34.2 |
| Memphis, TN | 0 / 1.6 | 0 / 0.6 | 6.7 / 9.6 | 12.8 / 4.6 |
| Minnesota | 7.6 / 5.9 | 3.1 / 1.9 | 19.5 / 12.3 | 52.3 / 45.5 |
| Puget Sound, WA | 0 / 0 | 0.6 / 0 | 16.1 / 10.1 | 38.1 / 25.1 |
| South Central PA | 0 / 1.4 | 0 / 0 | 7 / 6.4 | 8.4 / 5.3 |
| West Michigan | 0 / 0 | 0.4 / 2.1 | 6.9 / 8.8 | 6.6 / 7.5 |
| Western New York | 0 / 0 | 0 / 1.4 | 7.9 / 8.8 | 4 / 14.7 |
| Willamette Valley, OR | 0 / 0.6 | 0 / 0 | 4.8 / 5.2 | 8.8 / 8.7 |
| Wisconsin | 0.5 / 1.6 | 0.3 / 1.1 | 15.9 / 11.7 | 78.9 / 41.5 |

€ Captures media coverage of report cards produced or sponsored by the regional AF4Q alliance. The CMS-sponsored column captures media coverage of report cards produced or sponsored by the Centers for Medicare & Medicaid Services.

£ Captures media coverage of report cards produced or sponsored by organizations other than AF4Q alliances and CMS (e.g., Leapfrog).

¶ A score of 1 for CQI media coverage can be interpreted as a single news article published in a regional newspaper on the front page, whose title contains corresponding keyword(s) or a close analogue indicating provider quality comparison, which contains discussion related to all four content areas (transparency of quality or cost of services of health care providers; variation in healthcare quality across regions or demographic groups; comparisons of providers in terms of quality, cost, or efficiency; and web links to quality reports), and which underscores the importance of informing consumers about provider quality and regional variation in healthcare quality without expressing any skepticism towards the utility of CQI or doubts that it may confuse consumers.

¶ A score of 1 for media coverage of patient safety can be interpreted as a single news article published in a regional newspaper on the front page, whose title contains corresponding keyword(s) or a close analogue indicating patient safety, which contains discussion related to two content areas (patient safety practices of healthcare providers and sentinel events), and which reflects negatively on the patient safety practices of providers.

¥ Refers to the period extending from June 1, 2006 to June 1, 2007.

∫ Refers to the period extending from June 1, 2010 to June 1, 2011.

However, this average decline masks some significant regional variation in media attention: coverage increased for Detroit, Humboldt County, Maine, West Michigan, and Western New York while declining for all others (Table 5-5). The decline was particularly sharp for Wisconsin and Memphis. These regional differences (both point-in-time and over time) may track differences in the sensitivity of local residents to the patient safety records or practices of local providers (i.e., events compromising patient safety may have higher salience for people living in certain regions) or, alternatively, the historical incidence of high-profile sentinel events in the region. In either case, they have the potential to bias our estimates if left unmeasured, since they are likely to be correlated with both the regional supply of and demand for CQI.

Main Analyses

Results of our main analyses are presented in Table 5-6, which displays estimates from linear probability models for our main specifications. Mean values at baseline (2008) for key predictors (column 1) and outcomes (row 1) are presented alongside for ease of interpretation. With two exceptions, none of our key independent variables shows a statistically significant impact on awareness of CQI, with most estimates indicating small effect sizes. The effect size for having one more news article focused on an alliance-sponsored report is somewhat larger than the others (1.4 percentage points, p<0.05) and significant at the 5% level. The availability of one additional credible report seems to diminish the likelihood of awareness of CQI by 1.4 percentage points (p<0.05). However, we find more meaningful effects for consumer use of CQI (column 3). For instance, one additional applicable report is associated with a 1.3 percentage point increase in the use of CQI for choosing doctors (p<0.05), an improvement of nearly 16% over the baseline level of use (9%). Our point estimate of the effect on use of CQI of an increase in the availability of any report is positive and even larger (2.4 percentage points, p<0.01). Somewhat surprisingly, we find no effect of proactive alliance dissemination of CQI on either awareness or use of CQI. However, we do detect a positive effect of proactive dissemination on the likelihood of consumers discussing the report with their doctor (0.5 percentage points, p<0.05). Our failure to detect meaningful impacts of proactive dissemination on consumer behavior may partly reflect our measurement strategy for this variable. Recall that we generated alliance dissemination scores by weighting eight distinct dissemination strategies equally to arrive at a cumulative score for each alliance. Hence, our variable may have been subject to significant measurement error, which may have attenuated any positive effects. We relax this assumption in our supplementary analyses by coding individual dissemination strategies as dummy variables and testing their effects. The effect sizes of our measure of report card credibility on both awareness and use are modest, negative, and statistically significant (-1.4 percentage points for awareness and -1.8 percentage points for use of CQI, respectively).
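The relative-change interpretation used above (a percentage-point effect divided by the baseline rate) can be sketched as simple arithmetic. Note that with the rounded values reported in Table 5-6 the figure comes out to roughly 14%; the "nearly 16%" quoted in the text presumably reflects unrounded estimates.

```python
# Illustrative arithmetic: converting a linear-probability-model effect
# (expressed as a change in proportion) into a relative change over the
# baseline outcome rate.
def relative_change_pct(effect: float, baseline: float) -> float:
    """Effect and baseline as proportions; returns percent change."""
    return 100.0 * effect / baseline

# One additional applicable report: +0.013 on a 0.09 baseline use rate.
print(round(relative_change_pct(0.013, 0.09), 1))  # 14.4
```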

The effect of media coverage of CQI on its use in choosing providers depends on the category

of report card that is the focus of media attention: in general the effect sizes are small but right-

signed and statistically significant. An additional news article on alliance-sponsored report

increases likelihood of use of CQI by 1.1 percentage points (p<0.01). Similarly, for press

coverage of CMS-sponsored CQI, one more news article leads to a rise of nearly 1.6 percentage

points (a 19% increase over baseline; p<0.01) in use. None of our key predictors, with two

exceptions, has a statistically significant impact on consumers’ likelihood of discussion of CQI

with their providers. The exceptions are media coverage of alliance report cards which reduces

the likelihood of use of CQI by 0.9 percentage points (p<0.05) and non-alliance non-CMS

reports, which seems to spur consumers to discuss the content of quality report cards with their

doctors (column 4), although the effect size is pretty small (0.3% point increase over a baseline

of 4%; p<0.05). AF4Q alliances have invested considerable efforts in encouraging patients to

bring up quality issues while interacting with their providers and, based on our findings, these

efforts are yet to pay significant dividends. Our effect sizes for media coverage of CMS reports


should be interpreted in the context of baseline levels of coverage (column 1). For instance, our

findings suggest that to obtain such increases in consumer use of CQI, media coverage would have to increase nearly threefold, a somewhat unrealistic scenario, at least in the short term. The low media scores partly reflect our conservative strategy of substantially down-weighting news items that expressed skepticism about the utility of report cards and/or were located insufficiently conspicuously within the newspaper. Below, we test the sensitivity of our results to

this weighting strategy by running models with unweighted media coverage scores that more

closely reflect the actual amount of media coverage of CQI.
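The down-weighting just described can be sketched as follows. The 0.5 multipliers are illustrative assumptions for exposition, not the study's actual weights, and the article attributes are hypothetical field names.

```python
# Sketch of valence/prominence weighting of news items; the 0.5 down-weights
# and the "skeptical"/"prominent" flags are illustrative assumptions.
def media_score(articles, weighted=True):
    """Weighted mode down-weights skeptical or inconspicuous items;
    unweighted mode (the sensitivity check) simply counts articles."""
    if not weighted:
        return float(len(articles))
    score = 0.0
    for a in articles:
        w = 1.0
        if a.get("skeptical"):       # skeptical of report-card utility
            w *= 0.5
        if not a.get("prominent"):   # buried within the newspaper
            w *= 0.5
        score += w
    return score
```

Under this scheme, a skeptical and inconspicuous article contributes only a quarter of the weight of a favorable, prominently placed one, which is why the weighted scores understate the raw volume of coverage.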


Table 5-6 Impact of Dissemination of CQI on Awareness and Use

Variable | Baseline For Interpretation | Awareness Of CQI | Use Of CQI In Choosing Providers | Discussion Of CQI With Provider
Baseline For Interpretation: Mean (2008) | | 0.31 | 0.09 | 0.04
Availability Of Report | 6.42 | 0.008 | 0.024*** | 0.002
Applicability Of Reports | 1.17 | 0.005 | 0.013** | 0.003
Credibility Of Reports | 0.17 | -0.014** | -0.018*** | -0.009
Alliance Dissemination | 0.68 | -0.003 | 0.002 | 0.005**
Media Coverage Of Alliance CQI | 0.58 | 0.014** | 0.011*** | -0.009**
Media Coverage Of CMS CQI | 0.43 | 0.002 | 0.016*** | 0.003
Media Coverage Of “Other” CQI | 10.05 | 0.000 | 0.002** | 0.003**
Media Coverage Of Patient Safety Issues | 18.06 | 0.000 | 0.000 | 0.000
Population With Access To Health Plan Report‡ (%) | 21.55 | 0.000 | 0.001** | 0.001***
Family Income¶ (In 1000 $) | 45.40 | 0.001 | 0.000 | 0.000
Education (College Or More) | 0.63 | 0.0088 | 0.008 | -0.025*
Employed | 0.49 | 0.017 | 0.013 | 0.002
Health Insurance Status
  Private Insurance | 0.41 | Reference | Reference | Reference
  Uninsured | 0.07 | -0.055 | -0.046** | -0.009
  Public Insurance | 0.52 | 0.052* | -0.006 | 0.007
Self-Rated Health Status | 2.96 | -0.003 | -0.010 | -0.005
PAM∫ Score | 65.70 | 0.002*** | 0.001* | 0.000
Type Of Chronic Condition
  Diabetes | 0.29 | -0.007 | -0.033 | -0.007
  Hypertension | 0.66 | 0.023 | 0.012 | -0.009
  Heart Disease | 0.16 | -0.046 | -0.077** | -0.006
  Asthma | 0.17 | 0.040 | 0.000 | 0.015
  Depression | 0.27 | 0.018 | -0.019 | -0.002
Number Of Physicians Per Capita, By County | 3.16 | 0.050** | 0.036** | 0.032
Overall Satisfaction With Health Care Received | 8.33 | 0.005 | 0.001 | 0.000
R2 | | 0.01 | 0.013 | 0.012


How does increased dissemination of CQI affect consumer use if there are no or limited

gains in awareness? One possibility is that most of the gains come through increased use among

consumers who are already aware of report cards but not yet sufficiently convinced of their

utility to start using them (“learning” effects). This may be reflected in consumers’ changed

attitudes towards reports as they hear more about them. We investigate this possibility by

looking at how our key predictors affect consumers’ perceived importance of reports, their

acknowledgement of provider quality differentials, and their willingness to switch providers due

to quality concerns (Table 5-7). Unfortunately, our results are unhelpful in drawing any firm

conclusions. Although many of the coefficients are statistically significant, no consistent patterns

emerge about the direction of effects. For instance, we do find that availability of an additional

applicable report makes consumers more willing to switch doctors (by 1.2 percentage points,

p<0.05) which, while important, provides us with little insight into the channels by which dissemination improves CQI use. On the other hand, the coefficient for availability of any report cards is wrong-signed, with a sizable negative effect (-1.6 % points, p<0.05). In a similar vein,

increases in alliance report media coverage lead to a reduction in willingness to switch, while similar increases in press coverage of CMS report cards increase consumers' propensity to

change physicians. We also find that if a region gains an additional report that is deemed

credible, the consumer likelihood of switching doctors based on concerns about low quality care

diminishes by 3.0 % points (about 5 % decrease over a baseline of 60%, p<0.05). It is unclear

why greater availability of credible reports may make consumers more reluctant to switch

physicians. Access to more credible information should, at least theoretically, enhance

confidence in provider choice decisions that are based on report cards and yield a higher

willingness to change providers. We discuss this apparent anomaly in more detail in a later


section. In addition, in our supplementary analyses we explore the sensitivity of our findings on

consumer attitudes to alternative ways of measuring this variable.


Table 5-7 Impact of Dissemination of CQI on Attitudes Towards CQI

Variable | Baseline For Interpretation | Perceived Importance Of CQI | Agreement That Doctors Differ In Quality | Willingness To Switch Doctors Based On Quality
Baseline For Interpretation: Mean (2008) | | 0.91 | 0.66 | 0.60
Availability Of Report | 6.42 | -0.006 | 0.000 | -0.016**
Applicability Of Reports | 1.17 | 0.001 | -0.009 | 0.012**
Credibility Of Reports | 0.17 | 0.011** | -0.013 | -0.030**
Alliance Dissemination | 0.68 | -0.001 | -0.000 | 0.006
Media Coverage Of Alliance CQI | 0.58 | 0.003 | -0.016* | -0.012***
Media Coverage Of CMS CQI | 0.43 | 0.004 | -0.001 | 0.009**
Media Coverage Of “Other” CQI | 10.05 | -0.001 | 0.004*** | 0.000
Media Coverage Of Patient Safety Issues | 18.06 | 0.000* | 0.001* | 0.001**
Population With Access To Health Plan Report‡ (%) | 21.55 | -0.001*** | 0.000 | 0.001**
Family Income¶ (In 1000 $) | 45.40 | 0.000 | 0.000 | -0.001
Education (College Or More) | 0.63 | 0.013 | 0.048* | 0.011
Employed | 0.49 | -0.011 | 0.022 | 0.010
Health Insurance Status
  Private Insurance | 0.41 | Reference | Reference | Reference
  Uninsured | 0.07 | -0.016 | -0.022 | 0.009
  Public Insurance | 0.52 | -0.004 | 0.015 | 0.008
Self-Rated Health Status | 2.96 | 0.016** | -0.010 | -0.021*
PAM∫ Score (Range 1-100) | 65.70 | 0.000 | -0.001* | -0.001*
Type Of Chronic Condition
  Diabetes | 0.29 | 0.006 | -0.013 | 0.011
  Hypertension | 0.66 | 0.010 | 0.030 | -0.050*
  Heart Disease | 0.16 | -0.004 | 0.030 | -0.007
  Asthma | 0.17 | -0.002 | 0.001 | -0.031
  Depression | 0.27 | 0.040*** | 0.019 | 0.007
Number Of Physicians Per Capita, By County | 3.16 | -0.039 | -0.004 | -0.009
Overall Satisfaction With Health Care Received | 8.33 | -0.008** | 0.015*** | -0.053***
R2 | | 0.016 | 0.025 | 0.040


Recall that our empirical strategy involved clustering standard errors on geographical markets

(i.e., alliances). It is, however, plausible that, given our panel data structure, there may be serial correlation of errors within repeated panel observations (i.e., individuals). In an alternative analysis,

we ran our main specifications with clustering on panel units and found that a few of our

estimates for media coverage of report cards lose significance and become imprecisely estimated

(Table B-7 and B-8). Hence, we caution that a conservative interpretation of our findings implies

even smaller effects of media coverage variables than we find in our main specification.
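The clustering comparison above can be illustrated with a generic NumPy sketch of the cluster-robust (sandwich) variance estimator. The function and data below are ours, not the study's estimation code; passing alliance IDs versus panel-unit IDs as the cluster variable changes only the standard errors, not the point estimates.

```python
import numpy as np

def ols_cluster_se(y, X, clusters):
    """OLS point estimates with cluster-robust (sandwich) standard errors.

    Illustrative sketch: swapping the `clusters` vector (e.g., alliance IDs
    vs. individual panel-unit IDs) alters the standard errors while leaving
    the coefficients unchanged, which is the comparison made in the text.
    Small-sample degrees-of-freedom corrections are omitted for brevity.
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    clusters = np.asarray(clusters)
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    # "Meat": sum over clusters g of (X_g' u_g)(X_g' u_g)'
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]
        meat += np.outer(score, score)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))
```

Because the score contributions are summed within clusters before squaring, fewer and larger clusters (e.g., markets rather than individuals) generally yield less precise estimates, consistent with the loss of significance we report.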

Supplementary Analyses

In our supplementary analyses, we investigate how consumer response to release of provider

quality information and media coverage of CQI varied by their awareness of CQI at baseline. Consumers “unexposed” to CQI (i.e., unaware of CQI in 2008) and living in regions

where more information becomes available may respond more readily by using the information

because it may be more “novel” to them than their regional counterparts who are already

exposed to the information. Conversely, consumers who have had a prior exposure may be more

“primed” to use the new information, especially if it is more “applicable” to their specific needs,

as they may have learned to trust and use the information over a period of time. The true impact

of dissemination may depend on which of these countervailing effects of “novelty” and

“learning” predominate and, as such, is an empirical question. Table 5-8 presents our results for

use of CQI separately for the subsample of exposed (column 1) and unexposed consumers

(column 3). We find that, irrespective of whether the new information came in the form of an

additional applicable report or any report, consumers who were already exposed to information

responded with higher likelihood of use than their unexposed counterparts. For instance, one

more regional report card increased use of CQI by 1.2 percentage points in the unaware-at-baseline subgroup (p<0.10), compared to 3.8 percentage points (p<0.05) in the aware-at-

baseline group. Similarly, if the additional report card was applicable to the consumer’s chronic

illness, the effects were greater in the aware-at-baseline group (2.0% points increase in use,

p<0.10) compared to those among unexposed consumers (0.9% points increase, p<0.10). We

also find, however, that only unexposed consumers (i.e., unaware-at-baseline) are more likely to

start discussing the report cards with their doctors when more information becomes available in

their regions (0.4% point


Table 5-8 Impact of Dissemination of CQI on Use Among Subsample of Respondents, By Awareness Status at Baseline

Variable | Use Of CQI In Choosing Providers (Aware At Baseline) | Discussion Of CQI With Provider (Aware At Baseline) | Use Of CQI In Choosing Providers (Unaware At Baseline) | Discussion Of CQI With Provider (Unaware At Baseline)
Availability Of Report | 0.038** | -0.021 | 0.012* | 0.004
Applicability Of Reports | 0.020 | -0.001 | 0.009* | 0.005***
Credibility Of Reports | -0.043*** | -0.022 | -0.014** | -0.008
Alliance Dissemination | 0.004 | 0.005 | 0.007 | 0.008***
Media Coverage Of Alliance CQI | 0.021 | -0.032*** | 0.002 | 0.001
Media Coverage Of CMS CQI | 0.025* | -0.000 | 0.012* | 0.004
Media Coverage Of “Other” CQI | 0.005** | 0.007** | -0.001 | 0.001
Media Coverage Of Patient Safety Issues | -0.001 | 0.000 | 0.001** | 0.000
Population With Access To Health Plan Report‡ (%) | 0.002* | 0.002** | 0.001 | 0.001***
Family Income¶ (In 1000 $) | 0.000 | 0.001 | 0.000 | 0.000
Education (College Or More) | 0.031 | -0.053** | 0.004 | -0.013
Employed | 0.008 | -0.026 | 0.006 | 0.012*
Health Insurance Status
  Private Insurance | Reference | Reference | Reference | Reference
  Uninsured | -0.089* | -0.011 | -0.028 | -0.007
  Public Insurance | -0.025 | 0.013 | -0.010 | -0.001
Self-Rated Health Status | -0.008 | -0.003 | -0.008 | -0.005
PAM∫ Score | 0.000 | 0.000 | 0.001** | 0.000
Type Of Chronic Condition
  Diabetes | 0.063 | 0.032 | -0.073*** | -0.025
  Hypertension | -0.005 | -0.018 | 0.021 | -0.004
  Heart Disease | -0.135 | 0.032 | -0.033 | -0.020**
  Asthma | -0.018 | 0.049 | 0.002 | -0.004
  Depression | -0.024 | 0.005 | -0.023* | -0.006
Number Of Physicians Per Capita, By County | 0.092* | 0.075 | 0.022 | 0.015
Overall Satisfaction With Health Care Received | 0.004 | -0.002 | -0.001 | 0.000
R2 | 0.057 | 0.029 | 0.091 | 0.048


and 0.5% point increases in likelihood of discussion, respectively, with only the latter estimate significant at the 1% level). For media coverage, the relative magnitude of the point estimates suggests

that media coverage of CQI spurs more use of CQI in respondents already aware of it in 2008

(e.g., one additional news item on CMS-sponsored CQI increases likelihood of use of CQI in

choosing doctors by 2.5 % points, compared to 1.2 % points in the unaware-at-baseline

subgroup, both significant at 10% level). In Table 5-9, we further investigate the differential

effect of all our key predictors on consumer attitudes towards CQI by consumer awareness status

at baseline. We fail to discern any broad patterns in these results. Most of the key coefficients are

very small and statistically insignificant with the exception of the effect of media coverage of

CMS report cards, which yields a large increase in likelihood of perceiving CQI as important in

choosing doctors among respondents who reported being already aware of CQI at baseline (2.7%

points, p<0.05). Although we are unable to draw strong inferences from these findings, collectively these patterns do provide some limited support to the conjecture that more

exposure to CQI (driven by increased availability and/or dissemination) tips some consumers

who may be aware of CQI into actually using it in their healthcare decision-making over time,

especially if the information is more applicable to their needs (learning effects).

In Table 5-10, we report estimates of our key independent variables on consumers’ awareness

and use of physician quality information (PQI). There are several reasons why these estimates

may be of independent interest. First, empirical information regarding awareness and use of

physician report cards is especially limited. A few existing studies have looked at cross-sectional

associations between key sociodemographic determinants (such as gender, education, chronic

illness, and self-rated health) and use of physician reports (Greene et al. 2015). The only

longitudinal study which provides information on how variation in availability of CQI affects


awareness and use of physician quality report cards found an increase in awareness for

consumers who had no prior exposure to CQI when a physician report card became available in

their communities (Shi et al, 2016, Forthcoming). We extend this work by using a broader set of

determinants to examine use of PQI. Second, federal health policy now increasingly emphasizes

consumers' use of comparative quality information on individual providers, with the most notable effort in this direction being the recent unveiling of the Physician Compare website, which

provides rankings of physicians on broadly validated quality metrics. Our findings for physician

quality information echo our primary estimates: additional report cards produce a slight improvement in awareness of PQI if the information contained in them is applicable to consumers (0.8% points, p<0.05), while proactive dissemination and media coverage of report cards have little effect; we see broader improvements in use of PQI. Notably, the effect sizes for

improvements in use are smaller than our primary estimates for any form of CQI (improvement

of 0.9 and 0.7 percentage points following increased availability and applicability, respectively,

both significant at 5% level).

We also examine whether our results are sensitive to our weighting strategy for media articles,

which assumes that news items “framed” to express a favorable view of reports have a greater

impact on use of CQI (Table 5-11 and 5-12). We conjectured that, consistent with prior

literature, weighting news articles with positive frames higher than those with negative frames

would noticeably augment the effect of media coverage on consumer outcomes. Contrary to our

hypotheses, however, we find that the effects of unweighted media coverage of CQI do not


Table 5-9 Impact of Dissemination of CQI on Attitudes Towards CQI Among Subsample of Respondents, By Awareness Status at Baseline

Variable | Perceived Importance Of CQI (Aware) | Agreement That Doctors Differ In Quality (Aware) | Willingness To Switch Doctors Based On Quality (Aware) | Perceived Importance Of CQI (Unaware) | Agreement That Doctors Differ In Quality (Unaware) | Willingness To Switch Doctors Based On Quality (Unaware)
Availability Of Report | 0.010 | -0.027 | 0.029 | -0.016 | 0.006 | -0.028***
Applicability Of Reports | 0.003 | -0.015 | 0.012 | 0.000 | -0.004 | 0.012*
Credibility Of Reports | 0.000 | -0.018*** | -0.015 | 0.018** | -0.008 | -0.041***
Alliance Dissemination | 0.010 | 0.003 | -0.019 | -0.004 | -0.001 | 0.015**
Media Coverage Of Alliance CQI | 0.021 | -0.017* | 0.011 | -0.007 | -0.014 | -0.022***
Media Coverage Of CMS CQI | 0.027** | 0.004 | -0.008 | -0.005 | -0.003 | 0.012*
Media Coverage Of “Other” CQI | -0.006** | 0.005*** | -0.004 | 0.002 | 0.004** | 0.001
Media Coverage Of Patient Safety Issues | 0.001*** | 0.002*** | -0.001 | 0.000 | 0.001 | 0.001***
Population With Access To Health Plan Report‡ (%) | 0.000** | 0.001 | -0.003 | -0.002*** | -0.001 | 0.003***
Family Income¶ (In 1000 $) | 0.000 | 0.000 | -0.001 | 0.000 | 0.000 | -0.001
Education (College Or More) | -0.024 | -0.024 | -0.025 | 0.026 | 0.077* | 0.027
Employed | -0.029 | 0.045 | 0.019 | -0.002 | 0.011 | -0.001
Health Insurance Status
  Private Insurance | Reference | Reference | Reference | Reference | Reference | Reference
  Uninsured | -0.062* | -0.021 | 0.027 | 0.003 | -0.026 | 0.005
  Public Insurance | -0.021 | 0.002 | 0.015 | 0.006 | 0.023 | 0.004
Self-Rated Health Status | 0.015 | -0.025 | -0.007 | 0.016** | -0.004 | -0.027*
PAM∫ Score (Range 1-100) | 0.000 | -0.001 | 0.001 | 0.000 | -0.001 | -0.002**
Type Of Chronic Condition
  Diabetes | 0.017 | 0.017 | 0.125** | -0.001 | -0.034 | -0.033
  Hypertension | -0.001 | 0.037 | -0.070 | 0.011 | 0.025 | -0.047
  Heart Disease | -0.072 | -0.039 | 0.032 | 0.025 | 0.055 | -0.017
  Asthma | -0.026 | 0.026 | -0.054 | 0.012 | -0.009 | -0.026
  Depression | 0.031 | -0.002 | 0.009 | 0.043*** | 0.024 | 0.006
Number Of Physicians Per Capita, By County | -0.033 | -0.006 | -0.032 | -0.042 | -0.007 | 0.005
Overall Satisfaction With Health Care Received | -0.007 | 0.026** | -0.042*** | -0.008** | 0.011* | -0.058***
R2 | 0.023 | 0.035 | 0.039 | 0.019 | 0.029 | 0.049


Table 5-10 Impact Of Dissemination Of CQI On Awareness And Use Of Physician Quality Reports

Variable | Awareness Of PQI | Use Of PQI In Choosing Providers
Availability Of Report | -0.010 | 0.009**
Applicability Of Reports | 0.008** | 0.007**
Credibility Of Reports | 0.016** | -0.007*
Alliance Dissemination | 0.008 | 0.001
Media Coverage Of Alliance CQI | -0.007 | 0.001
Media Coverage Of CMS CQI | -0.001 | 0.004
Media Coverage Of “Other” CQI | 0.005*** | 0.002**
Media Coverage Of Patient Safety Issues | -0.000 | -0.000
Population With Access To Health Plan Report‡ (%) | 0.002** | 0.001***
Family Income¶ (In 1000 $) | 0.000 | 0.000
Education (College Or More) | 0.001 | -0.022
Employed | 0.003 | 0.013
Health Insurance Status
  Private Insurance | Reference | Reference
  Uninsured | -0.046** | -0.009
  Public Insurance | -0.013 | 0.009
Self-Rated Health Status | 0.014* | -0.008
PAM∫ Score (Range 1-100) | 0.000 | 0.000*
Type Of Chronic Condition
  Diabetes | 0.022 | -0.037*
  Hypertension | 0.028 | 0.001
  Heart Disease | 0.043* | -0.042*
  Asthma | 0.029 | 0.011
  Depression | -0.013 | -0.010
Number Of Physicians Per Capita, By County | 0.016 | 0.019
Overall Satisfaction With Health Care Received | 0.002 | -0.001


Table 5-11 Impact of Media Coverage of CQI on Awareness and Use: Sensitivity to Removal of Valence and Prominence Weights

Variable | Awareness Of CQI | Use Of CQI In Choosing Providers | Discussion Of CQI With Provider
Media Coverage Of Alliance CQI | 0.011 | 0.010** | -0.010**
Media Coverage Of CMS CQI | 0.001 | 0.018*** | 0.007
Media Coverage Of “Other” CQI | 0.000 | 0.001 | 0.003***

Table 5-12 Impact of Media Coverage of CQI on Attitudes Towards CQI: Sensitivity to Removal of Valence and Prominence Weights

Variable | Perceived Importance Of CQI | Agreement That Doctors Differ In Quality | Willingness To Switch Doctors Based On Quality
Media Coverage Of Alliance CQI | 0.002 | -0.016** | -0.011***
Media Coverage Of CMS CQI | 0.008 | 0.007 | 0.010**
Media Coverage Of “Other” CQI | -0.001 | 0.004*** | 0.000


Table 5-13 Impact Of Alliance Sponsored Dissemination Of CQI On Awareness And Use Of CQI: Using Dummies For Individual Dissemination Strategies

Variable | Awareness Of CQI | Use Of CQI In Choosing Providers
Alliance posted quality report on website | Reference | Reference
Quality report posted/printed in non-English language | - | -0.457
Online report was updated at least once within measurement period | -0.002 | 0.110***
Alliance issued press releases to media outlets and/or sponsored radio/television spots | 0.181 | -0.230
Alliance collaborated with community-based organizations/stakeholders to disseminate quality reports | 0.849 | -0.889
Alliance published report in consumer-focused magazines | -0.455 | -
Alliance hired a public relations/communications expert | -0.169 | -0.165
Alliance did consumer research to aid dissemination | 0.360 | 0.307


attenuate (e.g., effect of CMS report card coverage is 1.6% points with weighting and 1.8%

points without weighting, both significant at 1% level). There may be several reasons for this

anomalous result. First, our estimates may embody significant measurement error, reflecting the

lower interrater agreement achieved for the process of assigning valence weights relative to the

process of coding of article text. Second, it is possible that, in the context of provider quality

reports, just getting media “hits” is as important as getting ones with a more favorable

discussion, especially when the overall intensity of media coverage appears to be very sparse.

This finding should not, however, detract from the need to address significant concerns

expressed in the media about the collection of provider quality data, the measurement of quality, and the

relevance of reports for consumers.

In Table 5-13, we enter our key independent variables for alliance proactive dissemination as dummy variables to explore potential heterogeneity in impacts. In

general, the point estimates are implausibly large and extremely imprecise. As discussed earlier,

noisy estimates may signify substantial residual measurement error or inability of our broad

outcome variable to detect impacts specific for alliance sponsored reports. We do find a large

and significant effect of producing an annual update of a report card on consumers' likelihood of

use of CQI (11 % point improvement, p<0.05).

Recall that our three variables that capture consumer attitudes towards CQI were coded as

binary measures by dichotomizing the four-point Likert-scale responses for each respective survey

item (each response signifying a higher quantum on an ordinal scale of intensity) at the middle

two responses. For the variable that measures consumers’ perceived importance of CQI for

choosing providers, a very large number of respondents chose the top two responses (i.e.,

acknowledged that CQI was very important or important in choosing doctors) and therefore our


measurement strategy yielded a highly skewed distribution, with only 9% of respondents denying

that CQI was important in choosing providers. Such a skewed distribution is subject to bias due

to “ceiling effects”, whereby a large baseline level precludes finding a significant effect in the

positive direction even if one truly exists. Moreover, dichotomizing a response spectrum risks

losing important information embedded within response options. We address this potential

source of bias by an alternative coding strategy where we dichotomize the survey item responses

between the top and the bottom three responses and run linear probability models identical to our

main specification. We find many of our results are robust to this alternative measurement

strategy (Table 5-14). The exceptions are the effect of increased availability of credible reports

which now shows a strong negative effect on the likelihood of consumers deeming quality reports as very important in making decisions and a roughly equal effect in the opposite direction on consumer

willingness to switch doctors owing to concerns about quality.

Table 5-14 Impact of Dissemination of CQI on Attitudes Towards CQI: Sensitivity to Choice of Dichotomizing Threshold

Variable | Perceived Strong Importance Of CQI | Strong Agreement That Doctors Differ In Quality | Strong Willingness To Switch Doctors Based On Quality
Availability Of Report | -0.018 | -0.009 | 0.002
Applicability Of Reports | -0.009 | 0.006 | 0.000
Credibility Of Reports | -0.020** | 0.001 | 0.018***
Alliance Dissemination | 0.025** | 0.010 | -0.001
Media Coverage Of Alliance CQI | -0.015 | -0.013 | 0.011**
Media Coverage Of CMS CQI | -0.011 | -0.002 | -0.019***
Media Coverage Of “Other” CQI | -0.003** | 0.003* | 0.001*
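The two dichotomization rules compared above can be sketched in a few lines. This is an illustrative sketch assuming responses are coded 1-4 with 4 the highest intensity; it is not the study's survey-processing code.

```python
def dichotomize(response, threshold=3):
    """Binary-code a 4-point Likert response (1 = lowest, 4 = highest).

    threshold=3 reproduces the main coding (top two responses vs. bottom
    two); threshold=4 reproduces the sensitivity check (top response vs.
    bottom three). Response scale assumed for illustration.
    """
    return int(response >= threshold)

main_coding = [dichotomize(r) for r in (1, 2, 3, 4)]       # [0, 0, 1, 1]
strict_coding = [dichotomize(r, 4) for r in (1, 2, 3, 4)]  # [0, 0, 0, 1]
```

Shifting the threshold reassigns only the third response category, which is why a heavily skewed item (most respondents in the top two categories) changes its baseline prevalence substantially under the stricter cut.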

Finally, we test the robustness of our estimates by adding the closest metropolitan-area

newspapers to capture media coverage for two alliances that did not have a major metropolitan


area within their regions (Humboldt County and South Central PA) and find our results robust to

this addition (Table 5-15 and 5-16).

Table 5-15 Impact of Dissemination of CQI on Awareness and Use: Sensitivity to Inclusion of Nearby Metropolitan Cities' Newspapers for Humboldt County and South Central Pennsylvania Alliances

Variable | Awareness Of CQI | Use Of CQI In Choosing Providers | Discussion Of CQI With Provider
Availability Of Report | 0.007 | 0.023*** | -0.003
Applicability Of Reports | 0.005 | 0.013** | 0.003
Credibility Of Reports | -0.013* | -0.016*** | -0.009
Alliance Dissemination | -0.003 | 0.002 | 0.005**
Media Coverage Of Alliance CQI | 0.014** | 0.011** | -0.009**
Media Coverage Of CMS CQI | 0.000 | 0.014*** | 0.002
Media Coverage Of “Other” CQI | 0.000 | 0.002** | 0.003**

Table 5-16 Impact of Dissemination of CQI on Attitudes Towards CQI: Sensitivity to Inclusion of Nearby Metropolitan Cities' Newspapers for Humboldt County and South Central Pennsylvania Alliances

Variable | Perceived Importance Of CQI | Agreement That Doctors Differ In Quality | Willingness To Switch Doctors Based On Quality
Availability Of Report | -0.006 | 0.002 | -0.015**
Applicability Of Reports | 0.001 | -0.009 | 0.013*
Credibility Of Reports | 0.011** | -0.017* | -0.030***
Alliance Dissemination | -0.001 | 0.000 | 0.006
Media Coverage Of Alliance CQI | 0.003 | -0.016 | -0.011***
Media Coverage Of CMS CQI | 0.005 | 0.003 | 0.011***
Media Coverage Of “Other” CQI | -0.001 | 0.004*** | 0.000


Chapter 6


Discussion

The emergence of provider quality reporting as a key health care reform strategy in the last

two decades has spurred considerable interest in both its effectiveness in improving healthcare

quality and the channels through which it might “work” its effects on healthcare delivery. The

cumulative body of literature, impressive in the breadth of its research focus and methodological

approaches, yields a mixed set of conclusions about its efficacy (Chernew and Scanlon, 1998;

Scanlon et al, 1998; Scanlon et al, 1999; Scanlon et al, 2001; Scanlon et al, 2002; Scanlon,

Lindrooth, and Christianson, 2008; Schneider and Lieberman, 2001; Dranove et al, 2002;

Mukamel et al, 2004; Epstein, 2010). By contrast, key questions still remain about the causal

pathways through which the effect of report cards on healthcare delivery may be transmitted,

especially as it relates to the “consumer channel”. Are consumers paying attention to quality

report cards? If so, how do report cards reach consumers, and if not, what explains the lack of

interest? How do producers of report cards disseminate them to consumers? Do these strategies

work as intended? What role do the media play in carrying the information to consumers? These

questions are beset with significant conceptual issues, measurement difficulties, and econometric

challenges.

In this study, we have attempted to address some of these gaps in the literature. Our

contributions to the existing literature are threefold: first, we developed a comprehensive

conceptual framework to organize and clarify the causal pathways of dissemination of CQI to

consumers and to inform both our measurement and empirical approaches. Second, we compiled

a diverse set of data sources enabling us to capture key variables (i.e., availability, applicability,

credibility, and media coverage of CQI) that were lacking in the earlier literature. Consequently,

along with our previous work which we extended in this study (Scanlon et al., 2015; Shi et al.,


Forthcoming), we were able to provide important descriptive information on these variables for a

geographically diverse set of regions. Finally, we employed rigorous methodological approaches

to explore whether dissemination strategies and media coverage are effective in raising consumer

awareness and altering consumer attitudes and behavior with regard to the report cards.

In sum, our study reinforces earlier empirical data suggesting that very few consumers are

paying attention to the provider quality reports, extends earlier work showing increasing regional

“footprint” of report cards over time stemming from significant improvements in availability and

applicability of CQI to consumers (Scanlon et al, 2015), provides the first empirical evidence that the credibility of CQI (when measured in terms of metrics used in past literature) has improved

over time, yields evidence of sparse coverage of report cards in the press, bolsters the notion that

reports need to be more applicable to consumers’ clinical needs to trigger their greater use, fails

to detect a meaningful role of intensified producer dissemination in improving consumer

“uptake” of report cards, and offers limited support for the proposition that increased print media

coverage may induce more consumers to use public reports in making provider choices. Taken

together, our findings suggest a troubling “disconnect” between “supply” of CQI and its

“demand” by consumers, even as financial resources invested into improving its content and

accuracy have grown substantially.

Previous studies have documented a generally low consumer awareness of provider quality

information (Kaiser Family Foundation, 2008; Scanlon et al, 2015; Greene et al, 2015). Our

descriptive results align with these earlier reports, finding that just one third of consumers

reported seeing any CQI and only one in ten acknowledged using it to choose providers in 2008,

with hardly any movement over the ensuing four years. This has happened at a time when

resources invested in quality transparency have experienced a sharp upturn (Mehrotra et al,


2012), more organizations than ever before are reporting comparative quality data and, as we

confirm empirically, many United States regions have seen virtually uniform increases in

availability of information, its applicability to consumers’ clinical needs, and its credibility in the

eyes of providers and consumers. Low consumer awareness would not be of such concern if a

small “critical mass” of consumers was needed to move markets towards more efficient

providers but, unfortunately, there is no empirical information on the exact incidence of

“optimal” use. Our findings suggest that policymakers need to find ways to reach consumers

more effectively, perhaps by targeting the report cards to consumers who most need them at

times when they most want them.

Our analysis offers valuable insights into the “supply” aspect of quality transparency by

providing regional estimates of the number of report cards available to consumers, the relevance

of their content to consumers’ needs and clinical profile, and the degree to which consumers and

providers are likely to deem the available information as credible. We find that report cards have

improved along all these margins, which speaks to the fact that producers of CQI are alive and responsive to some of the major criticisms directed at public reporting. However, important

gaps remain. For instance, despite improvements in applicability of report card content, on

average just about one-sixth of regional reports had information that was directly relevant to the

potential clinical needs of our group of chronically ill respondents. Given that this set of chronic conditions is widely prevalent and accounts for a large share of the clinical problems faced by patients (Sheller et al., 2014), and even allowing for the fact that some reports may have focused exclusively on other acute illnesses or preventive care, one would expect a larger fraction of report cards to have some information on one or more of the chronic conditions.

By contrast, we find a more optimistic picture for measured improvements in report card credibility, with noticeable gains along all three margins on which it was measured, including

greater involvement of the non-profit sector and governments in production, higher reliance on

clinical records instead of insurance claims for measure development, and increased

standardization of clinical measures through broader validation by nationally accredited

agencies. The upshot is that even as public reporting has failed to ignite strong consumer enthusiasm, its producers have remained attentive to the broader critique leveled by providers and researchers, responding meaningfully to perceived deficits in the quality and reliability of report card content.

Did improvements in availability, applicability, and credibility of CQI produce any gains in

consumer awareness or use? We found that although people were no more likely to report seeing

a quality report card when more information became available than their counterparts who saw

no similar increases, they were more inclined to use the information to choose providers. One

interpretation of these results is that individuals may be “sitting on the fence” regarding its use

and may be persuaded to start using it once more information becomes available (learning

effects). Respondents were also more likely to use it if the additional information was directly

applicable to their clinical condition(s). These patterns provide modest support for efforts to

improve both quantity and quality (i.e., content) of reports in prodding the public to use them and

reinforce the importance of making reports as directly relevant to consumers as possible. Further,

in our subsample analyses, these patterns held more strongly for respondents who were already

exposed to CQI, supporting the theory of “learning to use” CQI over time. The modest increases in applicability of CQI that we see in our data suggest a move towards closer tailoring of report content to the prevalence of clinical conditions in the target population. Our estimates

support continued initiatives to match information more closely with clinical needs of


consumers. On the other hand, we fail to turn up any evidence that more credible reports will

induce more consumers to use them. If anything, increased credibility appears to induce more

consumer reluctance to switch doctors based on concerns about quality. If more credible reports reinforce prior beliefs about the relative quality of providers, we might expect such a result, but we are unable to test this conjecture. It is possible that our failure to find any effects simply reflects our particular measure of “credibility” being unrelated to consumer response; most consumers may not care about the source of a report or the data used to measure quality or,

plausibly, may not be aware of agencies that validate individual measures. Even so, producers of

CQI may want to make their report cards more credible for reasons unrelated to enhancing

consumer use (e.g., soliciting physician buy-in by using medical records data, pursuing enhanced

standardization to increase comparability to other reports).

Dissemination of report cards to consumers has attracted limited scrutiny. Lacking even

descriptive data on how CQI-producing organizations attempt to distribute it to consumers, most

studies of report card effectiveness (in improving health outcomes or healthcare process quality)

have heavily relied on “reduced form” models, sometimes purposefully ignoring the channels

mediating the final effects of quality transparency (Mennemeyer et al., 1997; Mukamel et al.,

2004; Romano and Zhou, 2004; Jha and Epstein, 2006; Mukamel et al., 2008; Fung et al., 2008;

Mehrotra et al., 2012). Exploiting AF4Q program goals that mandated that alliances produce and

disseminate CQI, we confirmed their use of a broad spectrum of strategies to “sell” their report

cards while progressively intensifying their efforts over the life of the program. Virtually all

alliances drew on the rich set of relationships they had built among diverse stakeholders to

distribute their reports, thus exploiting a natural comparative advantage possessed by non-profit

multi-stakeholder organizations; most also acknowledged the prime importance of using media


communication strategies to “spread the word” about their public reports. Despite these

initiatives, however, we failed to detect a meaningful impact of proactive alliance dissemination

of CQI on its regional awareness or use. In part this may be due to measurement error on both sides of the estimated empirical model. Our primary variable was a relatively crude measure of dissemination in which each dissemination strategy was coded as a binary variable, intentionally ignoring variation in the ways each strategy was implemented by different alliances. However,

even with an alternative coding approach where we allowed the effect of each dissemination

strategy to vary independently, we failed to turn up findings that would suggest efficacy of

alliance dissemination efforts. Similarly, our dependent variable broadly captures consumer

awareness of virtually any type of CQI available in the region, since we are unable to decompose

the variable to separately identify the effect of dissemination of the alliance-sponsored report

card. Hence, we caution against drawing too broad a conclusion from our findings. Perhaps an

analysis that avoids such measurement error may be better able to capture the specific impact of

alliances’ dissemination efforts on their report cards. We leave such an analysis to a future study.
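As a concrete illustration, the binary coding of dissemination strategies described above might be implemented as in the following sketch. The strategy names and the summed “intensity” index are illustrative assumptions for exposition only, not the study's actual variables.

```python
# Hypothetical sketch: code each alliance's dissemination strategies as
# 0/1 indicators, plus a simple count-based "intensity" score.
# The strategy labels below are illustrative, not taken from the study.

STRATEGIES = ["media_outreach", "stakeholder_networks", "direct_mail", "web_portal"]

def code_dissemination(strategies_used):
    """Return a dict of binary indicators and a summed intensity score."""
    indicators = {s: int(s in strategies_used) for s in STRATEGIES}
    indicators["intensity"] = sum(indicators[s] for s in STRATEGIES)
    return indicators

row = code_dissemination({"media_outreach", "web_portal"})
# row["media_outreach"] == 1, row["direct_mail"] == 0, row["intensity"] == 2
```

Such a coding deliberately discards within-strategy variation in implementation, which is exactly the limitation the text notes.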

An important feature missing from earlier literature is the extent to which media outlets

discuss provider quality ratings in their news and editorial columns, including how they “frame”

their discussion of transparency initiatives. Media is by far the dominant driver of

public opinion in many kindred areas of public policy importance (e.g., problems associated with

abuse of tobacco, alcohol, and other addictive drugs, heart disease risk factors, risky sex-related

behaviors, cancer screening and prevention), a reality echoed by the success of a slew of well-funded public information campaigns over recent decades and underscored by a steady stream of research studies finding that such campaigns used media attention to effectively raise the salience of

these issues in the public mind (Atkin, 2001; Hornik & Yanovitzky, 2003; McCombs, 2004;


Kiousis, 2004; Smith et al, 2006; Bala et al, 2008; Hornik et al, 2008; Wakefield et al, 2010). We

surveyed hundreds of news items that appeared in nearly a hundred local newspapers published

in fourteen diverse regions of the United States, revealing a consistent pattern of lackluster coverage of quality report cards in the print press. Moreover, coverage grew only meagerly for some kinds of report cards and even declined for others over the study period. These results,

although surprising when compared to sustained media interest in quality ratings in many other

contexts (e.g., college education ratings, food quality ratings), generally track findings of the

only study in the past literature that investigated media coverage of public reports. In this

somewhat dated study, Mennemeyer and coauthors (1997) explored print media coverage related

to hospital quality in the local press immediately following the release of a hospital report card

by the Health Care Financing Administration (which later became the Centers for Medicare &

Medicaid Services). The authors found some “pickup” of news items related to the report card

following its release but concluded generally that “quality stories were rare”. Nearly two decades

later we come to the same conclusion. Sparse press coverage is all the more striking when we

consider that our period of study brackets the time when a major healthcare reform law (with

provisions explicitly aimed at increasing the scope and breadth of public reporting) was passed

and began to be implemented. Nevertheless, it does highlight the potentially untapped

opportunity for improving the salience of public reporting amongst the general public by

heightening the media footprint of report cards, perhaps as part of mass media campaigns akin to

those in the field of public health and health behavior.

Did media coverage, limited as it was, induce some consumers to start using CQI? Our

answer is a highly qualified yes. For some types of report cards, namely those published by

CMS (e.g., Hospital Compare, Nursing Home Compare, CAHPS, Home Health Compare),


increased press coverage yielded modest gains in consumer use (but not awareness). We do not

see consumers responding the same way to media stories focused on non-CMS reports. Media

stories on CMS-sponsored report cards may have been more influential with the general public

because Hospital Compare and Nursing Home Compare are quite possibly the most well-known

websites that carry the imprimatur of the federal government. Moreover, CMS sponsored a well-

publicized initiative at the launch of these websites, including unveiling of the Hospital Compare

website at the Association of Health Care Journalists National Conference (American Hospital

Association, 2002). Similarly, prior to release of Nursing Home Compare measures, CMS

conducted a 6-state pilot study involving aggressive outreach to Medicare beneficiaries through

advertising and other public communication modes. These initiatives followed a template used

during prior campaigns to improve communication with Medicare beneficiaries to inform them

about Medicare coverage policies (“Helping You Help Yourself”); this campaign made use of

bilingual television, radio, and newspaper advertisements as well as the Internet to reach Medicare beneficiaries (Harris & Clauser, 2002). These efforts notwithstanding, CMS has invested far more resources in “getting the measurement right” and improving the accuracy of the

information than in aiding dissemination of report cards to potential consumers, even to the point

of encouraging providers and hospitals to initiate informational efforts on its behalf. For

instance, an advisory issued by the American Hospital Association to coincide with the launch of

Hospital Compare exhorts hospitals to inform potential customers about their quality rankings

while strongly advising them against using the information to “compare themselves with

competing hospitals” (American Hospital Association, 2002). While CMS’s concern with the

accuracy of reports is laudable, our finding of a modest but significant impact of even the scant media coverage of CMS reports does provide some support for increasing efforts to “spread the


word” about these federal websites, perhaps through more focused mass media campaigns.

However, given the small effects in our results, we hasten to add that such initiatives should start as limited pilot studies rather than a massive and expensive publicity campaign, and can be scaled up appropriately if careful evaluation shows them to be effective. Our finding that “negative framing” of news stories about report cards did not dissuade the public from using them adds to the case for a targeted informational campaign to elevate the media “footprint” of the quality transparency movement.

Limitations and Future Directions

A major limitation of the present study is its exclusive focus on print media in assessing the

effect of dissemination of CQI on consumer awareness and behavior. This focus, although not

unusual among media studies of public health issues (Caburnay et al., 2003; Barry et al., 2011;

Chang, 2012), unavoidably misses the full spectrum of media discussion of provider quality transparency and likely introduces significant measurement error in our estimation equations. Although print media is still influential, television and radio broadcasts have largely

supplanted it as the major sources people turn to for news coverage on current events (National

Health Council, 1997; Gilliam & Iyengar, 2000; Alterman, 2003; Hale et al, 2007). Further, the last decade has seen a veritable explosion of commentary on social issues hosted on a bewildering variety of social media platforms and Internet-based media, making these technologies a major mode of communication and social interaction, especially among younger subgroups of the population (Eyrich et al, 2008; Thackeray et al, 2012; Fox, 2013; Hamm et al, 2013; Moorhead

et al, 2013). Acknowledging this reality, many studies of salient public health issues have

explored the influence of television and social media on public opinion and discourse (Brodie et

al, 1998; Pribble et al, 2006; Slater et al, 2007; Goel et al, 2010; Niederdeppe et al, 2010; Park et


al, 2011; Zhu et al, 2012; Napolitano et al, 2013; Niederdeppe et al, 2013; McDaniel et al, 2014).

Though public discourse on provider quality reporting seems a ripe issue for a more in-depth

exploration with social media platforms and local television broadcasts, major data constraints

continue to hamper such efforts, including limited availability of television transcripts for local

TV (Hale, 2007), lack of archival information on Twitter feeds (Kim et al, 2013),

unrepresentative search engine traffic patterns (Ripberger, 2011), and absence of time-stamped

Facebook posts (Glover et al, 2015).

Our investigation is also constrained by our inability to include a variable for consumer

demand for CQI in our estimation equations. Consumer demand for CQI may vary based on multiple individual-level (e.g., health status, healthcare utilization) and area-level factors

that may be correlated with both consumer awareness of CQI and its availability to regional

consumers. This makes capturing consumer demand important both to allay concerns about omitted variable bias in our estimates and as a topic worthy of study in its own right. Accounting for differences in the ways consumers search for provider quality information to

address their ongoing needs should be a fruitful topic for future research. In principle, one could

capture CQI demand by studying regional variations in search patterns for specific keywords

using tools such as Google Trends, as has been done in other contexts (Reis &

Brownstein, 2010; Cook, 2011; Harsha et al, 2014). However, these tools are significantly

limited in reliability and coverage, especially if the search query is sufficiently infrequent (Zhu et

al, 2012; Ripberger, 2011).
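To make the proposed demand proxy concrete, the sketch below assumes regional search-interest scores for a quality-related query have already been obtained (for example, from the Google Trends web interface, which reports interest on a 0-100 scale); the scores and region names are invented for illustration.

```python
# Illustrative sketch of a regional CQI "demand" index built from
# search-interest scores (0-100 scale, as Google Trends reports them).
# Region names and values below are hypothetical.

def demand_index(interest_by_region):
    """Rescale raw 0-100 search-interest scores to a 0-1 relative index."""
    peak = max(interest_by_region.values())
    if peak == 0:
        return {region: 0.0 for region in interest_by_region}
    return {region: score / peak for region, score in interest_by_region.items()}

scores = {"Cleveland": 40, "Memphis": 10, "Seattle": 80}
index = demand_index(scores)
# index["Seattle"] == 1.0; index["Memphis"] == 0.125
```

As the text cautions, such an index inherits the reliability limits of search data for infrequent queries, so it should be treated as a rough proxy rather than a direct measure of demand.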

A prominent feature of the current provider quality disclosure landscape is the proliferation

of commercial websites that offer largely unsolicited patient feedback about physicians and

hospitals to future customers. These platforms include companies such as Yelp, Angie’s list,


WebMD, Healthgrades, and many other lesser known ventures. We intentionally chose to ignore

such “informal” sources of CQI in developing our measures of availability, applicability, and credibility, in part owing to concerns about the validity and reliability of their ratings as well as the paucity of clinical-condition-specific measures. This omission may be

problematic for two reasons: first, our measure of consumer awareness does not distinguish

between formal and informal CQI, and second, many consumers are more familiar with ratings

available at these websites than with those in formal report cards such as Hospital Compare or

Leapfrog. Their popularity with consumers has prompted some investigators to systematically

estimate the degree to which their ratings align with those found in more formal report cards, in

some cases revealing a reasonably high correlation (Lagu et al, 2010; Bardach et al, 2012;

Greaves et al, 2012; Glover et al, 2015). Future studies should attempt to derive regional

measures of availability and applicability of informal CQI and assess its relationship with

consumer awareness and use of provider quality information.


Chapter 7


Conclusion

Are quality report cards reaching consumers and if so by what channels? Is their content

reflective of consumers’ clinical needs and likely to be considered trustworthy? Are report card

sponsors proactive in marketing them to consumers and does that have an effect on consumer

behavior? How do media cover provider quality transparency initiatives and does that shape

public opinion of report cards? These questions have attracted heightened scrutiny in tandem

with the realization that public interest in using provider quality comparisons has not kept pace with the rapid growth in their “supply,” intensifying interest in defining the precise channels of

dissemination. By compiling a broad spectrum of data sources to capture report card

dissemination and employing rigorous empirical methods, we provide preliminary answers to

these questions and sketch out tentative areas of future research. Consumer interest in CQI

remains tepid even as attitudes towards using CQI to choose providers are broadly favorable.

This comes at a time when both the quantity and relevance of information have shown impressive

increases, and sponsors seem to have responded to major criticisms of public reporting by

enhancing credibility of reports through more standardized methods in computing comparative

metrics, greater participation of multi-stakeholder and government entities in production and

marketing, and use of data sources better able to capture nuances of treatment process and

outcomes. At the same time, proactive efforts by individual sponsors such as AF4Q alliances

have fallen short of moving regional consumers towards greater use of reports. Media interest in

formal report cards does not appear to be commensurate with the collective resource investment

in quality transparency in recent years and has at most a modest impact on consumer likelihood

of use of public reports.


These findings present challenges as well as opportunities. We adduced tentative evidence

that many consumers may “learn” to use report cards over time after initial exposure;

improvement in reports along multiple margins (i.e., increased availability and applicability) may

tip them toward use. More precise targeting informed by the context of consumers’ information

searches may be helpful. Continued attention to tailoring report card content to population

clinical profile and employing reliable methodologies, accurate sources, and trustworthy

“messengers” will be important. Media has yet to play a role in popularizing report cards comparable to its historic role in other major public health initiatives, such as campaigns against smoking and alcoholism. If the media footprint of formal CQI can be improved, it may yield important gains in use. CMS has

recently expressed interest in a targeted media driven campaign to publicize its report cards and

our findings provide some support for these plans while indicating that it may be prudent to

implement and test smaller-scale pilot initiatives before mounting large-scale and costly promotion efforts.

The provider quality transparency movement stands at an important crossroads. Its deeper entrenchment in the healthcare delivery landscape now needs to be followed by greater engagement of the general public with its promise and potential. Although it may still be able to encourage increased consumer matching with higher-quality providers through channels other than

consumer use (by shaming providers or altering consumer financial incentives via tiered

networks), its true promise may remain unfulfilled if consumers remain detached from its

vaunted goal of fostering greater “consumerism” in healthcare markets.


References

Ackerberg, Daniel A., 2003. “Advertising, Learning, and Consumer Choice in Experience Good

Markets: A Structural Empirical Examination,” International Economic Review, 44, 1007-1040.

Adams, E. K., & Herring, B. (2008). Medicaid HMO penetration and its mix: did increased

penetration affect physician participation in urban markets?. Health services research, 43(1p2),

363-383.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior.

Englewood Cliffs, NJ: Prentice Hall.

Alterman, E. (2003). What liberal media? The truth about bias and the news. New York: Basic

Books.

Althaus, S. L., & Tewksbury, D. (2000). Patterns of Internet and traditional news media use in a

networked community. Political Communication, 17(1), 21-45.

American Hospital Association, 2002. http://www.aha.org/advocacy-issues/tools-resources/advisory/96-06/050321-quality-adv.pdf

Anderson, J. G., Rainey, M. R., & Eysenbach, G. (2003). The impact of CyberHealthcare on the

physician–patient relationship. Journal of medical systems, 27(1), 67-84.

Andreasen, A. (1995). Marketing social change: Changing behavior to promote health, social

development, and the environment. San Francisco: Jossey-Bass.

Atkin, C. K. (2001). Theory and principles of media health campaigns. Public communication

campaigns, 3, 49-67.


Austin, J. M., Jha, A. K., Romano, P. S., Singer, S. J., Vogus, T. J., Wachter, R. M., &

Pronovost, P. J. (2015). National hospital ratings systems share few common scores and may

generate confusion instead of clarity. Health Affairs, 34(3), 423-430.

Bala, M., Strzeszynski, L., & Cahill, K. (2008). Mass media interventions for smoking cessation

in adults. Cochrane Database Syst Rev, 1.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory.

Englewood Cliffs, NJ: Prentice Hall.

Bardach, N. S., Asteria-Peñaloza, R., Boscardin, W. J., & Dudley, R. A. (2012). The relationship

between commercial website ratings and traditional hospital performance measures in the USA.

BMJ quality & safety, bmjqs-2012.

Barry, C. L., Jarlenski, M., Grob, R., Schlesinger, M., & Gollust, S. E. (2011). News media

framing of childhood obesity in the United States from 2000 to 2009. Pediatrics, peds-2010.

Becker, M. H. (1974). The health belief model and personal health behavior. San Francisco:

Society for Public Health Education.

Behr, R. L., & Iyengar, S. (1985). Television news, real-world cues, and changes in the public

agenda. Public Opinion Quarterly, 49(1), 38-57.

Bhandari N, Shi Y, Jung K. Seeking health information online: does limited healthcare access

matter? J Am Med Inform Assoc. 2014 Jun 19. pii: amiajnl-2013-002350.

Brien SE, Lorenzetti DL, Lewis S, Kennedy J, Ghali WA. Overview of a formal scoping review

on health system report cards. Implement Sci. 2010 Jan 15;5:2

Brodie, M., Brady, L. A., & Altman, D. E. (1998). Media coverage of managed care: is there a

negative bias?. Health Affairs, 17(1), 9-25.


Caburnay, C. A., Kreuter, M. W., Luke, D. A., Logan, R. A., Jacobsen, H. A., Reddy, V. C., ... &

Zayed, H. R. (2003). The news on health behavior: coverage of diet, activity, and tobacco in

local newspapers. Health Education & Behavior, 30(6), 709-722.

Casalino, L. P., Elster, A., Eisenberg, A., Lewis, E., Montgomery, J., & Ramos, D. (2007). Will

pay-for-performance and quality reporting affect health care disparities?. Health Affairs, 26(3),

w405-w414.

Celsi, R. L., & Olson, J. C. (1988). The role of involvement in attention and comprehension

processes. Journal of consumer research, 210-224.

Chang, C. (2012). News coverage of health-related issues and its impacts on perceptions: Taiwan

as an example. Health communication, 27(2), 111-123.

Chernew, M.E. and D.P. Scanlon. (1998). “Health Plan Report Cards and Insurance Choice.”

Inquiry, 35(1), 9-22

Chernew, M., Gowrisankaran, G., & Scanlon, D. P. (2008). Learning and the value of

information: Evidence from health plan report cards. Journal of Econometrics, 144(1), 156-174.

Christianson JB, Volmar KM, Alexander J, Scanlon DP. A report card on provider report cards:

Current status of the health care transparency movement. J Gen Intern Med 2010; 25(11):1235-

1241.

Christianson JB, Volmar KM, Shaw BW, Scanlon DP. Producing public reports of physician

quality at the community level: the Aligning Forces for Quality initiative experience. Am J

Manag Care. 2012 Sep;18(6 Suppl):s133-40.

Christianson, J. (2015). Looking Ahead. MRI's blog “They Said What?”: Patient-initiated Internet

Reviews of Physicians.


Cline, R. J., & Haynes, K. M. (2001). Consumer health information seeking on the Internet: the

state of the art. Health education research, 16(6), 671-692.

Cook, S., Conrad, C., Fowlkes, A. L., & Mohebbi, M. H. (2011). Assessing Google flu trends

performance in the United States during the 2009 influenza virus A (H1N1) pandemic. PloS one,

6(8), e23610.

Craigie, M., Loader, B., Burrows, R., & Muncer, S. (2002). Reliability of health information on

the Internet: an examination of experts' ratings. Journal of medical Internet research, 4(1), e2.

Crawford, Gregory S. and Matthew Shum, 2005. “Uncertainty and Learning in Pharmaceutical

Demand.” Econometrica, 73: 1137-1174.

Dafny, L., & Dranove, D. (2008). Do report cards tell consumers anything they don't already

know? The case of Medicare HMOs. The Rand journal of economics, 39(3), 790-821.

Dutta-Bergman M. Trusted online sources of health information: differences in demographics,

health beliefs, and health-information orientation. J Med Internet Res. 2003;5(3):e21.

Dranove, D., Kessler, D., McClellan, M., & Satterthwaite, M. (2002). Is more information

better? The effects of 'report cards' on health care providers (No. w8697). National Bureau of

Economic Research.

Edgman-Levitan, S., & Cleary, P. D. (1996). What information do consumers want and need?.

Health affairs, 15(4), 42-56.

Epstein, A. J. (2010). Effects of report cards on referral patterns to cardiac surgeons. Journal of

health economics, 29(5), 718-731.

Erdem, T. and M. Keane, 1996. “Decision Making Under Uncertainty: Capturing Dynamic

Brand Choice Processes in Turbulent Consumer Goods Markets.” Marketing Science 15: 1-20.


Eyrich, N., Padman, M. L., & Sweetser, K. D. (2008). PR practitioners’ use of social media tools

and communication technology. Public relations review, 34(4), 412-414.

Eysenbach G, Köhler C. How do consumers search for and appraise health information on the

world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews.

BMJ. 2002 Mar 9;324(7337):573–7

Eysenbach, G., & Jadad, A. R. (2001). Evidence-based patient choice and consumer health

informatics in the Internet age. Journal of medical Internet research, 3(2), e19.

Findlay, S. D. (2016). Consumers’ Interest In Provider Ratings Grows, And Improved Report

Cards And Other Steps Could Accelerate Their Use. Health Affairs, 35(4), 688-696.

Fox S and Jones S. The social life of health information. Pew Internet & American Life Project

2009. Available from: http://www.pewinternet.org/Reports/2009/8-The-Social-Life-of-Health-Information.aspx. Accessed August 22, 2013.

Fox, S., & Duggan, M. (2013). Health online 2013. Health, 1-55.

Fung CH, Lim YW, Mattke S, Damberg C, Shekelle P. Systematic review: the evidence that

publishing patient care performance data improves quality of care. Ann Intern Med.

2008;148(2):111–23.

Gerbner, G., Gross, L., Morgan, M., & Signorielli, N. (1982). What TV teaches about physicians and health. Möbius, 2, 44–51.

Gerbner, G., Gross, L., Morgan, M., Signorielli, N., & Shanahan, J. (2002). Growing up with

TV: Cultivation processes. In J. Bryant & D. Zillman (Eds.), Media effects: Advances in theory

and research (pp. 43–67). Mahwah, NJ: Erlbaum.


Glover, M., Khalilzadeh, O., Choy, G., Prabhakar, A. M., Pandharipande, P. V., & Gazelle, G. S.

(2015). Hospital Evaluations by social media: A comparative analysis of Facebook ratings

among performance outliers. Journal of general internal medicine, 30(10), 1440-1446.

Goel, S., Hofman, J. M., Lahaie, S., Pennock, D. M., & Watts, D. J. (2010). Predicting consumer

behavior with Web search. Proceedings of the National academy of sciences, 107(41), 17486-

17490.

Goold, S. D., & Klipp, G. (2002). Managed care members talk about trust. Social science &

medicine, 54(6), 879-888.

Greene, J., Fuentes-Caceres, V., Verevkina, N., & Shi, Y. (2015). Who's Aware of and Using

Public Reports of Provider Quality?. Journal of health care for the poor and underserved, 26(3),

873-888.

Greaves, F., Pape, U. J., King, D., Darzi, A., Majeed, A., Wachter, R. M., & Millett, C. (2012).

Associations between Web-based patient ratings and objective measures of hospital quality.

Archives of internal medicine, 172(5), 435-436.

Greaves, F., Laverty, A. A., Cano, D. R., Moilanen, K., Pulman, S., Darzi, A., & Millett, C.

(2014). Tweets about hospital quality: a mixed methods study. BMJ quality & safety, bmjqs-

2014.

Gilliam Jr, F. D., & Iyengar, S. (2000). Prime suspects: The influence of local television news on

the viewing public. American Journal of Political Science, 560-573.

Galloro, V. (2011). Status update. Hospitals are finding ways to use the social media revolution

to raise money, engage patients and connect with their communities. Modern healthcare, 41(11),

6-7.


Halasyamani, L. K., & Davis, M. M. (2007). Conflicting measures of hospital quality: ratings

from “Hospital Compare” versus “Best Hospitals”. Journal of Hospital Medicine, 2(3), 128-134.

Hale, M., Fowler, E. F., & Goldstein, K. M. (2007). Capturing multiple markets: A new method

of capturing and analyzing local television news. Electronic News, 1(4), 227-243.

Hall, M. A., Dugan, E., Zheng, B., & Mishra, A. K. (2001). Trust in physicians and medical

institutions: what is it, can it be measured, and does it matter?. Milbank Quarterly, 79(4), 613-

639.

Hamm, M. P., Chisholm, A., Shulhan, J., Milne, A., Scott, S. D., Given, L. M., & Hartling, L.

(2013). Social media use among patients and caregivers: a scoping review. BMJ open, 3(5),

e002819.

Harsha, A. K., Schmitt, J. E., & Stavropoulos, S. W. (2014). Know your market: use of online

query tools to quantify trends in patient information-seeking behavior for varicose vein

treatment. Journal of Vascular and Interventional Radiology, 25(1), 53-57.

Hibbard, J. H., Slovic, P., & Jewett, J. J. (1997). Informing consumer decisions in health care:

implications from decision‐making research. Milbank Quarterly, 75(3), 395-414.

Hibbard, J. H., & Jewett, J. J. (1997). Will quality report cards help consumers?. Health Affairs,

16(3), 218-228.

Hibbard, J. H., & Jewett, J. J. (1996). What type of quality information do consumers want in a

health care report card?. Medical care research and review, 53(1), 28-47.

Hibbard, J. H., Harris-Kojetin, L., Mullin, P., Lubalin, J., & Garfinkel, S. (2000). Increasing the

impact of health plan report cards by addressing consumers' concerns. Health Affairs, 19(5),

138-143.

Hibbard, J. H. (2008). What can we say about the impact of public reporting? Inconsistent

execution yields variable results. Annals of Internal Medicine, 148(2), 160-161.

Hibbard, J. H., Greene, J., & Daniel, D. (2010). What is quality anyway? Performance reports

that clearly communicate to consumers the meaning of quality of care. Medical Care

Research & Review, 67, 275-293.

Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public Awareness, Perception,

and Use of Online Physician Rating Sites. JAMA. 2014;311(7):734-735.

Harris KM, Buntin MB. Choosing a health care provider: The role of quality information. Research Synthesis Report No. 14. Princeton, NJ: The Synthesis Project, The Robert Wood Johnson Foundation, 2008.

Harris, Y., & Clauser, S. B. (2002). Achieving improvement through nursing home quality

measurement. Health Care Financing Review, 23(4), 5.

Hearld, L. R., Alexander, J. A., Beich, J., Mittler, J. N., & O'Hora, J. L. (2012). Barriers and

strategies to align stakeholders in healthcare alliances. American Journal of Managed Care,

18(6), S148.

Higashi, T., Nakamura, F., Saruki, N., Takegami, M., Hosokawa, T., Fukuhara, S., ... & Sobue,

T. (2012). Evaluation of newspaper articles for coverage of public reporting data: a case study of

unadjusted cancer survival data. Japanese journal of clinical oncology, hys190.

Hirsch, E., & Silverstone, R. (Eds.). (2003). Consuming technologies: Media and information in

domestic spaces. Routledge.

Hornik, R., & Yanovitzky, I. (2003). Using theory to design evaluations of communication

campaigns: The case of the National Youth Anti‐Drug Media Campaign. Communication

Theory, 13(2), 204-224.

Hornik, R., Jacobsohn, L., Orwin, R., Piesse, A., & Kalton, G. (2008). Effects of the national

youth anti-drug media campaign on youths. American Journal of Public Health, 98(12), 2229-

2236.

Jeffres, L. W., Neuendorf, K., & Atkin, D. J. (2012). Acquiring knowledge from the media in the

Internet age. Communication Quarterly, 60(1), 59-79.

Jha AK, Epstein AM. The predictive accuracy of the New York State coronary artery bypass

surgery report-card system. Health Aff (Millwood). 2006;25: 844-55.

Jung, K., Feldman, R., & Scanlon, D. (2011). Where would you go for your next

hospitalization?. Journal of health economics, 30(4), 832-841.

Kaiser Family Foundation 2000. National Survey on Americans as Health Care Consumers:

An Update on The Role of Quality Information. Available at: http://kff.org/health-costs/poll-

finding/national-survey-on-americans-as-health-care/ Accessed April 7, 2016

Kaiser Family Foundation, Agency for Healthcare Research and Quality, and Harvard School of

Public Health. 2004 Nov. National Survey on Consumers' Experiences with Patient Safety and

Quality Information. Available at: http://www.kff.org/kaiserpolls/pomr111704pkg.cfm Accessed

July 15, 2013.

Kaiser Family Foundation. 2006. 2006 Update on Consumers’ Views of Patient Safety and

Quality Information Available at: http://kff.org/other/poll-finding/summary-and-chartpack-

2006-update-on-consumers/ Accessed July 15, 2013.

Kaiser Family Foundation. 2008 Update on Consumers' Views of Patient Safety and Quality

Information. Available at: http://kff.org/health-reform/poll-finding/2008-update-on-consumers-

views-of-patient-2/ Accessed July 15, 2013.

Kaiser Family Foundation. 2011. Trends in the Use of Hospital and Provider Quality Ratings

Available at: http://kff.org/health-reform/poll-finding/data-note-trends-in-the-use-of/ Accessed

July 15, 2013.

Kim, A. E., Hansen, H. M., Murphy, J., Richards, A. K., Duke, J., & Allen, J. A. (2013).

Methodological considerations in analyzing Twitter data. Journal of the National Cancer

Institute. Monographs, 2013(47), 140-146.

Kiousis, S. (2004). Explicating media salience: A factor analysis of New York Times issue

coverage during the 2000 US presidential election. Journal of Communication, 54(1), 71-87.

Kwak, H., Lee, C., Park, H., & Moon, S. (2010, April). What is Twitter, a social network or a

news media?. In Proceedings of the 19th international conference on World wide web (pp. 591-

600). ACM.

Lagu, T., Hannon, N. S., Rothberg, M. B., & Lindenauer, P. K. (2010). Patients’ evaluations of

health care providers in the era of social networking: an analysis of physician-rating websites.

Journal of general internal medicine, 25(9), 942-946.

Lewis, V. A., Colla, C. H., Carluzzo, K. L., Kler, S. E., & Fisher, E. S. (2013). Accountable care

organizations in the United States: market and demographic factors associated with formation.

Health services research, 48(6pt1), 1840-1858.

Long, M., Slater, M. D., Boiarsky, G., Stapel, L., & Keefe, T. (2005). Obtaining nationally

representative samples of local news media outlets. Mass Communication & Society, 8(4), 299-

322.

Lopez-Escobar, E., Llamas, J. P., & McCombs, M. (1998). Agenda setting and community

consensus: First and second level effects. International Journal of Public Opinion Research,

10(4), 335-348.

Lopez-Escobar, E., Llamas, J. P., McCombs, M., & Lennon, F. R. (1998). Two levels of agenda

setting among advertising and news in the 1995 Spanish elections. Political Communication,

15(2), 225-238.

Luca, M. (2011). Reviews, reputation, and revenue: The case of Yelp.com (September 16, 2011). Harvard Business School NOM Unit Working Paper, (12-016).

Luft, H. S., Garnick, D. W., Mark, D. H., Peltzman, D. J., Phibbs, C. S., Lichtenberg, E., &

McPhee, S. J. (1990). Does quality influence choice of hospital?. Jama, 263(21), 2899-2906.

Mangold, W. G., & Faulds, D. J. (2009). Social media: The new hybrid element of the promotion

mix. Business horizons, 52(4), 357-365.

Marshall, M. N., Shekelle, P. G., Leatherman, S., & Brook, R. H. (2000). The public release of

performance data: what do we expect to gain? A review of the evidence. Jama, 283(14), 1866-

1874.

McCombs, M. (2004). Setting the agenda: The mass media and public opinion. Malden, MA:

Blackwell.

McCombs, M. E., & Shaw, D. L. (1972). The agenda-setting function of mass media. Public

opinion quarterly, 36(2), 176-187.

McCombs, M., Llamas, J. P., Lopez-Escobar, E., & Rey, F. (1997). Candidate images in Spanish

elections: Second-level agenda-setting effects. Journalism & Mass Communication Quarterly,

74(4), 703-717.

McCombs, M. E., Shaw, D. L., & Weaver, D. H. (2014). New Directions in Agenda-Setting

Theory and Research. Mass Communication and Society, 17(6), 781-802.

McDaniel, P. A., Offen, N., Yerger, V. B., & Malone, R. E. (2014). “A Breath of Fresh Air

Worth Spreading”: media coverage of retailer abandonment of tobacco sales. American journal

of public health, 104(3), 562-569.

McGuire, W.J. The communication/persuasion matrix. In: Lipstein, B., and McGuire, W.J., eds.

Evaluating Advertising: A Bibliography of the Communication Process. New York: Advertising

Research Foundation, 1978.

Mehrotra, A., Hussey, P. S., Milstein, A., & Hibbard, J. H. (2012). Consumers’ and providers’

responses to public cost reports, and how to raise the likelihood of achieving desired results.

Health Affairs, 31(4), 843-851.

Mennemeyer ST, Morrisey MA, Howard LZ. Death and reputation: how consumers acted upon

HCFA mortality information. Inquiry. 1997;34:117-2

Mitchell, S., & Schlesinger, M. (2005). Managed care and gender disparities in problematic

health care experiences. Health services research, 40(5p1), 1489-1513.

Mittler, J. N., Volmar, K. M., Shaw, B. W., Christianson, J. B., & Scanlon, D. P. (2012). Using

websites to engage consumers in managing their health and healthcare. American Journal of

Managed Care, 18(6), eS177.

Moorhead, S. A., Hazlett, D. E., Harrison, L., Carroll, J. K., Irwin, A., & Hoving, C. (2013). A

new dimension of health care: systematic review of the uses, benefits, and limitations of social

media for health communication. Journal of medical Internet research, 15(4), e85.

Muhlestein, D. B., Wilks, C. E., & Richter, J. P. (2013). Limited use of price and quality

advertising among American hospitals. Journal of medical Internet research, 15(8).

Mukamel DB, Weimer DL, Spector WD, Ladd H, Zinn JS. Publication of quality report cards

and trends in reported quality measures in nursing homes. Health Serv Res. 2008

Aug;43(4):1244-62

Mukamel, D. B., Weimer, D. L., Zwanziger, J., Gorthy, S. F. H., & Mushlin, A. I. (2004).

Quality report cards, selection of cardiac surgeons, and racial disparities: a study of the

publication of the New York State Cardiac Surgery Reports. INQUIRY: The Journal of Health

Care Organization, Provision, and Financing, 41(4), 435-446.

Mukamel DB, Weimer DL, Mushlin AI. Interpreting market share changes as evidence for

effectiveness of quality report cards. Med Care. 2007 Dec;45(12):1227-32.

Napolitano, M. A., Hayes, S., Bennett, G. G., Ives, A. K., & Foster, G. D. (2013). Using

Facebook and text messaging to deliver a weight loss program to college students. Obesity,

21(1), 25-31.

National Health Council. Americans Talk About Science and Medical News.

Washington, DC: National Health Council; December 1997

Niederdeppe, J., Gollust, S. E., Jarlenski, M. P., Nathanson, A. M., & Barry, C. L. (2013). News

coverage of sugar-sweetened beverage taxes: pro-and antitax arguments in public discourse.

American journal of public health, 103(6), e92-e98.

Niederdeppe, J., Fowler, E. F., Goldstein, K., & Pribble, J. (2010). Does local television news

coverage cultivate fatalistic beliefs about cancer prevention?. Journal of Communication, 60(2),

230-253.

Ornstein C. (2013). Should Hospital Ratings Be Embraced — or Despised? Available at:

https://www.propublica.org/article/should-hospital-ratings-be-embraced-or-despised Accessed

on: April 10, 2016

Oskamp, S. (1965). Overconfidence in case-study judgments. Journal of consulting psychology,

29(3), 261.

Otte-Trojel, T., de Bont, A., Rundall, T. G., & van de Klundert, J. (2014). How outcomes are

achieved through patient portals: a realist review. Journal of the American Medical Informatics

Association, 21(4), 751-757.

Park, H., Rodgers, S., & Stemmle, J. (2011). Health organizations’ use of Facebook for health

advertising and promotion. Journal of interactive advertising, 12(1), 62-77.

Peters, E., Dieckmann, N., Dixon, A., Hibbard, J. H., & Mertz, C. K. (2007). Less is more in

presenting quality information to consumers. Medical Care Research and Review, 64(2), 169-

190.

Pratkanis, A. R., & Greenwald, A. G. (1993). Consumer involvement, message attention, and the

persistence of persuasive impact in a message‐dense environment. Psychology & Marketing,

10(4), 321-332.

Pribble, J. M., Goldstein, K. M., Fowler, E. F., Greenberg, M. J., Noel, S. K., & Howell, J. D.

(2006). Medical news for the public to use? What’s on local TV news. Am J Manag Care, 12(3),

170-176.

Prochaska, J., & Velicer, W. (1997). The Transtheoretical Model of health behavior change.

American Journal of Health Promotion, 12, 38–48.

Rau J. (2013) Kaiser Health News. Looking For D.C.’s Best Hospitals? Here’s A Little Advice

Available at: http://khn.org/news/washington-hospital-ratings-and-how-they-differ/ Accessed

on: April 10, 2016

Reis, B. Y., & Brownstein, J. S. (2010). Measuring the impact of health policies using Internet

search patterns: the case of abortion. BMC public health, 10(1), 1.

Ripberger J. Capturing Curiosity: Using Internet Search Trends to Measure Public Attentiveness.

Policy Studies Journal. 2011;39(2):239–259.

Robinowitz, D. L., & Dudley, R. A. (2006). Public reporting of provider performance: can its

impact be made greater?. Annu. Rev. Public Health, 27, 517-536.

Rogers, E. M., & Storey, J. D. (1987). Communication campaigns.

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.

Romano PS, Zhou H. Do well-publicized risk-adjusted outcomes reports affect hospital volume?

Med Care. 2004;42:367-77.

Roski J and Kim M. Current efforts of regional and national performance measurement

initiatives around the United States. Am J Med Qual. 2010;25(4):249–54.

Rothberg, M. B., Morsi, E., Benjamin, E. M., Pekow, P. S., & Lindenauer, P. K. (2008).

Choosing the best hospital: the limitations of public quality reporting. Health Affairs, 27(6),

1680-1687.

Scanlon, D.P., Chernew, M.E., Sheffler, S., and A.M. Fendrick. (1998). “Health Plan Report

Cards: Exploring Differences in Plan Ratings.” Journal on Quality Improvement, 24(1), 5-20.

Scanlon, D.P. and M.E. Chernew. (1999). “HEDIS Measures and Managed Care Enrollment.”

Medical Care Research and Review, 56(2), 60-84.

Scanlon, D.P., Chernew, M.E., McLaughlin, C.M., and G. Solon. (2002). “The Impact of Health

Plan Report Cards on Managed Care Enrollment.” The Journal of Health Economics, 21(1), 119-

42.

Scanlon, D.P., Darby, C., Rolph, E., and H.E. Doty. (2001). "The Role of Performance Measures

for Improving Quality in Managed Care Organizations.” Health Services Research, 36(3), 619-

641.

Scanlon, D.P., Lindrooth, R.C., and J.B. Christianson. (2008). “Steering Patients to Safer

Hospitals? The Effect of a Tiered Hospital Network on Hospital Admissions.” Health Services

Research, 43(5 Pt 2), 1849-1868

Scanlon DP, Beich J, Alexander JA, et al. The aligning forces for quality initiative: background

and evolution from 2005 to 2012. Am J Manag Care. 2012;18(6 Suppl):s115-25.

Scanlon, D. P., Shi, Y., Bhandari, N., & Christianson, J. B. (2015). Are Healthcare Quality

“Report Cards” Reaching Consumers? Awareness in the Chronically Ill Population. Am J Manag

Care, 21(3), 236-244.

Schlesinger, M., Mitchell, S., & Elbel, B. (2002). Voices unheard: barriers to expressing

dissatisfaction to health plans. Milbank Quarterly, 80(4), 709-755.

Schlesinger, M., Kanouse, D. E., Rybowski, L., Martino, S. C., & Shaller, D. (2012). Consumer

response to patient experience measures in complex information environments. Medical care, 50,

S56-S64.

Schneider, E. C., & Lieberman, T. (2001). Publicly disclosed information about the quality of

health care: response of the US public. Quality in Health Care, 10(2), 96-103.

Shahian, D. M., Yip, W., Westcott, G., & Jacobson, J. (2000). Selection of a cardiac surgery

provider in the managed care era. The Journal of thoracic and cardiovascular surgery, 120(5),

978-989.

Shaller, D., Kanouse, D. E., & Schlesinger, M. (2014). Context-based strategies for engaging

consumers with public reports about health care providers. Medical Care Research and Review,

71(5 suppl), 17S-37S.

Shanahan, J., & Morgan, M. (1999). Television and its viewers: Cultivation theory and research.

Cambridge, U.K.: Cambridge University Press.

Shekelle PG, Ortiz E, Rhodes S, Morton SC, Eccles MP, Grimshaw JM, Woolf SH. Validity of

the Agency for Healthcare Research and Quality clinical practice guidelines: how quickly do

guidelines become outdated? JAMA. 2001 Sep 26;286(12):1461-7.

Shi, Y., Scanlon, D. P., Bhandari, N., & Christianson, J. (Forthcoming). Is Anyone Paying

Attention to Physician Report Cards? The Impact of Increased Availability on consumers’

Awareness and Use of Physician Quality Information. Health Services Research

Sick B, Abraham JM. Seek and ye shall find: consumer search for objective health care cost and

quality information. Am J Med Qual. 2011 Nov-Dec;26(6):433-40

Slater, M., Long, M., Bettinghaus, E., & Reineke, J. (2007). News coverage of cancer in the US:

A representative national sample of newspapers, television, and magazines. In annual meeting of

the International Communication Association, San Francisco, CA.

Slovic, P. (1982). Toward understanding and improving decisions. Human performance and

productivity, 2, 157-183.

Smith, S. W., Atkin, C. K., & Roznowski, J. (2006). Are "drink responsibly" alcohol campaigns

strategically ambiguous?. Health communication, 20(1), 1-11.

Snyder, L. B. (2007). Health communication campaigns and their impact on behavior. Journal of

Nutrition Education and Behavior, 39(2), S32-S40.

Thackeray, R., Neiger, B. L., Smith, A. K., & Van Wagenen, S. B. (2012). Adoption and use of

social media among public health departments. BMC public health, 12(1), 1.

Tu, T. H., & Lauer, J. R. (2008). Word of mouth and physician referrals still drive health care

provider choice. Center for Studying Health System Change.

Verhoef, L. M., Van de Belt, T. H., Engelen, L. J., Schoonhoven, L., & Kool, R. B. (2014).

Social media and rating sites as tools to understanding quality of care: a scoping review. Journal

of medical Internet research, 16(2), e56.

Vladeck BC, Goodwin EJ, Myers LP, Sinisi M. Consumers and hospital use: the HCFA “death

list”. Health Aff (Millwood). 1988;7:122-5.

Wakefield, M. A., Loken, B., & Hornik, R. C. (2010). Use of mass media campaigns to change

health behaviour. The Lancet, 376(9748), 1261-1271.

Watt, J. H., Mazza, M., & Snyder, L. (1993). Agenda-setting effects of television news coverage

and the effects decay curve. Communication research, 20(3), 408-435.

Weiner, S. G. (2013). Advertising emergency department wait times. Western Journal of

Emergency Medicine, 14(2).

Winter, J. P., & Eyal, C. H. (1981). Agenda setting for the civil rights issue. Public opinion

quarterly, 45(3), 376-383.

Ziebland, S. U. E., & Wyke, S. (2012). Health and illness in a connected world: how might

sharing experiences on the internet affect people's health?. Milbank Quarterly, 90(2), 219-249.

Zhu, J. J., Wang, X., Qin, J., & Wu, L. (2012, June). Assessing public opinion trends based on

user search queries: validity, reliability, and practicality. In The Annual Conf. of the World

Association for Public Opinion Research.

Appendix A: Methods

Table A-1 Process Used To Identify And Classify Alliance Dissemination Strategies

Cincinnati

Excerpt: “We’ve been very fortunate to have a loaned executive from Proctor and Gamble for the last two years. She’s a branch manager, has been working in health care marketing for Proctor for the last 28 years, I guess it is now. She is going to be retiring from Proctor in December, and because of the Bethesda, Inc. grant, we have been able to offer her a position to join us full time. So she is going to be joining us full time. And she is really excited to come on board; we’re thrilled to have her and so she is really the one who is spearheading all of the consumer research around how do you talk to consumers about health care and the issues that we want to present in the public reports.”
Dissemination category: Hiring a Public Relations/Communications Expert

Excerpt: “Bridget, our vice-president of Strategic Partnerships, very, very strong ties with the media. And whenever we have new information, she puts it out there. We have frequent press releases on Things, which is our local business obligation. We have been on the radio, we’re on -we’ve had lots of ((seconds 46:27)) on different radio channels. We are on TV occasionally, and we cultivate these relationships with the media.”
Dissemination category: Press Release/Media-Aided Dissemination

Excerpt: “The Collaborative carried out several rounds of market research to make sure the positioning was right and made sure to talk about the positioning from a visual standpoint.”
Dissemination category: Consumer Research

Excerpt: “A lot of the marketing distribution will be through the employer community; the Collaborative is having regular meetings with local employers to ask them to incorporate information into their wellness programs and to distribute the information to employees at open enrollment.”
Dissemination category: Collaboration With Community-Based Organizations/Stakeholders To Disseminate Quality Reports

Memphis

Excerpt:
I: OK MOVING ON AND THEN THINKING ABOUT YOUR PUBLIC REPORTS, DO YOU HAVE A FORMAL PLAN FOR MARKETING AND DISSEMINATING YOUR PUBLIC REPORTS?
Renee: Oh yeah, I mean first of all we probably went further than anybody else with the patient experience survey. We did our own Consumer Reports magazine with that; we disseminated that to multiple distribution points in the community to include the library, physician offices, beauty shops, a couple of barber shops. We’ve done distribution at various health fairs, various community functions, social activities. So we have done distribution of this magazine, we’ve focused on patient experience, which was the first of its type, there’s nothing like this in this market. It’s kind of a Consumer Reports approach to patient experience.
Dissemination category: Publishing Reports in Consumer-Focused Magazines

Table A-2 Print Media Sources By Alliance Region

Cincinnati, OH — AF4Q counties: Hamilton, Clermont, Warren, Butler, Brown, Adams, Clinton, Highland, Boone, Kenton, Campbell, Grant, Dearborn, Ripley
2007 (8 print media sources): Cincinnati Post, Fairfield Echo, Grant County News and Express, JournalNews, Middletown Journal, Oxford Press, Pulse-Journal, Western Star
2011 (8 print media sources): Batesville Herald-Tribune, Fairfield Echo, Grant County News and Express, JournalNews, Middletown Journal, Oxford Press, Pulse-Journal, Western Star

Cleveland, OH — AF4Q county: Cuyahoga
2007 (1 print media source): The Cleveland Plain Dealer
2011 (1 print media source): The Cleveland Plain Dealer

Detroit, MI — AF4Q counties: Wayne, Oakland, Macomb, St. Clair, Livingston, Washtenaw, Monroe
2007 (22 print media sources): Advertiser Times, Ann Arbor News, Bedford Now, Birmingham-Bloomfield Eagle, Detroit Jewish News, Detroit News, Fraser-Clinton Township Chronicle, Grosse Pointe Times, Macomb Township Chronicle, Michigan Chronicle, Monroe Evening News, Mount Clemens-Clinton-Harrison Journal, Rochester Post, Roseville-Eastpointe Eastsider, Royal Oak Review, Shelby-Utica News, Southfield Sun, Sterling Heights Sentry, Troy Times, Warren Weekly, West Bloomfield Beacon, Woodward Talk
2011 (21 print media sources): Advertiser Times, Bedford Now, Birmingham-Bloomfield Eagle, Detroit News, Farmington Press, Fraser-Clinton Township Chronicle, Grosse Pointe Times, Macomb Township Chronicle, Michigan Chronicle, Monroe Evening News, Mount Clemens-Clinton-Harrison Journal, Rochester Post, Roseville-Eastpointe Eastsider, Royal Oak Review, Shelby-Utica News, Southfield Sun, Sterling Heights Sentry, Troy Times, Warren Weekly, West Bloomfield Beacon, Woodward Talk

Humboldt County, CA — AF4Q county: Humboldt
2007 (3 print media sources): Humboldt Beacon, Redwood Times, Times-Standard
2011 (3 print media sources): Humboldt Beacon, Redwood Times, Times-Standard

Kansas City, MO — AF4Q counties: Johnson, Wyandotte, Jackson, Platte, Clay
2007 (4 print media sources): Jackson County Advocate, Kansas City Star, Northeast News, Olathe News
2011 (5 print media sources): Blue Springs Journal, Jackson County Advocate, Kansas City Star, Northeast News, Olathe News

Maine — all counties in state
2007 (4 print media sources): Bangor Daily News, Portland Press Herald, Kennebec Journal, Morning Sentinel
2011 (6 print media sources): Bangor Daily News, Portland Press Herald, Kennebec Journal, Morning Sentinel, St. John Valley Times, Sun-Journal

Memphis, TN — AF4Q county: Shelby
2007 (2 print media sources): Commercial Appeal, Tri-State Defender
2011 (2 print media sources): Commercial Appeal, Tri-State Defender

Table A-2 (Contd.) Print Media Sources By Region

Minnesota — all counties in state
2007 (19 print media sources): Budgeteer News, Chanhassen Village, Chaska Herald, Crookston Daily Times, Duluth News Tribune, Eden Prairie News, Finance & Commerce, Hutchinson Leader, Jordan Independent, Journal, Lake County News-Chronicle, Litchfield Independent Review, Pine Journal, Post-Bulletin, Prior Lake American, Savage Pacer, Shakopee Valley News, St. Paul Pioneer Press, Star Tribune
2011 (23 print media sources): Budgeteer News, Chanhassen Village, Chaska Herald, Crookston Daily Times, Daily Globe, Duluth News Tribune, Eden Prairie News, Finance & Commerce, Free Press, Hutchinson Leader, Jordan Independent, Journal, Lake County News-Chronicle, Litchfield Independent Review, Pine Journal, Pioneer, Post-Bulletin, Prior Lake American, Savage Pacer, Shakopee Valley News, St. Paul Pioneer Press, Star Tribune, West Central Tribune

Puget Sound, WA — AF4Q counties: King, Kitsap, Pierce, Snohomish, Thurston
2007 (9 print media sources): Daily Herald, King County Journal, Kitsap Sun, News Tribune, Olympian, Peninsula Gateway, Puyallup Herald, Seattle Post-Intelligencer, Seattle Times
2011 (8 print media sources): Daily Herald, Kitsap Sun, News Tribune, Olympian, Peninsula Gateway, Puyallup Herald, Seattle Post-Intelligencer, Seattle Times

South Central, PA — AF4Q counties: Adams, York
2007 (4 print media sources): Evening Sun, York Daily Record/York Sunday News, York Dispatch, York Weekly Record
2011 (4 print media sources): Evening Sun, York Daily Record/York Sunday News, York Dispatch, York Weekly Record

West Michigan — AF4Q counties: Mason, Lake, Osceola, Oceana, Newaygo, Mecosta, Montcalm, Muskegon, Ottawa, Kent, Ionia, Allegan, Barry
2007 (11 print media sources): Grand Rapids Press, Holland Sentinel, Lake County Star, Ludington Daily News, Muskegon Chronicle, Oceana's Herald-Journal, Pioneer - Osceola Edition, Pioneer, Sentinel-Standard, Shelby-Utica News, White Lake Beacon
2011 (10 print media sources): Grand Rapids Press, Holland Sentinel, Ludington Daily News, Muskegon Chronicle, Oceana's Herald-Journal, Pioneer - Osceola Edition, Pioneer, Sentinel-Standard, Shelby-Utica News, White Lake Beacon

Western New York — AF4Q counties: Cattaraugus, Allegany, Erie, Genesee, Niagara, Orleans, Wyoming, Chautauqua
2007 (4 print media sources): Buffalo News, Daily News, Journal-Register, Union-Sun & Journal
2011 (6 print media sources): Buffalo News, Daily News, Journal-Register, Niagara Gazette, Tonawanda News, Union-Sun & Journal

Willamette Valley, OR — AF4Q counties: Multnomah, Washington, Marion, Polk, Yamhill, Clackamas, Linn, Benton, Lane
2007 (4 print media sources): Capital Press, Keizertimes, Oregonian, Register-Guard
2011 (4 print media sources): Capital Press, Keizertimes, Oregonian, Register-Guard

Wisconsin — all counties in state
2007 (11 print media sources): Capital Times, Freeman, Janesville Gazette, La Crosse Tribune, Leader-Telegram, Milwaukee Journal Sentinel, News Graphic, Packer Plus, Superior Telegram, Washington County Daily News, Wisconsin State Journal
2011 (13 print media sources): Capital Times, Freeman, Janesville Gazette, La Crosse Tribune, Leader-Telegram, Milwaukee Journal Sentinel, News Graphic, Oconomowoc Enterprise, Packer Plus, Superior Telegram, Times Press, Washington County Daily News, Wisconsin State Journal

Table A-3 Types Of Media Coverage And Corresponding Search Terms (Keywords)

Alliance-Sponsored CQI: “Quality Counts” (Maine)
CMS-Sponsored CQI: “Hospital Compare”; “HCAHPS”; “Nursing Home Compare”; “Home Health Compare”
General CQI: “Quality AND Physicians”; “Quality AND hospitals”
Media Coverage of Patient Safety: “Patients AND Safety”; “Medical Errors”; “Medical Malpractice”
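The keyword sets above can be organized programmatically when querying a news archive. A minimal sketch follows; the dictionary keys and the build_query helper are illustrative assumptions, not part of the study's actual tooling:

```python
# Hypothetical organization of the Table A-3 search terms by coverage
# category; the category keys and helper name are illustrative only.
SEARCH_TERMS = {
    "alliance_cqi": ['"Quality Counts"'],
    "cms_cqi": ['"Hospital Compare"', '"HCAHPS"',
                '"Nursing Home Compare"', '"Home Health Compare"'],
    "general_cqi": ['"Quality AND Physicians"', '"Quality AND hospitals"'],
    "patient_safety": ['"Patients AND Safety"', '"Medical Errors"',
                       '"Medical Malpractice"'],
}


def build_query(category: str) -> str:
    """Join one category's keywords into a single OR-connected query string."""
    return " OR ".join(SEARCH_TERMS[category])
```

For example, build_query("cms_cqi") yields one boolean expression covering all four CMS-sponsored report names, which could then be submitted to a full-text news database.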

Figure A-1 Stepwise Algorithm To Guide Selection Of Articles Relevant To Comparative Quality Information

1. Does the article reference providers? If no, reject; if yes, continue.
2. Does the article reference quality of healthcare? If no, reject; if yes, continue.
3. Does the article reference comparison of quality of healthcare among providers? If no, reject; if yes, continue.
4. Does the article reference collecting (e.g., data) or disclosing information on quality of healthcare to consumers (patients, employers, insurers, or doctors)? If no, reject; if yes, accept.
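The four screening gates in Figure A-1 amount to a short-circuiting conjunction: an article is accepted only if every question is answered yes. A minimal sketch, with function and argument names that are illustrative rather than taken from the study's codebook:

```python
def screen_article(references_providers: bool,
                   references_quality: bool,
                   references_comparison: bool,
                   references_disclosure: bool) -> str:
    """Apply the four Figure A-1 gates; a 'no' at any gate rejects the article."""
    gates = (references_providers, references_quality,
             references_comparison, references_disclosure)
    return "Accept" if all(gates) else "Reject"
```

Because the gates are conjunctive, coders can stop reading an article as soon as any single criterion fails.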

Table A-4 Definition Of Key Coding Categories For Media Coverage Of CQI

Coding category: Discussion of health quality transparency/disclosure
Definition: Any discussion of transparency or disclosure of quality, cost, or efficiency of healthcare provision by healthcare providers to patients and other stakeholders.
Definition of key terms: “Any” refers to the fact that the discussion may range from brief to extensive and may or may not be the focus of the news article. “Efficiency” means the provision of higher quality at lower cost. “Healthcare provision” refers to any form of health care provided by healthcare providers. “Healthcare provider” refers to any type of healthcare provider, including physicians, surgeons, nurses, dentists, physician assistants, nurse practitioners, and pharmacists. “Other stakeholders” refers to providers, insurers, drug and device manufacturers, consumer advocacy groups, public sector payers like Medicare and Medicaid, employers, and non-profit groups involved in improving healthcare quality.

Coding category: Discussion of variation in quality across health providers
Definition: Any explicit discussion of variation in quality, cost, or efficiency of healthcare across providers at a local, state, or national level.
Definition of key terms: “Local” refers to the city or county of publication of the newspaper. “Explicit” means the article has to say (and not merely hint or imply) that quality of healthcare is variable or uneven across providers.

Coding category: Provides direct comparisons between healthcare providers
Definition: Provides a direct comparison between providers/groups of providers in terms of quality, cost, or efficiency of healthcare.
Definition of key terms: “Direct comparison” refers to a head-to-head comparison between specific providers/groups of providers. “Group of providers” may refer to providers collectively at the local, state, or national level, i.e., comparisons between cities, counties, states, or nations as a whole.

Coding category: Web linkage to CQI source
Definition: Provides the web address or a hyperlink to the public reporting website.

Table A-5 Definition Of Key Coding Categories For Media Coverage Of Patient Safety

Coding category: Discussion of issues related to patient safety in healthcare delivery
Definition: Provides description of patient safety practices of healthcare providers.
Definition of key terms: “Patient safety practices” means any actions taken by the healthcare provider that have implications for the safety of patients when providing healthcare. Exclude if the practice or action is linked to drug or device manufacturers (e.g., safety of drugs in clinical trials), except when it is directly linked in some way to actions of healthcare providers.

Coding category: Discussion of “Sentinel” events
Definition: Provides description of an egregious medical error by specific healthcare provider(s).
Definition of key terms: “Egregious” refers to an error in healthcare delivery that leads to serious bodily or mental harm.


Figure A-2 Stepwise Algorithm To Guide Assignment Of Valence Weights To Discussion Of Quality Transparency

For an article coded as discussing healthcare quality transparency/disclosure:
1. Does it express any doubts about measurement? If yes, assign negative valence.
2. If no, does it express any doubts about relevance to consumers? If yes, assign negative valence.
3. If no, does it say something positive about healthcare quality transparency/disclosure? If yes, assign positive valence; if no, assign neutral valence.
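The stepwise rule above can be sketched as a small function. The function name and boolean inputs are illustrative (not from the dissertation); they stand for the three yes/no judgments a rater makes about an article discussing quality transparency/disclosure.

```python
# Sketch of the Figure A-2 decision rule. Inputs are the rater's three
# yes/no judgments, applied in order; the first "yes" determines the valence.

def transparency_valence(doubts_measurement, doubts_relevance, says_positive):
    """Assign a valence weight following the stepwise algorithm."""
    if doubts_measurement:      # doubts about measurement -> negative
        return "Negative"
    if doubts_relevance:        # doubts about relevance to consumers -> negative
        return "Negative"
    if says_positive:           # explicit positive statement -> positive
        return "Positive"
    return "Neutral"            # none of the above -> neutral
```

For instance, an article that questions whether physicians are fairly assessed expresses doubts about measurement and would receive a Negative valence regardless of any positive statements it also makes.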


Figure A-3 Stepwise Algorithm To Guide Assignment Of Valence Weights To Discussion Of Quality Variation

For an article coded as discussing variation in quality across healthcare providers:
1. Does it say that disclosing variation in quality may be confusing to consumers? If yes, assign negative valence.
2. If no, does it say that consumers need to be aware of variation in quality? If yes, assign positive valence; if no, assign neutral valence.


Figure A-4 Stepwise Algorithm To Guide Assignment Of Valence Weights To Discussion Of Patient Safety Practices Of Healthcare Providers

For an article coded as discussing patient safety practices of healthcare providers:
1. Does it say anything negative about patient safety practices of providers? If yes, assign negative valence.
2. If no, does it say something positive about patient safety practices of providers? If yes, assign positive valence.


Table A-6 Illustrative Examples Of Code Application And Valence Weight Assignment

Coding category: Discussion of health quality transparency/disclosure (valence weight applied: Positive)
Illustrative text: Mainers now have an easy and reliable way to compare the quality of doctors and hospitals around the state, say the creators of a new website. It allows patients to enter medical conditions or procedures and see which are the highest-rated doctors and hospitals in and near their communities. The ratings are based on voluntarily reported data such as infection rates and protocols for preventing medication errors.

Coding category: Discussion of health quality transparency/disclosure (valence weight applied: Negative)
Illustrative text: Agwunobi said he began the tour - Maine is the sixth stop - by pushing for electronic medical records and "transparency" - the term used in health care circles to describe making information about the cost and quality of health care easily available to the public. But he described switching to a more passive role after hearing doctors' concerns. Members of the Maine group were particularly leery of the so-called "pay-for-performance" programs that insurers use to reward doctors who meet certain standards. They expressed worries about physicians being unfairly assessed. What if Doctor X's patients are just a sicker bunch than Doctor Y's?

Coding category: Discussion of variation in quality across health providers (valence weight applied: Positive)
Illustrative text: ''Unfortunately, we know all health care is not created equal. There is variation in quality,'' said Elizabeth Mitchell, chief executive officer of the foundation. ''We need that information. We need it not only to make more informed choices, we need it to improve care.''

Coding category: Discussion of variation in quality across health providers (valence weight applied: Negative)
Illustrative text: Hospitals and clinics will post signs saying patients can request the data. Many people might find the information confusing: Making sense of it involves factoring in cost shifting, negotiated discounts, tiered co-pays and widely variable insurance plans. Also, fees for lab work, X-rays and other extras usually won't be included. An example of the complexity: Meriter Hospital had a median charge of $42,377 for a hip replacement in a recent 12-month period. That was much higher than St. Mary's Hospital's charge of $26,608 and UW Hospital's charge of $32,821. But insurers typically paid Meriter only about $3,800 more than they paid the other hospitals, and it's possible a patient's out-of-pocket cost was no higher or even lower at Meriter.


Table A-6 (Contd.) Illustrative Examples Of Code Application And Valence Weight Assignment

Coding category: Web linkage to CQI source (valence weight applied: -)
Illustrative text: The Maine Health Management Coalition Foundation, made up of hospitals, medical practices, insurers and large employers, introduced its ratings website - www.getbettermaine.org - during a news conference at the State House on Tuesday.

Coding category: Provides direct comparisons between healthcare providers (valence weight applied: -)
Illustrative text: According to private assessments and surveys completed by its very own patients, Down East Community Hospital has been named in the top 25 percent of hospitals in New England. The recognition comes from the Harvard Pilgrim Hospital Honor Roll and is an indication of how far the facility has come in the last three years.

Coding category: Discussion of patient safety in healthcare delivery (valence weight applied: Negative)
Illustrative text: At least 34 patients died as a result of preventable mistakes in Oregon hospitals last year, the same number reported in 2009 to the Oregon Patient Safety Commission. While the number is small in comparison with the tens of thousands of people safely restored to health in hospitals each year, it is one of several indicators of stalled progress in reducing serious medical errors. "The truth is, the culture of patient safety is not where it needs to be," said Bethany Higgins, administrator of the Oregon Patient Safety Commission.

Coding category: Discussion of patient safety in healthcare delivery (valence weight applied: Positive)
Illustrative text: At St. Agnes Hospital in Baltimore, heart attack patients are receiving faster treatment. Doctors, nurses and other hospital staff recently slashed by 22 percent the time it takes for an angioplasty to begin after the patient's arrival in the emergency room. Instead of 119 minutes, patients wait roughly 93 minutes. It's an important improvement, given studies that have linked faster treatment with a lower mortality rate, and the state requires 80 percent of patients to be treated within two hours. To be more efficient, St. Agnes staff members are employing a practice known as Lean Management to cut costs, reduce patient waiting times and improve safety.

Coding category: Discussion of "sentinel" events (valence weight applied: -)
Illustrative text: The nursing supervisor who allowed a disoriented 61-year-old patient to leave the Down East Community Hospital during a severe snowstorm in January 2008 has lost his nursing license. The patient was found dead in a nearby snowbank the next day.


Table A-7 Calculation Of Normalized Unweighted And Weighted Scores For Each News Article

                              A    B    C    D    E    F    G    H    I    Normalized score
Article 1: CQI
  Unweighted score            1    1    1    1    -    -    -    -    -    4/4 = 1
  Weighted score              1    1    1    1    -    -    1    1    1    7/7 = 1
Article 2: CQI
  Unweighted score            1    0    1    0    -    -    -    -    -    2/4 = 0.5
  Weighted score              1    0    1    0    -    -    1    0.5  0.5  4/7 = 0.57
Article 3: Patient Safety
  Unweighted score            -    -    -    -    1    1    -    -    -    2/4 = 0.5
  Weighted score              -    -    -    -    0.5  1    1    1    1    4.5/7 = 0.64

Key: A = discussion of health quality transparency/disclosure; B = discussion of variation in quality across health providers; C = web link for a CQI source; D = direct comparison between providers; E = discussion of issues related to patient safety in healthcare delivery; F = discussion of "sentinel" events; G = text of the article title included the keyword; H = location of article in the paper; I = space devoted to story. A, B, and E are the valence-weight-eligible codes.
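The arithmetic in Table A-7 can be sketched as follows. This is an interpretation of the table (not code from the dissertation): codes that do not apply to an article are treated as missing, and each score is divided by a fixed denominator of 4 (unweighted) or 7 (weighted).

```python
# Sketch of the Table A-7 normalization: sum the applicable code values
# (None marks a code that does not apply) and divide by a fixed denominator.

def normalized_score(code_values, denominator):
    """Sum applicable code values over a fixed denominator (4 or 7)."""
    return sum(v for v in code_values if v is not None) / denominator

# Article 2 (CQI), unweighted codes A-D: 1, 0, 1, 0 -> 2/4 = 0.5
article2_unweighted = normalized_score([1, 0, 1, 0], 4)

# Article 2 (CQI), weighted: A-D plus G = 1, H = 0.5, I = 0.5 -> 4/7
article2_weighted = normalized_score([1, 0, 1, 0, None, None, 1, 0.5, 0.5], 7)
```

Reproducing the table's rows, `article2_unweighted` is 0.5 and `article2_weighted` rounds to 0.57.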


Table A-8 Assignment Of Alliance Coding Among Author And Raters

Alliance                 Author   Rater 1   Rater 2
Cincinnati, OH             -        x         -
Cleveland, OH              x        -         -
Detroit, MI                -        -         x
Humboldt County, CA        x        -         -
Kansas City, MO            -        x         -
Maine                      x        -         -
Memphis, TN                -        -         x
Minnesota                  x        -         -
Puget Sound, WA            -        x         -
South Central, PA          x        -         -
West Michigan              -        -         x
Western New York           -        x         -
Willamette Valley, OR      -        -         x
Wisconsin                  x        -         -


Table A-9 Results Of Inter-Rater Agreement For Selection, Coding, And Weighting Of Media Articles

Raters           Date         Alliance          Period    Keyword                  Agreement (%)

Selection for full-text review
Author/Rater 1   07/09/2015   Kansas City, MO   2010-11   Quality AND Hospitals    98
Author/Rater 1   07/15/2015   Puget Sound, WA   2006-07   Patients AND Safety      91
Author/Rater 1   08/11/2015   Kansas City, MO   2006-07   Quality AND Physicians   85
Author/Rater 2   07/10/2015   Detroit, MI       2006-07   Quality AND Physicians   98
Author/Rater 2   07/15/2015   Memphis, TN       2010-11   Quality AND Hospitals    87
Author/Rater 2   06/09/2015   West Michigan     2006-07   Patients AND Safety      97

Coding
Author/Rater 1   08/08/2015   Kansas City, MO   2006-07   Quality AND Physicians   92
Author/Rater 1   08/19/2015   Kansas City, MO   2010-11   Quality AND Hospitals    91
Author/Rater 1   08/19/2015   Kansas City, MO   2010-11   Patients AND Safety      80
Author/Rater 2   08/19/2015   Detroit, MI       2010-11   Quality AND Physicians   95
Author/Rater 2   08/19/2015   Memphis, TN       2010-11   Quality AND Hospitals    89
Author/Rater 2   10/19/2015   West Michigan     2010-11   Patients AND Safety      90

Valence weighting
Author/Rater 1   08/11/2015   Kansas City, MO   2006-07   Quality AND Physicians   85
Author/Rater 1   08/19/2015   Kansas City, MO   2010-11   Quality AND Hospitals    70
Author/Rater 1   08/19/2015   Kansas City, MO   2010-11   Patients AND Safety      60
Author/Rater 2   08/19/2015   Detroit, MI       2010-11   Quality AND Physicians   83
Author/Rater 2   08/19/2015   Memphis, TN       2010-11   Quality AND Hospitals    75
Author/Rater 2   10/19/2015   West Michigan     2010-11   Patients AND Safety      74
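Assuming the agreement figures in Table A-9 are simple percent agreement (the share of articles on which the author and the second rater made the same decision; the function and variable names here are illustrative), the calculation can be sketched as:

```python
# Minimal sketch of percent agreement between two raters: each list holds one
# decision per article (e.g., 1 = select for full-text review, 0 = exclude).

def percent_agreement(rater_a, rater_b):
    """Share of items (in %) on which two raters made the same decision."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Example: raters agree on 3 of 4 screening decisions -> 75%
author = [1, 1, 0, 1]
rater2 = [1, 0, 0, 1]
```

Percent agreement does not correct for chance agreement; statistics such as Cohen's kappa do, but the table reports plain percentages.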


Appendix B: Results


Table B-1 Availability Of Quality Reports, By Type Of Measure And Alliance
(counts shown as 2007/2011)

Alliance                 Any report   Diabetes   Hypertension   Heart disease   Asthma   Depression
All alliances combined     92/128       9/25        3/9            64/82          9/32      5/7
Cincinnati, OH              6/9         0/2         0/0             4/6           0/1       0/0
Cleveland, OH               5/8         0/1         0/1             3/5           0/1       0/0
Detroit, MI                 7/9         1/3         0/1             6/5           1/5       1/1
Humboldt County, CA         9/13        1/2         0/0             7/9           2/3       0/0
Kansas City, MO             3/5         0/1         0/0             2/2           0/3       0/1
Maine                       7/7         1/1         0/1             5/5           0/2       0/0
Memphis, TN                 3/7         0/2         0/0             1/3           0/2       0/0
Minnesota                  12/16        4/5         1/4             9/11          4/5       3/4
Puget Sound, WA             3/5         0/1         0/0             2/4           0/2       0/1
South Central, PA           6/9         0/1         0/0             4/7           0/1       0/0
West Michigan               6/9         1/3         1/1             4/5           1/3       1/0
Western New York           10/12        0/1         0/0             8/8           0/1       0/0
Willamette Valley, OR       6/9         0/1         0/0             4/6           0/2       0/0
Wisconsin                   9/10        1/1         1/1             5/6           1/1       0/0


Table B-2 Availability Of Credible Quality Reports, By Alliance
(counts shown as 2006-7/2010-11)

Alliance                 Non-profit or     Medical records or    At least 1 NQF-     Total credible
                         government        patient experience    endorsed measure    reports
                         sponsor           survey data
All alliances combined     81/117            13/52                 53/82               3/39
Cincinnati, OH              6/9               0/3                   6/8                0/3
Cleveland, OH               5/8               0/2                   5/7                0/2
Detroit, MI                 5/7               0/2                   5/7                0/2
Humboldt County, CA         9/13              2/6                   3/6                1/3
Kansas City, MO             3/5               0/3                   3/5                0/3
Maine                       6/6               3/4                   4/6                1/3
Memphis, TN                 3/7               0/3                   2/5                0/3
Minnesota                   6/10              0/8                   5/9                0/5
Puget Sound, WA             3/5               1/3                   3/5                0/2
South Central, PA           6/9               1/4                   2/4                0/4
West Michigan               4/7               1/2                   3/4                0/2
Western New York           10/12              4/6                   4/6                0/2
Willamette Valley, OR       6/9               0/3                   5/7                0/2
Wisconsin                   9/10              1/3                   3/3                1/3


Table B-3 Distribution Of Media Articles For Alliance-Sponsored CQI Media Coverage, By Alliance
For each alliance, the search term(s) used in 2007 and 2011 are shown in parentheses; counts are number of results / articles selected for full-text review / articles coded.

Cincinnati, OH (2007: -; 2011: www.yourhealthmatters.org): 2007: 0/0/0; 2011: 0/0/0
Cleveland, OH (2007: -; 2011: www.betterhealthcleveland.org): 2007: 0/0/0; 2011: 0/0/0
Detroit, MI (2007: www.gdahc.org; 2011: www.mycarecompare.org): 2007: 0/0/0; 2011: 2/2/2
Humboldt County, CA (2007: -; 2011: www.aligningforceshumboldt.org): 2007: 0/0/0; 2011: 0/0/0
Kansas City, MO (2007: -; 2011: qualityhealthtogether.org): 2007: 0/0/0; 2011: 0/0/0
Maine (2007: www.mhmc.info; 2011: www.getbettermaine.org): 2007: 0/0/0; 2011: 8/8/8
Memphis, TN (2007: -; 2011: www.healthcarequalitymatters.org): 2007: 0/0/0; 2011: 2/2/2
Minnesota (2007: "Minnesota Community Measurement"/www.mncm.org; 2011: "Minnesota Community Measurement"/www.mnhealthscores.org): 2007: 11/11/11; 2011: 10/10/10
Puget Sound, WA (2007: -; 2011: www.wacommunitycheckup.org): 2007: 0/0/0; 2011: 0/0/0
South Central, PA (2007: -; 2011: www.aligning4healthpa.org): 2007: 0/0/0; 2011: 2/2/2
West Michigan (2007: -; 2011: www.rethinkhealthy.org): 2007: 0/0/0; 2011: 0/0/0
Western New York (2007: -; 2011: www.rx4excellence.org): 2007: 0/0/0; 2011: 0/0/0
Willamette Valley, OR (2007: -; 2011: www.partnerforqualitycare.org): 2007: 0/0/0; 2011: 1/1/2
Wisconsin (2007: www.wchq.org; 2011: www.wchq.org/www.wisconsinhealthreports.org): 2007: 1/1/1; 2011: 2/2/2


Table B-4 Distribution Of Media Articles For CMS-Sponsored CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Cincinnati, OH
  "Hospital Compare"         2007: 0/0/0   2011: 0/0/0
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Cleveland, OH
  "Hospital Compare"         2007: 1/1/1   2011: 2/2/2
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 1/1/1   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Detroit, MI
  "Hospital Compare"         2007: 0/0/0   2011: 0/0/0
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Humboldt County, CA
  "Hospital Compare"         2007: 0/0/0   2011: 0/0/0
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0


Table B-4 (Contd.) Distribution Of Media Articles For CMS-Sponsored CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Kansas City, MO
  "Hospital Compare"         2007: 0/0/0   2011: 2/2/2
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Maine
  "Hospital Compare"         2007: 0/0/0   2011: 2/2/2
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Memphis, TN
  "Hospital Compare"         2007: 0/0/0   2011: 1/1/1
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Minnesota
  "Hospital Compare"         2007: 2/2/1   2011: 2/2/2
  "HCAHPS"                   2007: 0/0/0   2011: 1/1/1
  "Nursing Home Compare"     2007: 3/3/3   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0


Table B-4 (Contd.) Distribution Of Media Articles For CMS-Sponsored CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Puget Sound, WA
  "Hospital Compare"         2007: 1/1/1   2011: 0/0/0
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
South Central, PA
  "Hospital Compare"         2007: 0/0/0   2011: 0/0/0
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
West Michigan
  "Hospital Compare"         2007: 1/1/1   2011: 3/3/3
  "HCAHPS"                   2007: 0/0/0   2011: 1/1/1
  "Nursing Home Compare"     2007: 0/0/0   2011: 1/1/1
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Western New York
  "Hospital Compare"         2007: 0/0/0   2011: 0/0/0
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 1/1/1
  "Home Health Compare"      2007: 0/0/0   2011: 1/1/1


Table B-4 (Contd.) Distribution Of Media Articles For CMS-Sponsored CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Willamette Valley, OR
  "Hospital Compare"         2007: 0/0/0   2011: 0/0/0
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 0/0/0
Wisconsin
  "Hospital Compare"         2007: 1/1/1   2011: 1/1/1
  "HCAHPS"                   2007: 0/0/0   2011: 0/0/0
  "Nursing Home Compare"     2007: 0/0/0   2011: 0/0/0
  "Home Health Compare"      2007: 0/0/0   2011: 1/1/1


Table B-5 Distribution Of Media Articles For Non-Alliance Non-CMS CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Cincinnati, OH
  Quality AND Physicians     2007: 36/7/7      2011: 24/2/2
  Quality AND Hospitals      2007: 74/16/15    2011: 57/11/11
Cleveland, OH
  Quality AND Physicians     2007: 15/2/1      2011: 39/4/4
  Quality AND Hospitals      2007: 39/6/5      2011: 71/7/7
Detroit, MI
  Quality AND Physicians     2007: 107/20/20   2011: 44/9/8
  Quality AND Hospitals      2007: 128/36/28   2011: 75/19/18
Humboldt County, CA
  Quality AND Physicians     2007: 6/0/0       2011: 14/2/2
  Quality AND Hospitals      2007: 12/3/3      2011: 71/10/4


Table B-5 (Contd.) Distribution Of Media Articles For Non-Alliance Non-CMS CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Kansas City, MO
  Quality AND Physicians     2007: 75/10/10    2011: 18/3/1
  Quality AND Hospitals      2007: 137/23/22   2011: 49/6/6
Maine
  Quality AND Physicians     2007: 53/5/4      2011: 81/18/13
  Quality AND Hospitals      2007: 85/7/7      2011: 113/15/15
Memphis, TN
  Quality AND Physicians     2007: 38/10/7     2011: 52/7/5
  Quality AND Hospitals      2007: 70/16/12    2011: 60/16/15
Minnesota
  Quality AND Physicians     2007: 102/27/25   2011: 134/12/10
  Quality AND Hospitals      2007: 197/22/20   2011: 250/19/18


Table B-5 (Contd.) Distribution Of Media Articles For Non-Alliance Non-CMS CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Puget Sound, WA
  Quality AND Physicians     2007: 104/18/16   2011: 40/5/5
  Quality AND Hospitals      2007: 151/28/28   2011: 74/19/17
South Central, PA
  Quality AND Physicians     2007: 13/4/2      2011: 17/4/4
  Quality AND Hospitals      2007: 33/12/10    2011: 21/10/9
West Michigan
  Quality AND Physicians     2007: 62/9/6      2011: 89/16/6
  Quality AND Hospitals      2007: 89/13/10    2011: 118/18/17
Western New York
  Quality AND Physicians     2007: 62/12/5     2011: 83/8/4
  Quality AND Hospitals      2007: 134/22/12   2011: 112/27/14


Table B-5 (Contd.) Distribution Of Media Articles For Non-Alliance Non-CMS CQI Media Coverage, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Willamette Valley, OR
  Quality AND Physicians     2007: 38/1/1      2011: 38/2/2
  Quality AND Hospitals      2007: 85/10/10    2011: 62/10/10
Wisconsin
  Quality AND Physicians     2007: 104/15/7    2011: 89/12/10
  Quality AND Hospitals      2007: 197/28/26   2011: 168/22/21


Table B-6 Distribution Of Media Articles For Media Coverage Of Patient Safety, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Cincinnati, OH
  Patients AND Safety        2007: 107/14/13   2011: 51/12/6
  "Medical Errors"           2007: 3/1/1       2011: 2/0/0
  "Medical Malpractice"      2007: 3/2/2       2011: 2/0/0
Cleveland, OH
  Patients AND Safety        2007: 43/6/6      2011: 59/1/1
  "Medical Errors"           2007: 1/1/1       2011: 3/3/3
  "Medical Malpractice"      2007: 14/4/4      2011: 12/1/1
Detroit, MI
  Patients AND Safety        2007: 109/10/9    2011: 82/14/11
  "Medical Errors"           2007: 14/3/1      2011: 5/4/2
  "Medical Malpractice"      2007: 14/2/1      2011: 13/1/0
Humboldt County, CA
  Patients AND Safety        2007: 11/0/0      2011: 44/8/8
  "Medical Errors"           2007: 0/0/0       2011: 1/1/1
  "Medical Malpractice"      2007: 0/0/0       2011: 1/0/0


Table B-6 (Contd.) Distribution Of Media Articles For Media Coverage Of Patient Safety, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Kansas City, MO
  Patients AND Safety        2007: 171/22/21   2011: 73/13/10
  "Medical Errors"           2007: 17/6/6      2011: 4/1/1
  "Medical Malpractice"      2007: 16/2/2      2011: 5/0/0
Maine
  Patients AND Safety        2007: 101/25/18   2011: 158/46/45
  "Medical Errors"           2007: 12/4/4      2011: 18/9/9
  "Medical Malpractice"      2007: 13/1/1      2011: 26/2/2
Memphis, TN
  Patients AND Safety        2007: 117/21/16   2011: 81/8/7
  "Medical Errors"           2007: 8/2/2       2011: 12/3/3
  "Medical Malpractice"      2007: 41/1/1      2011: 36/3/3
Minnesota
  Patients AND Safety        2007: 347/71/54   2011: 372/63/61
  "Medical Errors"           2007: 35/26/26    2011: 9/5/5
  "Medical Malpractice"      2007: 25/4/4      2011: 33/3/3


Table B-6 (Contd.) Distribution Of Media Articles For Media Coverage Of Patient Safety, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Puget Sound, WA
  Patients AND Safety        2007: 333/61/47   2011: 192/38/33
  "Medical Errors"           2007: 22/10/8     2011: 30/18/16
  "Medical Malpractice"      2007: 50/10/9     2011: 23/0/0
South Central, PA
  Patients AND Safety        2007: 54/14/12    2011: 36/6/5
  "Medical Errors"           2007: 7/2/2       2011: 1/1/1
  "Medical Malpractice"      2007: 12/0/0      2011: 7/3/3
West Michigan
  Patients AND Safety        2007: 134/11/9    2011: 139/15/13
  "Medical Errors"           2007: 8/2/2       2011: 4/4/4
  "Medical Malpractice"      2007: 27/2/0      2011: 12/1/0
Western New York
  Patients AND Safety        2007: 81/11/7     2011: 136/23/15
  "Medical Errors"           2007: 7/1/1       2011: 6/5/5
  "Medical Malpractice"      2007: 11/2/1      2011: 39/5/5


Table B-6 (Contd.) Distribution Of Media Articles For Media Coverage Of Patient Safety, By Alliance
Counts are number of results / articles selected for full-text review / articles coded.

Willamette Valley, OR
  Patients AND Safety        2007: 129/13/8    2011: 98/17/14
  "Medical Errors"           2007: 7/3/3       2011: 6/5/5
  "Medical Malpractice"      2007: 19/6/5      2011: 9/1/1
Wisconsin
  Patients AND Safety        2007: 242/82/77   2011: 203/46/45
  "Medical Errors"           2007: 46/34/32    2011: 8/4/3
  "Medical Malpractice"      2007: 90/5/5      2011: 108/6/6


Table B-7 Impact Of Dissemination Of CQI On Awareness And Use (Standard Errors Clustered On Individuals)
Columns: (1) Awareness Of CQI; (2) Use Of CQI In Choosing Providers; (3) Discussion Of CQI With Provider.

Variable (baseline for interpretation)                        (1)         (2)         (3)
Mean (2008)                                                   0.31        0.09        0.04
Availability Of Reports (6.42)                                0.008       0.024**     0.002
Applicability Of Reports (1.17)                               0.005       0.013***    0.003
Credibility Of Reports (0.17)                                -0.014      -0.018      -0.009
Alliance Dissemination (0.68)                                -0.003       0.002       0.005
Media Coverage Of Alliance CQI (0.58)                         0.014       0.011      -0.009
Media Coverage Of CMS CQI (0.43)                              0.002       0.016*      0.003
Media Coverage Of "Other" CQI (10.05)                         0.000       0.002       0.003**
Media Coverage Of Patient Safety Issues (18.06)               0.000       0.000       0.000
Population With Access To Health Plan Report‡, % (21.55)      0.000       0.001       0.001***
Family Income¶, In 1000 $ (45.40)                             0.001       0.000       0.000
Education, College Or More (0.63)                             0.0088      0.008      -0.025*
Employed (0.49)                                               0.017       0.013       0.002
Health Insurance Status
  Private Insurance (0.41)                                    Reference   Reference   Reference
  Uninsured (0.07)                                           -0.055      -0.046*     -0.009
  Public Insurance (0.52)                                     0.052**    -0.006       0.007
Self-Rated Health Status (2.96)                              -0.003      -0.010      -0.005
PAM∫ Score (65.70)                                            0.002***    0.001       0.000
Type Of Chronic Condition
  Diabetes (0.29)                                            -0.007      -0.033      -0.007
  Hypertension (0.66)                                         0.023       0.012      -0.009
  Heart Disease (0.16)                                       -0.046      -0.077**    -0.006
  Asthma (0.17)                                               0.040       0.000       0.015
  Depression (0.27)                                           0.018      -0.019      -0.002
Number Of Physicians Per Capita, By County (3.16)             0.050       0.036       0.032
Overall Satisfaction With Health Care Received (8.33)         0.005       0.001       0.000


Table B-8 Impact Of Dissemination Of CQI On Attitudes Towards CQI (Standard Errors Clustered On Individuals)
Columns: (1) Perceived Importance Of CQI; (2) Agreement That Doctors Differ In Quality; (3) Willingness To Switch Doctors Based On Quality.

Variable (baseline for interpretation)                        (1)         (2)         (3)
Mean (2008)                                                   0.91        0.66        0.60
Availability Of Reports (6.42)                               -0.006       0.000      -0.016
Applicability Of Reports (1.17)                               0.001      -0.009       0.012**
Credibility Of Reports (0.17)                                 0.011      -0.013      -0.030**
Alliance Dissemination (0.68)                                -0.001      -0.000       0.006
Media Coverage Of Alliance CQI (0.58)                         0.003      -0.016      -0.012
Media Coverage Of CMS CQI (0.43)                              0.004      -0.001       0.009
Media Coverage Of "Other" CQI (10.05)                        -0.001       0.004       0.000
Media Coverage Of Patient Safety Issues (18.06)               0.000       0.001       0.001
Population With Access To Health Plan Report‡, % (21.55)     -0.001*      0.000       0.001
Family Income¶, In 1000 $ (45.40)                             0.000       0.000      -0.001
Education, College Or More (0.63)                             0.013       0.048*      0.011
Employed (0.49)                                              -0.011       0.022       0.010
Health Insurance Status
  Private Insurance (0.41)                                    Reference   Reference   Reference
  Uninsured (0.07)                                           -0.016      -0.022       0.009
  Public Insurance (0.52)                                    -0.004       0.015       0.008
Self-Rated Health Status (2.96)                               0.016**    -0.010      -0.021*
PAM∫ Score, Range 1-100 (65.70)                               0.000      -0.001**    -0.001
Type Of Chronic Condition
  Diabetes (0.29)                                             0.006      -0.013       0.011
  Hypertension (0.66)                                         0.010       0.030      -0.050**
  Heart Disease (0.16)                                       -0.004       0.030      -0.007
  Asthma (0.17)                                              -0.002       0.001      -0.031
  Depression (0.27)                                           0.040**     0.019       0.007
Number Of Physicians Per Capita, By County (3.16)            -0.039      -0.004      -0.009
Overall Satisfaction With Health Care Received (8.33)        -0.008**     0.015***   -0.053***


Vita: Neeraj Bhandari

Education
2010 -        PhD Candidate, Health Policy and Administration, The Pennsylvania State University, University Park, PA
1994 - 1997   MD, General Medicine, Government Medical College, Amritsar, India
1987 - 1991   MBBS, Government Medical College, Amritsar, India

Employment
2006 - 2010   Intensive Care Unit Resident Physician, Handa Nursing Home, New Delhi, India

Refereed Publications
Bhandari, N., Shi, Y., Hearld, L. R., & McHugh, M. (2016). Impact of emergency department visit on disease self-management in adults with depression. Journal of Health Psychology, 1359105316650275.
Bhandari, N., Shi, Y., & Jung, K. (2016). Patient experience of provider refusal of Medicaid coverage and its implications. Journal of Health Care for the Poor and Underserved, 27(2), 479-494.
Scanlon, D. P., Shi, Y., Bhandari, N., & Christianson, J. B. (2015). Are health care quality "report cards" reaching consumers? Awareness in the chronically ill population. American Journal of Managed Care, 21(3), 236-244.
Bhandari, N., Shi, Y., & Jung, K. (2014). Seeking health information online: does limited healthcare access matter? Journal of the American Medical Informatics Association, 21(6), 1113-1117.

Selected Peer-Reviewed Conference Presentations
Bhandari, N., Shi, Y., & Jung, K. "Better Informed? Work in a Health Care Facility and Receipt of Recommended Primary Preventive Care," AcademyHealth (poster), 2015.
Bhandari, N., Shi, Y., Hearld, L., McHugh, M., & Scanlon, D. "Teachable Moment? Impact of Emergency Department Visit(s) on Disease Self-Management in Depressed Adults," Annual HHD/SON Interdisciplinary Research Forum and Social (poster), 2014.
Bhandari, N., Shi, Y., Hearld, L., McHugh, M., & Scanlon, D. "Teachable Moment? Impact of Emergency Department Visit(s) on Disease Self-Management in Depressed Adults," AcademyHealth (poster), 2014.