Do Financial Incentives Increase the Use of Electronic Health Records?
Findings from an Experiment
September 2013
BY LORENZO MORENO, SUZANNE FELT-LISK, AND STACY DALE
WORKING PAPER 20
ABSTRACT
Background
The Electronic Health Records Demonstration (EHRD), implemented by the Centers for
Medicare & Medicaid Services (CMS), provided financial incentives to physician practices to
use a certified EHR. Practices that met minimum EHR use requirements received payments on a
graduated scale, with larger payments for greater use of EHR functions.
Methods
The demonstration was implemented in four sites and targeted practices with 20 or fewer
providers supplying primary care to at least 50 fee-for-service (FFS) Medicare beneficiaries. The
demonstration was expected to operate for five years (June 1, 2009–May 31, 2014); however, it
was canceled in August 2011 because 43 percent of the practices did not meet program
requirements. The evaluation used a stratified, experimental design—412 treatment and 413
control practices—to estimate the impacts of the payments on adoption and use of EHR
functionalities.
Results
In June 2011, treatment group practices were, on average, 9 to 18 percentage points more
likely than control group practices to report using the 13 EHR functionalities queried of providers at
baseline (2008). The payments increased a summary score of EHR use, which ranged from 1 to
100, by more than 11 points, on average, relative to the control group (54 versus 43).
Conclusion
Moderate incentive payments did not lead to universal EHR adoption and use in a two-year
time frame. However, the demonstration showed that incentives can influence physician use of
EHRs. Although these results are encouraging for the potential effectiveness of the Medicare
EHR Incentive Program, they also suggest that meaningful use of EHRs on a national scale may
take longer than anticipated.
For more than a decade, the Institute of Medicine, the federal government, and other
influential stakeholders have envisioned health information technology (health IT) as a
promising tool for improving quality of health care and reducing costs.1,2,3,4,5 This consensus is
likely to have influenced the decision by Congress to enact the Health Information Technology
for Economic and Clinical Health (HITECH) Act of the American Recovery and Reinvestment
Act of 2009.6
HITECH created programs to promote the adoption and use of electronic health records
(EHRs) and electronic exchange of information by eligible providers.7 These programs provide
technical assistance and other support to the target population of eligible professionals and
hospitals to achieve “meaningful use” of EHRs.8 The largest of these programs, the Medicare
and Medicaid EHR Incentive Programs, provide financial incentives for the meaningful use of
certified EHRs to providers who voluntarily join them.9
The Electronic Health Records Demonstration (EHRD), funded and implemented by the
Centers for Medicare & Medicaid Services (CMS), was designed to evaluate whether providing
financial incentives increases physician practices’ adoption and use of EHRs.10 CMS expected
the use of this technology to result in structural and organizational changes that would improve
the quality of care delivered to chronically ill patients with fee-for-service (FFS) Medicare
coverage, while reducing the costs of care and improving practices’ financial performance.
Lessons from the EHRD evaluation could have direct implications for primary care providers
who have joined, or are considering joining, the ongoing Medicare EHR Incentive Program and,
therefore, may be of considerable interest to them, health IT policymakers, and other
stakeholders. This paper focuses on the impact of the demonstration on the adoption and use of
EHRs; findings on the impacts of EHRD on quality of care and costs will be reported elsewhere.
METHODS
Study Design
CMS initially planned to implement the demonstration in 12 sites in two phases one year
apart. The agency chose four sites for Phase I: Louisiana; Maryland and the District of Columbia;
southwestern Pennsylvania; and South Dakota. Phase II was to have consisted of eight more
sites starting a year later. However, CMS canceled Phase II before it began because of the
passage of HITECH. Therefore, EHRD consisted only of the four Phase I sites. On behalf of
CMS, 14 community partners recruited 900 interested practices, which CMS screened for
eligibility. The demonstration was expected to operate for five years (June 1, 2009–May 31,
2014); however, CMS canceled it in August 2011 because 43 percent of practices left the
program or did not meet program requirements.11
The demonstration targeted practices serving at least 50 traditional FFS Medicare
beneficiaries with certain chronic conditions for whom the practices provided primary care.
Under the original design, primary care providers (physicians, as well as nurse practitioners and
physician assistants who provide primary care) in practices with 20 or fewer providers were
eligible to earn incentive payments for (1) using at least the minimum functions of a certified
EHR (a systems payment, with increasing rewards for increasing use); (2) reporting 26 quality
measures for congestive heart failure, coronary artery disease, diabetes, and preventive health
services (a reporting payment); and (3) achieving specified standards on clinical performance
measures during the demonstration period (a performance payment, with increasing rewards for
better adherence to recommended care guidelines). All incentive payments under the
demonstration were to be made in addition to the FFS Medicare payments practices receive for
submitted claims. Physicians could have received up to $13,000 and practices up to $65,000 over
the first two years of the demonstration. Because the demonstration was terminated, the reporting
and performance payments were never made; CMS made only the systems payment for the first
two years of the demonstration in fall 2010 and fall 2011, which totaled $4.5 million.
The EHRD evaluation used a stratified, experimental design to allocate 825 eligible
practices that volunteered for Phase I of EHRD to treatment and control groups (Figure 1). This
design was used to achieve balance on practice characteristics that are important predictors of
adoption and use of EHRs (Table 1). In February 2009, practices from the four sites were
randomized in equal proportions into treatment and control practices within strata, defined by
site, number of primary care physicians, and whether the practice was in a medically underserved
area (MUA).
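The stratified assignment described above can be sketched as follows. This is an illustrative reconstruction, not the demonstration's actual randomization code; the practice records, field names, stratum labels, and seed are all hypothetical.

```python
import random
from collections import defaultdict

def stratified_randomize(practices, seed=0):
    """Assign practices to treatment or control in equal proportions
    within strata defined by site, practice-size category, and MUA status."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in practices:
        strata[(p["site"], p["size_category"], p["mua"])].append(p)
    assignments = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2  # odd strata give the extra practice to control
        for i, p in enumerate(members):
            assignments[p["id"]] = "treatment" if i < half else "control"
    return assignments

# Hypothetical practice records; field names are illustrative only.
practices = [
    {"id": i, "site": "Louisiana", "size_category": "1-2", "mua": i % 2 == 0}
    for i in range(8)
]
groups = stratified_randomize(practices)
print(sum(1 for g in groups.values() if g == "treatment"))  # 4
```

Because shuffling and splitting happen within each stratum, treatment and control groups are balanced on the stratifying variables by construction, which is the purpose of the design described above.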
The evaluation also included site visits to purposively selected practices in
each of the four sites (four treatment practices and two control practices in each site), as well as
telephone interviews with seven practices that voluntarily left the demonstration. A two-person
team visited the practices during May and June 2010. A semistructured protocol was used during
the discussions (which lasted one to two hours per practice) with at least one physician and an
administrative staff member knowledgeable about the demonstration.
Data Sources
Key measures for the evaluation, constructed from a web-based Office Systems Survey
(OSS), were (1) practices’ adoption and use of EHRs and other health IT, and (2) a summary
(composite) score that quantifies EHR use for the calculation of the incentive payment.
Soon after the start of the demonstration, CMS determined that seven of the treatment
practices and one of the control practices were ineligible because they failed to meet the terms
and conditions of the demonstration (Figure 1). An additional 43 treatment practices voluntarily
discontinued participation in the intervention. Between April and June 2010 and 2011, the OSS
was administered to treatment practices; for control practices, it was administered only in 2011.
The OSS collected information on practice characteristics, provider characteristics, and use of
EHRs and other health IT. All practices that had been randomized to the treatment or control
group, even those that left the intervention, were asked to participate. The final response rates
were 87 and 68 percent for treatment and control group practices, respectively.
To calculate EHR summary scores for practices currently using a certified EHR, the OSS
measured 53 functions (for example, prescribing medications, ordering laboratory tests and other
procedures, and care management and coordination) thought to be connected to improved care
(although, for many, a causative link is not yet empirically proven). These functions can also be
sorted into five domains: (1) completeness of information, (2) communication about care outside
the practice, (3) clinical decision support, (4) use of the system to increase patient
engagement/adherence, and (5) medication safety (Supplementary Appendix Table A.1). If
practices were to use all 53 functions for three-fourths or more of their patients, the total
composite score would equal 100. In addition to calculating this score, composite scores were
calculated for the five OSS domains.12 (Baseline scores cannot be estimated because application
data on EHR/other health IT use are available for only 13 of the 53 functions.) Based on the total
composite score for each treatment practice, CMS calculated payments during each
demonstration year. Practices received their payments in the fall following the end of each
demonstration year.
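The scoring rule described above (a practice using all 53 functions for three-fourths or more of its patients scores 100) can be illustrated with a simple sketch. The exact OSS weighting of functions and domains is not detailed here, so the equal per-function weighting and the linear partial credit below the three-fourths threshold are assumptions of this sketch, not the demonstration's actual algorithm.

```python
def composite_score(usage_fractions, full_credit_threshold=0.75):
    """Illustrative composite score: each function earns full credit when
    used for at least 75 percent of patients and proportional credit below
    that, scaled so that full use of all functions yields 100."""
    per_function = 100.0 / len(usage_fractions)
    score = 0.0
    for frac in usage_fractions:
        score += per_function * min(frac / full_credit_threshold, 1.0)
    return score

# A practice using all 53 functions for >= 75% of its patients scores 100.
print(round(composite_score([0.75] * 53), 6))  # 100.0
# Half the functions at the threshold and half unused scores about half that.
print(round(composite_score([0.75] * 26 + [0.0] * 27), 1))
```

Domain scores can be computed the same way over the subset of functions in each of the five domains.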
Figure 1. EHRD Flow Chart
Statistical Analysis
All randomized practices were included in an intent-to-treat analysis (Figure 1). Using data
from all practices that completed the 2011 OSS, treatment-control differences in any EHR/health
IT use and in use of each of the 13 EHR functions were estimated using separate regressions. We
conducted a similar analysis for the overall OSS summary score and the five OSS domain scores.
The regressions adjusted for the stratifying variables and the baseline measure of the 13
functions. Inclusion of these variables adjusts for any differences between treatment and control
groups due to survey nonresponse. Observations were weighted to adjust for survey nonresponse
and nonrandom demonstration attrition. We conducted sensitivity tests to confirm that the results
were similar in regressions that did not use baseline control variables and in regressions that did
not use weights (Supplementary Appendix Table A.2). Analyses were conducted using
STATA.13
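The regression adjustment described above can be sketched roughly as follows. The actual analysis was conducted in Stata; this Python sketch with simulated data only illustrates the idea of regressing an outcome on a treatment indicator and baseline covariates with analytic weights, and all variable names and data are hypothetical.

```python
import numpy as np

def adjusted_impact(treat, outcome, covariates, weights):
    """Weighted least-squares estimate of the treatment-control
    difference, adjusting for baseline covariates (the stratifying
    variables and baseline function use in the actual analysis)."""
    X = np.column_stack([np.ones(len(treat)), treat, covariates])
    w = np.sqrt(weights)  # WLS via rescaled OLS
    beta, *_ = np.linalg.lstsq(X * w[:, None], outcome * w, rcond=None)
    return beta[1]  # coefficient on the treatment indicator

# Simulated data standing in for the 2011 OSS sample (hypothetical).
rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)      # randomized assignment
baseline = rng.normal(size=n)      # e.g., baseline health IT use
weights = np.ones(n)               # nonresponse/attrition weights
outcome = 10 * treat + 2 * baseline + rng.normal(size=n)
print(round(adjusted_impact(treat, outcome, baseline[:, None], weights), 1))
```

With unit weights this reduces to ordinary regression adjustment; replacing `weights` with nonresponse and attrition weights reproduces the weighted analysis, and dropping the covariate columns reproduces the unadjusted sensitivity check.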
RESULTS
Participation
Practices were required to implement a certified EHR and use its minimum functions
each year to qualify for the systems payment (Table 2). Of the 412 originally randomized treatment
practices, 57 percent complied with these requirements by the end of the second year of the
demonstration (Table 3). The remaining practices either did not respond to the OSS,
left the demonstration voluntarily (for example, after going out of business or merging with
another practice), or, most commonly, were terminated because they failed to meet demonstration requirements.
Impacts on Selected Measures of Health IT Use
The analysis of the 2011 OSS data found statistically and substantively significant impacts
on any EHR/health IT use. Treatment group practices were nearly 10 percentage points more
likely than control group practices (89 versus 80 percent) to report any EHR/health IT use (p <
0.001), controlling for use in 2008 and the stratifying variables (Table 4). Treatment practices
also were 9 to 18 percentage points more likely than control practices to report using the
following functions: maintaining electronic patient visit notes, keeping electronic patient
problem lists, using automated patient-specific alerts and reminders, using electronic disease-
specific patient registries, disseminating patient-specific educational materials, making online
referrals to other providers, viewing lab tests online, printing and faxing prescriptions, and
digitally transmitting prescriptions to pharmacies. In particular, large treatment-control
differences existed for use of automated patient-specific alerts and reminders and for electronic
disease-specific patient registries (an 18 percentage point treatment-control difference in both cases;
p < 0.001). These treatment-control differences were similar in magnitude and statistical
significance, regardless of the use of baseline controls or the application of nonresponse weights
(not shown).
Impacts on Health IT Summary Score
EHRD had a statistically significant and positive impact on practices’ overall OSS scores,
which ranged from 1 to 100, and all five OSS domain scores (Table 5). After controlling for
practice characteristics and baseline health IT use, treatment group practices’ overall OSS scores
were more than 11 points higher than those of control group practices, on average (54 for
treatment versus 43 for control group practices; p < 0.001). In addition, treatment group
practices’ scores on all five domains were at least 1.5 points higher than those of control group
practices (with a maximum score of between 17 and 22 points in each domain; p < 0.001). There
were notably large impacts on the completeness of information in the EHR and medication safety
domains (2.4 and 3.4 points, respectively). In analyses that limited the sample to EHR users
(excluding the 96 practices without an EHR), positive impacts on health IT use were present
in the completeness of information and medication safety domains; however, there were no
significant treatment-control differences in communication about care outside the practice, clinical
decision support, or increasing patient engagement (not shown).
Limitations
Although the EHRD evaluation relied on an experimental design—making it a rigorous
study—it had several limitations. First, treatment practices could have overstated their EHR use
because the level of the incentive payment was determined by the level of use they reported in
the OSS. Although these simple attestations were confirmed by a second set of requests for
screenshots and more detailed responses for a random sample of respondents, there was no full,
independent verification. Second, the exclusion of eight practices originally classified as eligible
but later determined to be ineligible after randomization may have introduced a small degree of
selection bias to the OSS intention-to-treat impact estimates. Third, because of differential
response rates and nonrandom attrition between the treatment and control groups in the OSS, the
comparison between these two groups could also be unreliable, despite the use of nonresponse
and attrition analytic weights to minimize these biases. Finally, national generalizations cannot
be made because the sample of practices was purposively selected from only four sites.
Furthermore, the EHRD practices were probably more advanced in their thinking about and use
of EHRs than other small practices nationally. In fact, more than two-fifths of treatment and control
group practices (43 and 44 percent, respectively) used an EHR at the time of application to the
demonstration (Table 4). In contrast, a national estimate for the same year (2008) suggested that
only 10 to 13 percent of practices (albeit defined slightly differently) used an EHR.14
DISCUSSION
Site visits and interviews with practices that stopped participating in the demonstration
suggest two main reasons for the high attrition. First, it is difficult to implement an EHR.
Second, many practices lacked some or all of the conditions needed to surmount the difficulties:
project management skills; time, labor, and upfront financial resources; and a Medicare FFS
caseload large enough to realize sizable incentive payments. In contrast, practices that met
demonstration requirements and continued to participate seemed to already have the wherewithal
and intention to implement an EHR soon, and the financial incentives of EHRD motivated them
to accelerate the process. These findings are consistent with other qualitative studies of EHR
implementation.15,16,17,18,19
Lessons Learned
This evaluation provides some evidence about the health IT experience of a limited sample
of small- to medium-size primary care practices serving Medicare FFS beneficiaries. Because of
the demonstration’s termination, the evidence must be interpreted cautiously. If the
demonstration had run for the original five-year term, the effects could have been different from
those estimated from the current analysis. Nonetheless, we learned two policy lessons from this
limited evaluation.
First, moderate financial incentives can influence physician practices’ use of EHRs, but
incentives at that level cannot achieve universal adoption and use in a two-year time frame.
Although more than half the practices responded to the financial incentives for implementing and
using an EHR system, many practices were not able or willing to do so within the time frame the
demonstration required. Their decision to not respond to the incentives raises the important
question of whether the incentives should have been larger.
Second, targeting the incentives to individual practitioners instead of practices might be
more effective. Site visits found considerable variation within practices in individual
practitioners’ use of EHRs; often, decision making on EHR use was at the individual level.
However, incentive payments for a practice often were not passed through to individual
practitioners; rather, they were used for overall support of the practice or its EHR system. In the
HITECH Incentive Program, eligible professionals who receive the incentive payment can assign
it to the practice; however, it remains untested whether payment to the practice or to the
individual might be more effective.
Policy Context
The demonstration results must be interpreted cautiously, not only because of the early
termination of the demonstration, but also because of the rapid, concurrent changes in health IT
policy, including financial incentives and available technical assistance. Efforts that overlapped
with demonstration goals had the potential to support and encourage treatment and control group
practices’ adoption and use of EHRs.
Beginning in 2011, eligible providers could begin receiving payments under the HITECH
Incentive Program for demonstrating “meaningful use” of EHR, which included meeting a core
set of required criteria, as well as several selected criteria. There was a four-month overlap
between EHRD and the HITECH Incentive Program. In fact, a sizable number of treatment and
control practices that responded to the OSS reported changing decisions regarding EHR adoption
or the practice’s care delivery processes due to the Incentive Program by spring 2011 (44 and 41
percent, respectively). It is unclear whether the demonstration would have had as much influence
on EHR adoption and use in an environment unaccompanied by additional EHR-related
incentives.
Other federal, state, and local projects had goals similar to those of EHRD. Although many
of these initiatives may have enhanced the effects of EHRD, those in the early stages of
development seemed to have also made adoption and implementation more complicated. Based
on the site visits, some of the most actively participating practices reported they were delaying
initial decisions until they could determine how to meet the requirements for multiple program
opportunities with a single set of practice changes.
In sum, the demonstration had favorable impacts on EHR use, even though demonstration
participation for more than two-fifths of the practices was terminated, mostly because they did
not implement or sufficiently use an EHR within the time frame the demonstration required.
These positive findings are encouraging for the potential impact of the HITECH Incentive
Program, but also cautionary regarding the expectation of rapid conversion to meaningful use on
a national scale.
Table 1. Summary of Baseline Characteristics for Treatment and Control Group Practices (Percentages)

Practice Baseline Characteristic                 Treatment Group   Control Group   Difference†
Site*
  Louisiana                                            24.2             25.2          -1.0
  Maryland                                             31.0             30.8           0.2
  Pennsylvania                                         34.1             33.5           0.6
  South Dakota                                         10.7             10.4           0.3
Practice Size*
  1 to 2 physicians                                    52.5             52.2           0.3
  3 to 5 physicians                                    28.6             29.6          -1.0
  6 or more physicians                                 18.9             18.2           0.7
Located in an MUA*
  Yes                                                  29.5             29.1           0.4
  No                                                   70.5             70.9          -0.4
Practice Affiliation
  Unaffiliated                                         38.1             44.8          -6.7
  Affiliated‡                                          61.9             55.3           6.6
Located in a Rural Area
  Yes                                                  14.8             16.8          -2.0
  No                                                   85.2             83.3           1.9
Participating in Another EHR, Quality Improvement, or Quality Reporting Program
  Yes                                                  77.6             84.3          -6.7
  No                                                   22.4             15.7           6.7
Number of Practices                                     412              413

Source: EHRD practice application database, 2008.
*Stratifying variables.
†For all comparisons of baseline characteristics between the treatment and control groups, p > 0.05.
‡Owned by a hospital, hospital system, or larger medical group, or affiliated with a larger medical group, independent practice association, physician hospital organization, or other entity.
EHR = electronic health record; MUA = medically underserved area.
Table 2. Demonstration Minimum Requirements

1. Implement a certified* EHR by the end of the second demonstration year (May 31, 2011)
2. Use the EHR for:
   - Entering patient clinical notes
   - Recording/entering laboratory and other diagnostic test orders
   - Entering laboratory and other diagnostic test results
   - Documenting the ordering of prescription medications (new and refills)

*Valid June 2009 or later. Certification by the old Certification Commission for Health Information Technology or other certification organizations approved by the Office of the National Coordinator for Health Information Technology.
EHR = electronic health record.
Table 3. Summary of Practice Participation in the Demonstration

Status                                                      Treatment Group   Control Group
Practices randomized at the start of the demonstration        412 (100%)        413 (100%)
Practices eligible for the year 2 OSS*                        346 (84%)         389 (94%)
Completed the year 2 OSS                                      311† (76%)        267 (65%)
Reported having an EHR in the year 2 OSS                      264† (64%)        188 (46%)
Met minimum requirements for payment at the end of year 2     232 (57%)‡        n.a.

Source: OSS, conducted in spring and summer 2010 and 2011. Numbers in parentheses correspond to the percentage of the practices in each status category relative to the total number of practices randomized to the treatment or control group.
*Excludes practices that went out of business or merged practices, and withdrawn or terminated practices that refused to be contacted. (Most withdrawn or terminated practices remained in the survey sample.)
†Three practices that were asked to complete an OSS validation module but did not complete it or failed to provide the requested screenshots are considered to not have completed the OSS.
‡The denominator for this estimate is equal to the total number of practices randomized to the treatment group (row 1), except for seven treatment practices and one control practice determined by CMS to be ineligible soon after the start of the demonstration because they failed to meet the terms and conditions of the demonstration.
EHR = electronic health record; n.a. = not applicable; OSS = Office Systems Survey.
Table 4. Impacts of EHRD on Health IT Use, by Function

                                             Baseline Mean (Fall 2008)†   Adjusted Mean (Spring–Summer 2011)
EHR/Health IT Function                       Treatment      Control       Treatment      Control      Impact
Any EHR/Health IT Use                           43.4          44.4           89.8          80.2        9.6***
Electronic Patient Visit Notes                  41.9          44.4           83.8          68.6       15.2***
Electronic Patient Problem Lists                41.6          44.1           84.5          70.4       14.1***
Automated Patient-Specific Alerts
  and Reminders                                 33.4          31.0           63.1          45.3       17.9***
Electronic Disease-Specific Patient
  Registries                                    14.0          19.2           70.9          53.2       17.7***
Patients’ Email                                  8.5           7.8           30.8          29.8        1.0
Patient-Specific Educational Materials          33.7          34.6           58.6          42.2       16.4***
Online Referrals to Other Providers             15.4          16.7           70.2          57.7       12.5***
Laboratory Tests:
  Online order entry                            26.7          33.2           35.7          35.1        0.6
  Online results viewing                        41.3          43.8           68.3          58.5        9.8*
Radiology Tests:
  Online order entry                            14.8          15.7           19.5          22.4       -2.9
  Online results viewing
    (reports and/or digital films)              33.9          30.6           46.5          40.5        6.1
E-Prescribing:
  Printing and/or faxing Rx                     50.8          50.3           82.2          69.1       13.1***
  Online Rx transmission to pharmacy            32.5          27.6           86.8          71.8       14.9***
Number of Practices (Weighted)                   405           412            405           412
Number of Practices (Unweighted)                 324           268            324           268

Sources: OSS, conducted in spring and summer 2011, and data drawn from the applications practices submitted to EHRD in fall 2008.
†From fall 2008 application data.
Notes: Reported means are regression-adjusted. Regressions control for the stratifying variables (state, MUA, practice size) and the health IT functions practices reported on the application to the demonstration, listed above. The baseline value of any EHR/health IT use is included as a control for any EHR/health IT use at the end of year 2; similarly, the baseline value of each health IT function is included as a control for the corresponding health IT function at the end of year 2. Observations for treatment and control group practices are adjusted for nonresponse to the 2011 OSS and for demonstration attrition. The weighted sample reflects all randomized practices, except for seven treatment practices and one control practice that were determined by CMS to be ineligible soon after the start of the demonstration because they failed to meet the terms and conditions of the demonstration.
* p < 0.05; ** p < 0.01; *** p < 0.001.
CMS = Centers for Medicare & Medicaid Services; EHRD = Electronic Health Records Demonstration; MUA = medically underserved area; OSS = Office Systems Survey; Rx = prescription.
Table 5. Impacts of EHRD on OSS Scores, by Domain

                                                 Treatment Group        Control Group
                                                 Adjusted Mean          Adjusted Mean
OSS Score (Means)                                (Spring–Summer 2011)   (Spring–Summer 2011)   Difference
Overall OSS score                                      54.4                   42.8               11.4***
OSS Score Domains
  1. Completeness of information in the EHR            11.7                    9.3                2.4***
  2. Communication of care outside the practice        10.9                    9.0                1.9***
  3. Clinical decision support                         10.8                    8.5                2.3***
  4. Increasing patient engagement                      5.8                    4.5                1.4***
  5. Medication safety                                 14.7                   11.3                3.4***
Number of Practices (Weighted)                          405                    412
Number of Practices (Unweighted)                        324                    268

Sources: OSS, conducted in spring and summer 2011, and data drawn from applications practices submitted to EHRD in 2008.
Notes: Reported means are regression-adjusted. Regressions control for the stratifying variables (state, MUA, practice size) and health IT functions practices reported on the application to the demonstration (listed in Table 4). Because the OSS score could not be calculated for the baseline period from the application to the demonstration, the 13 health IT functions measured at baseline are used as a proxy for this score. Observations for treatment and control group practices are adjusted for nonresponse to the 2011 OSS and for demonstration attrition. The weighted sample reflects all randomized practices, except for seven treatment practices and one control practice that were determined by CMS to be ineligible soon after the start of the demonstration because they failed to meet the terms and conditions of the demonstration.
* p < 0.05; ** p < 0.01; *** p < 0.001.
CMS = Centers for Medicare & Medicaid Services; EHRD = Electronic Health Records Demonstration; MUA = medically underserved area; OSS = Office Systems Survey.
REFERENCES
1Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century.
Washington, DC: National Academies Press, 2001.
2Blumenthal, D. “Health Information Technology: What Is the Federal Government Role?” Publication no. 907. Washington, DC: Commonwealth Fund, 2006.
3Shekelle, P.M., S.C. Morton, E.B. Keeler, et al. “Costs and Benefits of Health Information Technology. Evidence Report/Technology Assessment 132.” AHRQ Publication no. 06-E006. Rockville, MD: Agency for Healthcare Research and Quality, 2006.
4Blumenthal, D., and J.P. Glaser. “Information Technology Comes to Medicine.” New England Journal of Medicine, vol. 356, no. 24, 2007, pp. 2527–2534.
5Congressional Budget Office. “Evidence on the Costs and Benefits of Health Information Technology.” Publication no. 2976. Washington, DC: CBO, 2008.
6U.S. Congress. American Recovery and Reinvestment Act of 2009. P.L. 111-5. Feb 17, 2009.
7Redhead, C.S. “The Health Information Technology for Economic and Clinical Health (HITECH) Act.” Congressional Research Service Report to Congress 7-5700. 2009.
8Blumenthal, D. “Stimulating the Adoption of Health Information Technology.” New England Journal of Medicine, vol. 360, no. 15, 2009, pp. 1477–1479.
9Jha, A.K. “Meaningful Use of Electronic Health Records. The Road Ahead.” Journal of the American Medical Association, vol. 304, no. 15, 2010, pp. 1709–1710.
10Centers for Medicare & Medicaid Services. “Electronic Health Records (EHR) Demonstration. Demonstration Summary.” Accessed July 25, 2012, at http://www.cms.gov/Medicare/Demonstration-Projects/DemoProjectsEvalRpts/downloads/EHR_DemoSummary.pdf.
11Centers for Medicare & Medicaid Services. Update: On August 11, 2011, CMS announced that the demonstration would end early. Accessed July 24, 2012, at http://www.cms.gov/Medicare/Demonstration-Projects/DemoProjectsEvalRpts/Medicare-Demonstrations-Items/CMS1204776.html.
12Felt-Lisk, S., R. Shapiro, C. Fleming, et al. “Evaluation of the Electronic Health Record
Demonstration: Implementation Report 2010, Appendix A.” Princeton, NJ: Mathematica Policy Research, 2012. Accessed October 5, 2012, at http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Reports/downloads/Felt-Lisk_EHRD_Final_2010.pdf.
13STATA software, release 11. College Station, TX: StataCorp, 2012.
14DesRoches, C.M., E.G. Campbell, R.R. Rao, K. Donelan, T.G. Ferris, A. Jha, R. Kaushal, D.E. Levy, S. Rosenbaum, A.E. Shields, and D. Blumenthal. “Electronic Health Records in Ambulatory Care—A National Survey of Physicians.” New England Journal of Medicine, vol. 359, no. 1, 2008, pp. 50–60.
15Fernandopulle, R., and N. Patel. “How the Electronic Health Record Did Not Measure Up to the Demands of Our Medical Home Practice.” Health Affairs, vol. 29, no. 4, 2010, pp. 622–628.
16Baron, R.J., E.L. Fabens, M. Schiffman, et al. “Electronic Health Records: Just Around the Corner? Or Over the Cliff?” Annals of Internal Medicine, vol. 143, no. 3, 2005, pp. 222–226.
17Miller, R.H., and I. Sim. “Physicians’ Use of Electronic Medical Records: Barriers and Solutions.” Health Affairs, vol. 23, no. 2, 2004, pp. 116–126.
18Frisse, M.E. “Health Information Technology: One Step at a Time.” Health Affairs, vol. 28, no. 2, 2009, pp. w-379–w-384.
SUPPLEMENTARY APPENDIX
Table A.1. EHR Functions Associated with the Five OSS Domains
Completeness of Information
• Paper records transitioned to the EHR system
• Paper charts pulled for recent visits
• Method to transition paper records
• Clinical notes for individual patients
• Allergy lists for individual patients
• Problem/diagnosis lists for individual patients
• Patient demographics
• Patient medical histories
• Record that instructions/educational information was given to patients
• Record/enter new prescriptions and refills
• Record/enter lab orders
• Scan paper lab results
• Review lab results electronically
• Record/enter imaging orders
• Scan paper imaging results
• Review imaging results electronically

Communication About Care Outside the Practice
• Print/fax lab orders
• Fax lab orders electronically from system
• Transmit lab orders electronically directly from system to facility with capability to receive
• Print/fax imaging orders
• Fax imaging orders electronically from system
• Transmit imaging orders electronically directly from system to facility with capability to receive
• Transfer electronic lab results (received in non-machine-readable format) directly into system
• Enter electronic lab results manually
• Receive electronically transmitted lab results directly into system
• Transfer electronic imaging results (received in non-machine-readable format) directly into system
• Enter electronic imaging results manually into electronic system (whether received by fax, mail, or telephone)
• Receive electronically transmitted imaging results directly into system
• Enter requests for referrals/consultation
• Transmit medication lists/information
• Transmit lab results (machine-readable)
• Transmit imaging results (machine-readable)
• Receive electronically transmitted reports directly into system
• Print prescriptions, fax to pharmacy/hand to patient
• Fax prescription orders electronically from system
• Transmit prescription orders electronically directly from system to pharmacy with capability to receive

Clinical Decision Support
• Enter clinical notes into templates
• View graphs of height/weight data over time
• View graphs of vital signs data over time
• Flag incomplete/overdue test results
• Highlight out-of-range test levels
• View graphs of lab/test results over time
• Prompt clinicians to order tests/studies
• Review and act on reminders at the time of the patient encounter
• Reference information on medications
• Reference guidelines when prescribing
• Search for or generate a list of patients:
  - Requiring a specific intervention
  - On a specific medication
  - Who are due for a lab or other test
  - Who fit a set of criteria (age, for example)

Use of the System to Increase Patient Engagement/Adherence
• Manage telephone calls
• Exchange secure messages with patients
• Patients view records online
• Patients update information online
• Patients request appointments online (not scored)
• Patients request referrals online (not scored)
• Produce hard-copy or electronic reminders for patients about needed tests, studies, or other services
• Generate written or electronic educational information to help patients* understand their condition or medication
• Create written care plans to help guide patients* in self-management
• Prompt provider to review patient self-management plan with patient* during a visit
• Modify self-management plan as needed following a patient* visit
• Identify generic or less expensive brand alternatives at time of prescription entry
• Reference drug formularies to recommend preferred drugs

Medication Safety
• Maintain medication list
• Generate new prescriptions
• Generate prescription refills
• Select medication (from a drop-down list, for example)
• Calculate appropriate dose/frequency
• Screen prescriptions for drug allergies, drug-drug interactions, drug-lab interactions, and drug-disease interactions
* Congestive heart failure, coronary artery disease, diabetes, and preventive care patients.
OSS = Office Systems Survey.
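The OSS items in Table A.1 feed the practice-level summary score of EHR use. The actual scoring rules are not reproduced in this appendix; purely as an illustration, a minimal sketch of one plausible tally — counting the scored functions a practice reports using within each domain, and skipping the items marked "not scored" — might look like the following (the data structure, function name, and example responses are all hypothetical):

```python
# Hypothetical tally of OSS functions by domain. This is not the
# demonstration's scoring algorithm; it simply counts the functions a
# practice reports using, skipping items marked "not scored" in Table A.1.

NOT_SCORED = {
    "Patients request appointments online",
    "Patients request referrals online",
}

def domain_use_counts(responses):
    """responses maps domain -> {function name: True/False used}.

    Returns domain -> (functions used, scored functions in domain).
    """
    counts = {}
    for domain, functions in responses.items():
        scored = {f: used for f, used in functions.items() if f not in NOT_SCORED}
        counts[domain] = (sum(scored.values()), len(scored))
    return counts

# Hypothetical responses for one practice, using items from Table A.1.
example = {
    "Medication Safety": {
        "Maintain medication list": True,
        "Generate new prescriptions": True,
        "Calculate appropriate dose/frequency": False,
    },
    "Use of the System to Increase Patient Engagement/Adherence": {
        "Exchange secure messages with patients": True,
        "Patients request appointments online": True,  # not scored
    },
}

print(domain_use_counts(example))
```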
Table A.2. Description of Sensitivity Tests
Sensitivity test: Compare respondents to the 2011 OSS to the full sample of randomized practices eligible to respond to the OSS.
Rationale: To assess nonresponse bias.

Sensitivity test: Estimate model with and without nonresponse weights.
Rationale: To test whether results were sensitive to nonresponse weights.

Sensitivity test: Estimate model with and without baseline control variables.
Rationale: To test whether results were sensitive to baseline control variables (stratifying variables and baseline use of health IT functions).
OSS = Office Systems Survey.
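The second test in Table A.2 — estimating the model with and without nonresponse weights — can be illustrated with a minimal sketch. The snippet below compares an unweighted and a weighted treatment–control difference in mean summary scores; the scores and weight values are entirely hypothetical, and the evaluation's actual models were estimated in Stata (release 11), not Python:

```python
# Hypothetical illustration of the weighted vs. unweighted sensitivity
# test: compare the treatment-control difference in a summary score
# with and without nonresponse weights.

def weighted_mean(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical summary scores (1-100 scale) and nonresponse weights.
treat_scores = [60, 55, 48, 70]
treat_weights = [1.0, 1.2, 0.8, 1.0]
control_scores = [45, 40, 50, 38]
control_weights = [1.1, 0.9, 1.0, 1.0]

unweighted_diff = (sum(treat_scores) / len(treat_scores)
                   - sum(control_scores) / len(control_scores))
weighted_diff = (weighted_mean(treat_scores, treat_weights)
                 - weighted_mean(control_scores, control_weights))

# If the two estimates are close, the impact estimate is not sensitive
# to the nonresponse weights.
print(round(unweighted_diff, 2), round(weighted_diff, 2))
```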
Improving public well-being by conducting high-quality, objective research and surveys
www.mathematica-mpr.com
PRINCETON, NJ - ANN ARBOR, MI - CAMBRIDGE, MA - CHICAGO, IL - OAKLAND, CA - WASHINGTON, DC
Mathematica® is a registered trademark of Mathematica Policy Research, Inc.