
White paper

Crowdsourcing: Experience & application in the healthcare domain

Joseph Bastante, Head of Strategy & Enterprise Architecture, Quintiles Global IT
Joseph Goodgame, Head of Research and Development Information and Data, Quintiles Global IT
Greg Klopper, Enterprise Architect/Data Scientist, Quintiles Global IT
Angela Hill, PharmD, Sr. Director, IT

Connecting insights. Better outcomes.

Executive summary

Organizations and entrepreneurs have tapped into the power of crowds for many generations. With the pervasiveness of social platforms have come new, simpler, and more economical methods to draw upon the capabilities of a truly global crowd. In this paper we present three case studies in which crowdsourcing was used in the healthcare industry. From these scenarios we present guidance and considerations in successfully planning and conducting crowdsourcing programs.


Table of contents

Executive summary
Introduction
Crowdsourcing scenarios
Insights gained
Supportive factors
Summary
About the authors


Introduction

The term “crowdsourcing” may initially seem trendy or even trite, but its impact has been felt by all of us. If you have eaten canned food, watched television, or enjoyed M&Ms, your life has been touched by the power of crowds. For example, food preservation using metal cans was introduced in response to a public “crowdsourcing” contest issued by Napoleon Bonaparte in the 18th century, television ratings and programming are determined by assessment or self-reporting of mass viewing habits, and the beloved blue M&M was selected by way of public votes to a toll-free telephone number (Glatz, 2010; Webster, 2008).

In this paper, focus is given not to crowdsourcing in theory, but to crowdsourcing in practice, specifically within the healthcare domain. We present three use cases, describe the purpose, approach, and outcomes of each, and then share practical considerations and guidance in effectively employing crowdsourcing for like purposes. Before reviewing the case studies, let us address two matters of relevance: a brief definition of crowdsourcing and a review of the crowdsourcing process.

Jeff Howe of WIRED Magazine is credited with introducing the term “crowdsourcing” in 2006 (Howe, 2006). In his seminal article “The Rise of Crowdsourcing,” he explained how the website iStockPhoto changed the landscape of professional photography by offering inexpensive stock photography from crowds of photographers and photo enthusiasts. The result: stock photos could be purchased for as little as $1 instead of the hundreds of dollars traditionally paid to professional photographers. As a professional photographer reflecting on the rise of iStockPhoto put it in the article, “I see it as the first hole in the dike.”

For purposes of this paper, crowdsourcing is defined as tapping into the capabilities of the public or targeted crowds to complete business tasks that an organization would normally complete internally or through a partner. This broad definition allows for the inclusion of crowdsourcing events that preceded the Internet, such as the examples given at the opening of this section. The presence of technology is not a requirement for crowdsourcing, though it has no doubt accelerated its progress.

Case studies presented in this paper relied on two crowdsourcing platforms, Amazon Mechanical Turk and InnoCentive. As Amazon’s platform was more widely used in the case studies and the industry broadly, we provide a brief review of the crowdsourcing process using Amazon Mechanical Turk. Note that this section approaches the crowdsourcing process from a tool-oriented perspective. The intent is to provide important background and context for the crowdsourcing scenarios. Later sections of this paper provide a broader view of crowdsourcing including specific recommendations that transcend the tools.

Amazon Mechanical Turk is a web-based marketplace connecting workers with work providers. Mechanical Turk divides participants into two broad categories: Requesters and Workers. Prior to defining tasks for the crowd, Requesters must establish and fund an account. Once funded, Requesters allocate work to Workers via one or more “human intelligence tasks,” or HITs. Each HIT is defined by the Requester and includes information such as the task detail, specific data to be collected, worker qualifications, payment amount, answer timing, and number of Workers accepted per question. Once defined, HITs are published, which makes them available to the Worker pool. As Workers complete HITs, responses are collected by Turk and made available to the Requester. A comma-separated values (CSV) file is also available for bulk export of the HIT responses. Note that the Requester has the ability to reject a response, which prevents payment from being issued to the associated Worker and makes the HIT available to the Worker pool once again.
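Because this Requester workflow is entirely API-driven, it can be scripted end to end. The following is a minimal sketch of those steps in Python using the boto3 MTurk client; the HIT title, reward, timing values, and question file are illustrative assumptions rather than the parameters used in our case studies.

# Minimal sketch of the Requester workflow: check the funded account, define
# and publish a HIT, then collect submitted responses. All task parameters
# here are illustrative placeholders, not values from the case studies.
import csv
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# A Requester account must be established and funded before HITs are defined.
print("Available balance:", mturk.get_account_balance()["AvailableBalance"])

# question.xml would contain a QuestionForm (or HTMLQuestion) document
# describing the task detail and the specific data to be collected.
with open("question.xml") as f:
    question_xml = f.read()

hit = mturk.create_hit(
    Title="Rank clinical trial burden factors",   # hypothetical task
    Description="Short subjective ranking questions",
    Keywords="survey, healthcare, ranking",
    Reward="0.50",                    # payment amount per assignment, in USD
    MaxAssignments=3,                 # number of Workers accepted per question
    AssignmentDurationInSeconds=600,  # answer timing
    LifetimeInSeconds=86400,          # how long the HIT stays published
    Question=question_xml,
)
hit_id = hit["HIT"]["HITId"]

# As Workers complete the HIT, collect their responses; approve_assignment()
# or reject_assignment() would then control whether payment is issued.
assignments = mturk.list_assignments_for_hit(
    HITId=hit_id, AssignmentStatuses=["Submitted"]
)["Assignments"]

with open("responses.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["WorkerId", "AssignmentId", "Answer"])
    for a in assignments:
        writer.writerow([a["WorkerId"], a["AssignmentId"], a["Answer"]])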

Photography was neither the first nor the last industry to feel the impact of crowdsourcing. U.S. encyclopedia sales dropped from $800 million in 1989 to $300 million in 2003 (Wong, 2009). Meanwhile, crowdsourced Wikipedia has enjoyed stellar growth, with 14.5 billion page views in April of 2012.

Crowdsourcing providers reported a 75% increase in revenues in 2011 to $376 million. During that same period, crowdsourcing workers increased by 100% (Loten, 2012).



More complex scenarios are possible and were used in the case studies. For example, Mechanical Turk supports parameterized tasks (i.e., tasks that may be submitted multiple times while varying a key component of the question for each submission). Refer to the “Public Healthcare System Analysis” section later in this paper for an example of parameterized tasks. Amazon Turk also supports full programmatic access to the services, for example, to define or publish tasks.
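As an illustration of parameterized tasks, the sketch below publishes one HIT per country from a single question template via the boto3 MTurk client. The template file, placeholder token, and task wording are assumptions for illustration; the actual HITs used in the public healthcare scenario are summarized later in Table 3.

# Sketch of a parameterized task: one question template, published as a
# separate HIT for each parameter value (here, a country name). The template
# file, placeholder token, and wording are illustrative assumptions.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

with open("healthcare_question_template.xml") as f:
    template = f.read()                        # contains a ${country} token

countries = ["Argentina", "Belgium", "Canada"]  # roughly 60 countries in practice

for country in countries:
    mturk.create_hit(
        Title=f"Describe the public healthcare system of {country}",
        Description="Three short questions plus a supporting web reference",
        Keywords="healthcare, research",
        Reward="0.60",                         # assumed per-response reward
        MaxAssignments=3,                      # three responses per country
        AssignmentDurationInSeconds=1800,
        LifetimeInSeconds=172800,
        Question=template.replace("${country}", country),
    )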

Crowdsourcing scenarios

Mathematical models of patient burden

This first case study demonstrates the use of crowdsourcing for complex and highly specialized human tasks. Crowdsourcing was used in this case to solicit a proposed mathematical model of patient burden in clinical trials. Patient burden is a key consideration in designing medicinal products and especially in designing clinical trials for those products. For example, a given clinical trial may require subjects to participate in long or uncomfortable tests, have blood drawn frequently, and require weekly physician visits. Such studies may have difficulty recruiting subjects and may experience delays in enrolling subjects. The crowdsourcing challenge presented was to identify the key factors affecting patient burden and propose a model for how these factors interact to yield an estimated quantification of patient burden.

The following table includes a summary of the key parameters for the crowdsourcing challenge.

Figure 1 Amazon Turk crowdsourcing process overview: the Requester sets up an account, defines HITs (HIT task information, worker qualifications, payment info and timing), and publishes them; HITs are answered and data collected; results are made available for analysis via a CSV file.

Crowdsourcing scenarios presented in this paper:

• Mathematical models of patient burden
• Patient burden scoring
• Public healthcare system analysis


As shown in Table 1, the InnoCentive platform was used because it is well suited to such specialized tasks. A substantial reward was offered, commensurate with the level of insight and analysis expected from the winning respondents. One might expect such specialized tasks to be less likely to receive irrelevant or valueless responses. In fact, we found them to be twice as likely to receive low-relevance responses as the simpler human tasks described in later scenarios. Given the nature of the responses, response quality and relevance were checked manually, unlike later scenarios, which included a predetermined quality approach comprising both automated and manual activities.

In contrast to the crowdsourcing process described earlier for Amazon Mechanical Turk, the replies were captured as documents for review, not as data compiled in a CSV file. This is appropriate given that InnoCentive is focused on analytical challenges rather than high-volume simple tasks.

At the end of the challenge, financial reward was partially and unequally distributed among three respondents whose results did not constitute a complete answer, but provided sufficient insight and ideas to jump start the modeling process, which was completed internally. Compensated respondents invested considerable thought and effort in their proposed models.

It is difficult to know whether a different formulation of the challenge would have led to better results. This proved to be a core difficulty with the InnoCentive challenge: responses are few and arrive slowly over several months, and it is expensive and likely counter-productive to run several versions of the challenge at the same time. It is therefore critical to seek out and present the best possible challenge formulation the first time.

Patient burden scoring

The prior crowdsourcing challenge yielded model constructs describing patient burden variables and their interrelationships. To finalize and validate the relative impact of individual patient burden factors, we employed Amazon Mechanical Turk. Respondents (or “Workers” in the Turk lexicon) were asked to rank patient burden factors, complete pair-wise comparisons of factors to determine which represented a higher burden, and answer subjective questions related to clinical trials. Table 2 summarizes this scenario’s parameters.

Table 1 Patient burden models – crowdsourcing structure

Mathematical models of patient burden
Type of activity: Complex, specialized analysis
Crowdsourcing platform: InnoCentive
Total reward offered: $15,000
Total reward paid: $10,000
Number of questions presented: 1
Time from posting to closed: 4 months
Number of responses: Approximately 25
Compensated responses: 3


Given the simplicity, low cost, and higher volume of this scenario, the crowdsourcing process varied in two key ways from the prior scenario. First, the crowdsourcing questions and the reward level were tested through a series of smaller runs, initially seeking only 10 responses and then 50. The purpose was to ensure that the questions were clear and that the reward level was sufficient to gain interest among Workers. Second, a strategy was devised to automatically detect unreliable responses. Though it is not feasible to automatically detect all untrustworthy responses, we found it possible to detect many. Internal consistency was used as a first validation technique: questions were structured such that the answer to one question implies the answer to another. Similar questions were presented, for example, comparing factor “A” to “B” in one question and factor “B” to “A” in another. In addition, a computer program was written to check for common patterns in unreliable responses, such as selecting the first or the same response option every time and giving overly brief answers. The program also flagged answers for human review based on missing answers, inconsistent responses, or values falling too far outside the known variability range; such signals are not necessarily disqualifying on their own, but they require a human judgment call.
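A simplified sketch of this kind of automated screening is shown below. The field names, question structure, and thresholds are illustrative assumptions; the intent is only to show how pairwise consistency, repeated-answer patterns, brevity, and missing answers can be flagged for human review.

# Illustrative screening of a single response, represented as a dict of
# answers keyed by question id. Field names and thresholds are assumptions.
def screen_response(resp):
    flags = []

    # Internal consistency: "A vs B" and "B vs A" both record the factor the
    # respondent judged to be the higher burden, so they should match.
    if resp.get("higher_burden_A_vs_B") != resp.get("higher_burden_B_vs_A"):
        flags.append("inconsistent pairwise comparison")

    # Common unreliable pattern: the same option selected on every ranking item.
    rank_answers = [v for k, v in resp.items() if k.startswith("rank_")]
    if rank_answers and len(set(rank_answers)) == 1:
        flags.append("same option selected throughout")

    # Overly brief free-text responses.
    if len(resp.get("free_text", "").split()) < 5:
        flags.append("free-text answer too brief")

    # Missing answers.
    if any(v in (None, "") for v in resp.values()):
        flags.append("missing answer(s)")

    return flags   # a non-empty list routes the response to human review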

Remarkably, many of the responses were received within minutes of publishing the task. The majority of the responses, about 85%, satisfied both automated and manual quality checks. In a matter of hours, we received hundreds of responses from a truly global workforce.

A final note on this case study: Though we used Amazon Mechanical Turk, we also used a traditional web survey tool to seek similar input from a controlled, internal crowd. This provided another set of data points to seed initial variability parameters, fine-tune questions, and assess survey intuitiveness. As demonstrated in this paper, crowdsourcing transcends any one tool. Success comes from structuring the task and incentives well, using the best tool for the task, engaging the right crowd, and ensuring data reliability and quality.

Public healthcare system analysis

The purpose of this case study was to collect basic information about public healthcare programs in approximately 60 countries. Though such work could have been completed using internal staff, the supposition was that it would be faster and more economical to tap into the global crowd through Amazon Mechanical Turk. Each respondent answered three questions pertaining to one of the 60 countries.

Table 2 Patient burden scoring – crowdsourcing structure

Patient burden scoring
Type of activity: Relatively simple, subjective responses
Crowdsourcing platform: Amazon Mechanical Turk
Total reward: About $300
Reward per respondent: About 40-75 cents
Number of responses sought: 500 in the final round
Time from posting to closed: 12 hours
Actual number of responses: 500


The three questions were as follows. First, does the given country have a public, private, or mixed healthcare program? Second, are there any limitations on participation in the healthcare program (e.g., is it offered only to the poor or elderly)? Finally, the respondent was asked to provide a reference to a web-based resource substantiating the response. Table 3 summarizes this case study.

As shown in Table 3, three responses were allowed for each country. Mechanical Turk parameterization features were employed to establish a single set of questions yet submit 60 HITs, one for each country. As with the other scenarios, prior thought was given to ensuring data integrity. Collecting three responses per country enabled consistency checking among responses: agreement among all three responses for a given country increased our confidence in the results. (No individual was permitted to respond more than once per country.) Requiring one or more reference hyperlinks allowed a greater level of quality checking. About 30% of responses were rejected because they lacked relevant references; questions for the rejected responses were reposted. Once all responses were obtained, approximately 85% were consistent (i.e., each of the three responses for a country was in agreement).
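The sketch below illustrates the two checks described above for a single country: requiring a usable web reference and testing agreement among the three responses. The response structure (dicts with "program_type" and "reference_url" fields) is an assumption for illustration, not the actual data layout used in the case study.

# Illustrative per-country review: reject responses lacking a web reference
# and check whether the remaining responses agree on the program type.
import re

URL_PATTERN = re.compile(r"https?://\S+")

def review_country(country, responses):            # expects three responses
    rejected = [r for r in responses
                if not URL_PATTERN.search(r.get("reference_url", ""))]
    accepted = [r for r in responses if r not in rejected]

    # Rejected HITs were reposted in the case study, so three valid answers remain.
    answers = {r["program_type"].strip().lower() for r in accepted}
    consistent = len(accepted) == len(responses) and len(answers) == 1

    return {"country": country,
            "rejected": len(rejected),
            "consistent": consistent}

example = review_country("Canada", [
    {"program_type": "Public", "reference_url": "https://example.org/ca"},
    {"program_type": "public", "reference_url": "https://example.org/health"},
    {"program_type": "Public", "reference_url": "https://example.org/canada"},
])
print(example)   # {'country': 'Canada', 'rejected': 0, 'consistent': True}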

As listed in Table 3, 180 responses were received within about 17 minutes. Not only was crowdsourcing cost-effective in this case, but it also greatly reduced the time needed to collect public healthcare system information.

Insights gained

Experiences have accumulated and perspectives have changed as we’ve progressed through the scenarios documented in this paper as well as others not presented. We’ve acquired a greater appreciation for the breadth of information needs that could be effectively satisfied through crowdsourcing approaches. The more frequently crowdsourcing was applied, the more readily new ideas for crowdsourcing emerged. We have learned of the importance of response data quality and the need to plan appropriately to minimize, detect, and address quality issues. In the scenarios presented in this paper, we have found approximately 10-25% of responses to be clearly unreliable, with the number increasing for higher reward, more complex tasks. Properly structuring crowdsourcing tasks, piloting tasks, and fine-tuning tasks and rewards were essential in increasing the value of information returned.

Table 3 Public healthcare analysis – crowdsourcing structure

Public healthcare system analysis

Type of activity: Relatively simple, factual responses

Crowdsourcing platform: Amazon Mechanical Turk
Total reward: About $120
Reward per respondent: About 60 cents
Number of countries analyzed: 60
Number of responses sought per country: 3
Time from posting to closed: 17 minutes
Number of responses (including rejects): 180


Crowdsourcing guidance framework

Figure 2 depicts the fundamental components of a crowdsourcing event as well as supportive factors. Guidance and considerations for each are presented in this section. The model shown in Figure 2 is not necessarily exhaustive but is intended to be sufficient to convey learning from our crowdsourcing work.

Figure 2 Crowdsourcing guidance framework components: crowdsourcing elements (the need, the crowd, the task, the platform, the results) and supportive factors (results quality, confidentiality, ethics).

The need

Opportunities to exploit crowdsourcing are truly vast. The more experience one gains with such approaches, the more readily new opportunities are identified. Given the broad applicability, only a few recommendations need be made.

• Clarity of need – The need must be clear, bounded, and unambiguously articulated to the target audience.

• Independence of need – In general, crowdsourcing efforts are more successful when needs are discrete and have minimal interdependencies. For example, in a software development setting, crowdsourcing techniques are often used to implement software components. However, such development may be dependent upon other potentially crowdsourced tasks, such as proposing a component design. Such dependencies increase complexity and coordination requirements. Companies have emerged (e.g., crowdflower.com) that will divide a complex task into component tasks, manage the individual task submissions, and consolidate results. In effect, the crowdsourcing process can itself be crowdsourced to address complex, interdependent tasks.

The crowd

An essential facet of the crowdsourcing process is identifying the target crowd and ensuring that the responses are obtained from the intended crowd.

• Non-random distribution of the crowd – One must be aware of the constitution of a particular crowd and factors that may bias crowd makeup. For example, given the fast turnaround time for simple tasks (often minutes), time of day may influence crowd makeup as a result of time zone differences. Participant specialties and demographics often vary widely across platforms. Examples of such demographics include education level, country of residence, and professional qualifications.

• Qualifications of the crowd – Platforms such as Amazon Mechanical Turk seek to provide mechanisms to prevent unqualified respondents from replying. Qualification questionnaires may be used, which require prospective respondents to demonstrate competence prior to actual response submission. Platforms normally allow users to view the performance history of respondents and automatically screen out poor performers. Use of such features is advisable.

• Diversity of the crowd – Given the global nature of crowdsourcing platforms, one must be sensitive to the cultural differences and norms of the diverse audience. Questions and material must be widely understood and developed with consideration for the global audience. Certain tools allow work to be distributed among workers by country or language, which can help ensure that work is best suited for each audience segment.

• Motivation of the crowd – Motivating factors must be determined and honored to ensure participation of the target crowd. Financial compensation is most often used to elicit a task response. For low complexity tasks, we have found it valuable to conduct pilots with a limited number of tasks to assess adequacy of compensation and interest of the crowd.

The tasks

Proper structuring and articulation of the tasks are among the most important elements of an effective crowdsourcing program.

• Task clarity – Responses are most useful and fit-for-purpose when tasks are discrete and clear. When possible, we seek to divide complex tasks into simpler tasks and use basic and clear vocabulary.

• Response integrity – Avoid use of compound questions and minimize the use of free-form data entry fields when possible. Responses should be understandable and require no subjective interpretation.

The platform

No single platform is best for all needs. Considerations in selecting a platform include: participant specialization and makeup, platform purpose or niche, capabilities of tools for defining tasks, capabilities of tools for collecting and analyzing responses, ability to screen and target respondents, ability to automate task generation for similar tasks, and flexibility of payment controls. Examples of payment controls are automated approvals and payments, payment denial, and payment distribution over a set of respondents. Costs are also a consideration in selecting a platform. As a point of reference, Amazon Mechanical Turk charges a commission of 10% on all amounts paid to respondents.
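As a simple illustration of how the commission affects budgeting, the calculation below estimates the total cost of a run similar to the patient burden scoring scenario; the per-response reward is an assumed round figure within the 40-75 cent range reported in Table 2.

# Back-of-envelope cost estimate for a Mechanical Turk run, applying the 10%
# commission on amounts paid to respondents. Figures are illustrative and
# roughly mirror the patient burden scoring scenario (Table 2).
responses = 500
reward_per_response = 0.60       # USD, assumed; Table 2 reports 40-75 cents
commission_rate = 0.10           # platform fee on Worker payments

worker_payments = responses * reward_per_response
total_cost = worker_payments * (1 + commission_rate)
print(f"Worker payments: ${worker_payments:.2f}; total with fees: ${total_cost:.2f}")
# Worker payments: $300.00; total with fees: $330.00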

The results

The nature of the results affects a number of crowdsourcing program dimensions, such as the choice of target platform. Crowdsourcing programs that assemble a large number of responses are highly dependent on platform capabilities to constrain and validate data as well as collect and distribute data in bulk. As a rule of thumb, complex tasks that are not easily represented in a structured form are often candidates for a specialized crowdsourcing platform (e.g., platforms for graphic design, software development, and complex research & development challenges).

Supportive factors

Results quality

The validity, accuracy, and sufficiency of results are essential considerations in conducting a crowdsourcing program. These considerations must be understood and sufficiently planned for. Examples of data quality assurance techniques have been discussed within the scenarios. The list below includes a number of such techniques for consideration when designing a crowdsourcing data quality plan.

• Task redundancy – As demonstrated in the third scenario, the same task can be submitted to multiple respondents, which provides a basis for consistency checking across responses. A consistent response among respondents does not guarantee quality but may provide a greater level of assurance when combined with other techniques.


• Task internal consistency – Tasks that are structured via forms or questionnaires may take advantage of traditional surveying techniques whereby an answer to one question implies an answer to another, thus forming a basis for ensuring thoughtful and adequate responses.

• Results evidence – References, test results, or evidence can be required of respondents to confirm validity of work.

• Crowd validation – As circular as it sounds, one crowd can be used to validate prior work of another crowd. For example, if the first round of crowdsourcing responses yielded references for quality checking, a second round of crowdsourcing can be used to validate applicability of references. The second round is also subject to quality planning. When used properly, this approach will improve, though not necessarily guarantee, quality.

• Respondent qualification – Platforms such as Amazon Mechanical Turk allow task submitters to specify respondent qualifications, such as a minimum approval rate (Amazon.com, 2011). Use of such features is highly recommended; a sketch of one such requirement follows this list. Additionally, worker questionnaires or simple “qualification tests” can be issued to test worker fitness. Future work can be routed directly to such qualified individuals.

• Results sampling – Finally, in most cases it will be necessary to review at least a portion of the results. The proportion to review will vary depending on a number of factors such as the required accuracy and confidence levels, and the impact of inaccurate data.
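As referenced in the respondent qualification item above, the snippet below sketches a minimum approval-rate requirement as it would be passed to create_hit via the QualificationRequirements parameter. The qualification type ID shown is assumed to be Mechanical Turk's system qualification for percent of assignments approved; verify it against current AWS documentation before relying on it.

# Sketch of a respondent qualification: require a minimum assignment approval
# rate before a Worker may accept the HIT. The QualificationTypeId is assumed
# to be MTurk's system qualification for percent of assignments approved.
min_approval_rate = {
    "QualificationTypeId": "000000000000000000L0",
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [95],             # require a 95%+ approval history
    "ActionsGuarded": "Accept",        # unqualified Workers cannot accept
}
# Used as: mturk.create_hit(..., QualificationRequirements=[min_approval_rate])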

Confidentiality

Confidentiality is a key consideration and concern for many types of crowdsourcing projects. In some cases such concerns can be mitigated by restructuring questions or dividing work into sufficiently granular components such that a clear view of the confidential information does not emerge. In our case, we have sought to maintain confidentiality with a willingness to incur a modest level of risk to best tap into the power of crowds.

Ethics

Finally, we encourage organizations to consider ethical dimensions of crowdsourcing. Examples include privacy and fairness of compensation. Remuneration for tasks must be sufficient to elicit responses but should also be fair. In fact, we estimate an hourly payment rate from our tasks to ensure payment falls within a fair compensation range. Others are encouraged to do likewise. While arriving at the ideal payment range is a complex matter given the global audience, effort is warranted to ensure fairness.
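A simple way to apply this check is to convert the per-task reward and an estimated completion time into an implied hourly rate, as sketched below; the reward, time estimate, and comparison range are illustrative assumptions.

# Convert a per-task reward and an estimated completion time into an implied
# hourly rate, to be compared against a compensation range judged fair for
# the target crowd. All figures are illustrative assumptions.
reward_per_task = 0.60          # USD paid per completed task
estimated_minutes = 4.0         # expected time to complete one task

implied_hourly_rate = reward_per_task * (60.0 / estimated_minutes)
print(f"Implied hourly rate: ${implied_hourly_rate:.2f}/hour")   # $9.00/hour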

Summary

Crowdsourcing has become more feasible and economical due to the emergence and adoption of crowdsourcing platforms. We have demonstrated the broad applicability of crowdsourcing by way of simple and complex scenarios within the healthcare domain.

A critical component of a crowdsourcing program is effective planning. Effective planning will ensure that the proper crowd is engaged, tasks are clear, compensation is appropriate, and results are trustworthy and fit-for-purpose. Before embarking on a crowdsourcing program, an organization is advised to consider supportive factors such as information privacy, confidentiality, and the ethical nature of the program and compensation.

Finally, the most effective approach to understanding crowdsourcing is through an actual crowdsourcing pilot. The barrier to entry is low and requirements are few: a simple need, a credit card, and a few hours of time. A low cost, low risk pilot is the best first step.


References

1. Amazon.com. (2011). Amazon Mechanical Turk Requester Best Practice Guide. Amazon Web Services LLC.

2. Glatz, J. (2010, June 3). Canning food, from Napoleon to now. Illinois Times.

3. Howe, J. (2008). Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business. The International Achievement Institute.

4. Howe, J. (2006, June). The Rise of Crowdsourcing. WIRED Magazine.

5. Loten, A. (2012, Feb 28). Small firms, Start-ups Drive Crowdsourcing Growth. The Wall Street Journal.

6. Thomas, S. (2011, Sep 15). 9 examples of crowdsourcing, before ‘crowdsourcing’ existed. memeburn.

7. Webster, J. G. (2008). Nielsen Ratings. International Encyclopedia of Communication Online.

8. Wong, M. (2009, Feb 11). Pity The Poor Encyclopedia. CBS News.


About the authors

Joseph Bastante
Head of Strategy & Enterprise Architecture, Quintiles Global IT
For over twenty years, Joe has provided guidance and consultation on the strategic use of technology, focusing specifically on the healthcare domain for the past eleven years. In his current role and with the support of his team, Joe is responsible for the overarching global IT strategy and roadmap, providing guidance and oversight for enterprise technology investments, and conducting architecture planning for major business initiatives. Joe is a former senior consultant and a past leader and veteran of a Fortune 50 healthcare company. Joe holds an M.S. in Computer Science as well as a number of certifications from organizations such as The American Society for Quality, The Open Group, The Project Management Institute, and Toastmasters International.

Joseph Goodgame
Head of Research and Development Information and Data, Quintiles Global IT
Joe has been working extensively in the data and information space for over 20 years. Starting his career in support functions for analytics and databases, he quickly moved into development and architecture-facing roles. After time in federal/central government, he moved into outsourcing with Sema Group, focusing on large data and geospatial mapping systems, before joining a startup focused on fraud detection using neural networks and rules-based profiling of millions of call records and accounts in real time; this project also included one of the early multi-terabyte Oracle installations.



Greg Klopper
Enterprise Architect/Data Scientist, Quintiles Global IT
For the last fifteen years Greg has been involved in a variety of cross-domain projects at all phases from inception to implementation, in fields as diverse as automotive, digital signage, cloud computing, GIS, and healthcare. His current responsibilities include work with “Big Data”, mathematical models, and creating original solutions to a variety of challenges within the process of clinical trial design, in-silico trial simulation, and EHR data mining. Greg studied Engineering Mathematics and Computer Sciences at the University of Louisville and invests his time regularly to further his education in the areas of Machine Learning, Process Modeling, Artificial Intelligence, and Cloud Computing.

Angela Hill
PharmD, Sr. Director, IT
After receiving her Doctor of Pharmacy (PharmD) degree from Butler University, Angela spent 14 years in the pharmaceutical industry, with a focus on Phase IV clinical research. During that time, she designed and oversaw clinical trials, advised on commercialization strategies, trained US and global physicians, and co-authored 15 publications. Her last two years in pharma were spent as an integral part of an innovative IT R&D initiative creating analytical insights to enable better, knowledge-based decision making in clinical trial planning and design. She joined Quintiles in 2012 to lead a new IT R&D development initiative using big data to create innovative solutions.



Contact us
Toll free: 1 866 267 4479
Direct: +1 973 850 7571
Website: www.quintiles.com
Email: [email protected]

Copyright © 2014 Quintiles. All rights reserved. 15.0018-1-10.14