

“The Whole Is Greater Than the Sum of Its Parts”: Optimization in Collaborative Crowdsourcing

Habibur Rahman, Senjuti Basu Roy, Saravanan Thirumuruganathan, Sihem Amer-Yahia, Gautam Das. UT Arlington, UW Tacoma, CNRS, LIG

[email protected], {habibur.rahman,saravanan.thirumuruganathan}@mavs.uta.edu, [email protected], [email protected]

ABSTRACT

In this work, we initiate the investigation of optimization opportunities in collaborative crowdsourcing. Many popular applications, such as collaborative document editing, sentence translation, or citizen science, resort to this special form of human-based computing, where crowd workers with appropriate skills and expertise are required to form groups to solve complex tasks. Central to any collaborative crowdsourcing process is the aspect of successful collaboration among the workers, which, for the first time, is formalized and then optimized in this work. Our formalism considers two main collaboration-related human factors, affinity and upper critical mass, appropriately adapted from organizational science and social theories. Our contributions are (a) proposing a comprehensive model for collaborative crowdsourcing optimization, (b) rigorous theoretical analyses to understand the hardness of the proposed problems, and (c) an array of efficient exact and approximation algorithms with provable theoretical guarantees. Finally, we present a detailed set of experimental results stemming from two real-world collaborative crowdsourcing applications using Amazon Mechanical Turk, and conduct synthetic data analyses of the scalability and qualitative aspects of our proposed algorithms. Our experimental results successfully demonstrate the efficacy of our proposed solutions.

1. INTRODUCTION

The synergistic effect of collaboration in group-based activities is widely accepted in socio-psychological research and traditional team-based activities [19, 18, 4]. The very fact that the collective yield of a group is higher than the sum of the contributions of the individuals is often described as “the whole is greater than the sum of its parts” [19, 18]. Despite its immense potential, the transformative effect of “collaboration” remains largely unexplored in crowdsourcing [29] complex tasks (such as document editing, product design, sentence translation, and citizen science), which are acknowledged as some of the most promising areas of next-generation crowdsourcing. In this work, we investigate the optimization aspects of this specific form of human-based computation that involves people working in groups to solve complex problems that require collaboration and a variety of skills. We believe our work is also the first to formalize optimization in collaborative crowdsourcing.

The optimization goals of collaborative crowdsourcing are akin to those of its traditional micro-task based counterparts [16, 21]: quickly maximize the quality of the completed tasks, while minimizing cost, by assigning appropriate tasks to appropriate workers. However, the “plurality optimization” based solutions typically designed for micro-task based crowdsourcing are inadequate for optimizing collaborative tasks, as the latter require workers with certain skills to work in groups and “build” on each other’s contributions for tasks that do not typically have “binary” answers. Prior work in collaborative crowdsourcing has proposed the importance of human factors to characterize workers, such as workers’ skills and wages [42, 43]. Additional human factors, such as worker-worker affinity [47, 30], are also acknowledged to quantify workers’ collaboration effectiveness. Similarly, social theories widely underscore the importance of upper critical mass [27] for group collaboration, which is a constraint on the size of groups beyond which collaboration effectiveness diminishes [27, 39]. However, no further attempts have been made to formalize this variety of human factors in a principled manner to optimize the outcome of a collaborative crowdsourcing environment.

Our first significant contribution lies in appropriately incorporating the interplay of this variety of complex human factors into a set of well-formulated optimization problems. To achieve the aforementioned optimization goals, it is therefore essential to form, for each task, a group of workers who collectively hold the skills required for the task, collectively cost less than the task’s budget, and collaborate effectively. Using the notions of affinity and upper critical mass, we formalize the flat model of work coordination [26] in collaborative crowdsourcing as a graph with nodes representing workers and edges labeled with pairwise affinities. A group of workers is a clique in the graph whose size does not surpass the critical mass imposed by a task. A large clique (group) may further be partitioned into subgroups (each a clique of smaller size satisfying critical mass) to complete a task, because of the task’s magnitude. Each clique has an intra- and an inter-affinity, measuring respectively the level of cohesion that the clique has internally and with other cliques. A clique with high intra-affinity implies that its members collaborate well with one another. Two cliques with a high inter-affinity between them imply that these two groups of workers work well together. Our optimization problem reduces to finding a clique that maximizes intra-affinity, satisfies the skill threshold across multiple domains, satisfies the cost limit, and maximizes inter-affinity when partitioned into smaller cliques. We note that no existing work on team formation in social networks [3, 33] or collaborative crowdsourcing [29, 47, 30] has attempted similar formulations.

    Our second endeavor is computational. We show

arXiv:1502.05106v2 [cs.DB] 12 Apr 2015

that solving the complex optimization problem explained above is prohibitively expensive and incurs very high machine latency. Such high latency is unacceptable for a real-time crowdsourcing platform. Therefore, we propose an alternative strategy, Grp&Splt, that decomposes the overall problem into two stages and is a natural alternative to our original problem formulation. Even though this staged formulation is also computationally intractable in the worst case, it allows us to design instance-optimal exact algorithms that work well in the average case, as well as efficient approximation algorithms with provable bounds. In stage 1 (referred to as Grp), we first form a single group of workers by maximizing intra-affinity, while satisfying the skill and cost thresholds. In stage 2 (referred to as Splt), we decompose this large group into smaller subgroups, such that each satisfies the group-size constraint (imposed by critical mass) and the inter-affinity across subgroups is maximized. Despite its NP-hardness [14], we propose an instance-optimal exact algorithm OptGrp and a novel 2-approximation algorithm ApprxGrp for the stage-1 problem. Similarly, we prove the NP-hardness of, and propose a 3-approximation algorithm Min-Star-Partition for, a variant of the stage-2 problem.

Finally, we conduct a comprehensive experimental study with two different applications (sentence translation and collaborative document editing) using real-world data from Amazon Mechanical Turk, and present rigorous scalability and quality analyses using synthetic data. Our experimental results demonstrate that our formalism is effective in aptly modeling the behavior of collaborative crowdsourcing and that our proposed solutions are scalable.

    In summary, this work makes the following contributions:

1. Formalism: We initiate the investigation of optimization opportunities in collaborative crowdsourcing, and identify and incorporate a variety of human factors in well-formulated optimization problems.

2. Algorithmic contributions: We present a comprehensive theoretical analysis of our problems and approaches. We analyze the computational complexity of our problems and propose a principled staged solution. We propose exact instance-optimal algorithms as well as efficient approximation algorithms with provable approximation bounds.

3. Experiments: We present a comprehensive set of experimental results (two real applications as well as synthetic experiments) that demonstrate the effectiveness of our proposed solutions.

The paper is organized as follows. Sections 2, 3, and 4 discuss a database application of collaborative crowdsourcing, our data model, problem formalization, and initial solutions. Sections 5 and 6 describe our theoretical analyses and proposed algorithmic solutions. Experiments are described in Section 7, related work in Section 8, and conclusions are presented in Section 9. Additional results are presented in the appendix.

2. AN APPLICATION

Sentence translation [7, 47, 30] is a frequently encountered application of collaborative crowdsourcing, where the objective is to use the crowd to build a translation database of sentences in different languages. Such databases later serve as training datasets for supervised machine learning algorithms for automated sentence translation.

        u1     u2     u3     u4     u5     u6
d1      0.66   1.0    0.53   0.0    0.13   0.0
d2      0.0    0.0    0.66   0.73   0.66   0.13
d3      0.0    0.33   0.53   0.0    0.8    0.93
Wage    0.4    0.3    0.7    0.8    0.5    0.8

Table 1: Workers’ skill and wage table

As a running example for this paper, consider a translation task t designed for translating an English video clip to French. Typically, such translation tasks follow a 3-step process [47, 30]: English speakers first translate the video into English, professional editors edit the translation, and finally workers with proficiency in both English and French translate the English into French. Consequently, such a task requires skills in 3 different domains: English comprehension (d1), English editing (d2), and French translation ability (d3).

In our optimization setting, each task t has a requirement of minimum skill per domain and a maximum cost budget, and workers should collaborate with each other (e.g., to correct each other’s mistakes [47]); the collaboration effectiveness is quantified as the affinity of the group. Some aspects of our formulation have similarities with team formation problems in social networks [3]. The notion of affinity has been identified in related work on sentence translation tasks [47, 30], as well as in team formation problems [3].

However, if the group is too large, the effectiveness of collective action diminishes [27, 39] while undertaking the translation task, as an unwieldy group of workers fails to find effective assistance from their peers [47, 30]. Therefore, each task t is associated with a corresponding upper critical mass constraint on the size of an effective group; i.e., a large group may need to be further decomposed into multiple subgroups in order to satisfy that constraint. A study of the importance of the upper critical mass constraint in the crowdsourcing context, as well as of how to set its (application-specific) value, raises important challenges that are best left to domain experts; however, we experimentally study this issue for certain applications such as sentence translation.

When this task arrives, imagine that there are 6 workers u1, u2, . . . , u6 available in the crowdsourcing platform. Each worker has a skill value in each of the three skill domains described above, and a wage they expect. Additionally, the workers’ cohesiveness, or affinity, is also provided. These human factors of the workers are summarized in Tables 1 and 2, and the task requirements of t (including thresholds on aggregated skill for each domain, total cost, and critical mass) are presented in Table 3 and are further described in the next section.

The objective is to form a highly cohesive group G of workers that satisfies the lower bound on skill of the task and the upper bound on cost. Due to the upper critical mass constraint, G may further be decomposed into multiple subgroups. After that, each subgroup undertakes a subset of sentences to translate. Once all the subgroups finish their respective efforts, their contributions are merged. Therefore, both the overall group and its subgroups must be cohesive. Incorporation of upper critical mass makes our problem significantly different from the body of prior work [3], as we may have to create a group further decomposed into multiple subgroups, instead of a single group.

        u1     u2     u3     u4     u5     u6
u1      0.0    1.0    0.66   0.66   0.85   0.66
u2      1.0    0.0    0.66   0.85   0.66   0.85
u3      0.66   0.66   0.0    0.4    0.66   0.40
u4      0.66   0.85   0.4    0.0    0.4    0.0
u5      0.85   0.66   0.66   0.4    0.0    0.4
u6      0.66   0.85   0.4    0.0    0.4    0.0

Table 2: Workers’ Distance Matrix

Q1     Q2     Q3     C      K
1.8    1.4    1.66   3.0    3

Table 3: Task Description
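Assuming a straightforward Python encoding of Tables 1 and 3 (the dictionary names and the `is_feasible` helper below are ours, for illustration, not the paper’s), the three requirements a group must meet can be sketched as:

```python
# Hypothetical encoding of Table 1 (per-domain skills d1..d3, wages)
# and Table 3 (task thresholds); names are ours, not the paper's.
SKILLS = {
    "u1": (0.66, 0.00, 0.00), "u2": (1.00, 0.00, 0.33),
    "u3": (0.53, 0.66, 0.53), "u4": (0.00, 0.73, 0.00),
    "u5": (0.13, 0.66, 0.80), "u6": (0.00, 0.13, 0.93),
}
WAGE = {"u1": 0.4, "u2": 0.3, "u3": 0.7, "u4": 0.8, "u5": 0.5, "u6": 0.8}
Q, C, K = (1.8, 1.4, 1.66), 3.0, 3  # skill thresholds, cost budget, critical mass
EPS = 1e-9                          # tolerance for floating-point sums

def is_feasible(group, subgroups=None):
    """Check the skill lower bounds, cost upper bound, and critical-mass limit."""
    skill_ok = all(sum(SKILLS[u][l] for u in group) >= Q[l] - EPS
                   for l in range(len(Q)))
    cost_ok = sum(WAGE[u] for u in group) <= C + EPS
    size_ok = all(len(g) <= K for g in (subgroups or [group]))
    return skill_ok and cost_ok and size_ok

print(is_feasible({"u1", "u2", "u3"}))  # False: the d2 sum is only 0.66 < 1.4
print(is_feasible({"u1", "u2", "u3", "u4", "u6"},
                  [{"u1", "u2", "u4"}, {"u3", "u6"}]))  # True
```

Note the explicit tolerance `EPS`: with these wages, a group costing exactly the budget sums to slightly more than 3.0 in floating point.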

3. DATA MODEL

We introduce our data model and preliminaries that will serve as a basis for our problem definition.

3.1 Preliminaries

Domains: We are given a set of domains D = {d1, d2, . . . , dm} denoting knowledge topics. In the running example of Section 2, there are 3 different domains: English comprehension (d1), English editing (d2), and French translation ability (d3).

Workers: We assume a set U = {u1, u2, . . . , un} of n workers available in the crowdsourcing platform. The example in Section 2 describes a crowdsourcing platform with 6 workers.

Worker Group: A worker group G consists of a subset of workers from U, i.e., G ⊆ U.

Skills: A skill is the knowledge of a particular skill domain in D, quantified on a continuous [0, 1] scale. It is associated with both workers and tasks. The skill of a worker represents the worker’s expertise/ability on a topic. The skill of a task represents the minimum knowledge requirement/quality for that task. A value of 0 for a skill reflects no expertise of a worker for that skill. For a task, 0 reflects no requirement for that skill.

How to learn the skills of workers is an important and independent research problem in its own right. Most related work has relied on learning workers’ skills from gold-standard or benchmark datasets using pre-qualification tests [10, 20]. As we describe in detail in Section 7.1, we also learn the workers’ skills by designing pre-qualification tests using benchmark datasets.

Collaborative Tasks: A collaborative task t has the following characteristics: a minimum knowledge threshold Qi per domain di in D, a maximum cost budget C for hiring workers to achieve t, and an upper critical mass K, denoting the maximum number of workers who can effectively collaborate inside a group to complete t. Specifically, t is characterized by a vector ⟨Q1, Q2, . . . , Qm, C, K⟩ of length m + 2. For the example in Section 2, the 3 domains (m = 3) and their respective skill requirements, the cost C, and the critical mass K of the task are described in Table 3. A task is considered complete if it attains its skill requirement over all domains and satisfies all the constraints.

3.2 Human Factors

A worker is described by a set of human factors. We consider two types of factors: factors that describe individual workers’ characteristics and factors that characterize an individual’s ability to work with fellow workers. Our contribution is in appropriately adapting these factors to collaborative crowdsourcing from multi-disciplinary prior works such as team formation [3, 33] and psychology research [27, 39].

3.2.1 Individual Human Factors: Skill and Wage

Individual workers in a crowdsourcing environment are characterized by their skill and wage.

Skill: For each knowledge domain di, u^di ∈ [0, 1] is the expertise level of worker u in di. Skill expertise reflects the quality that the worker’s contribution has on a task accomplished by that worker.

Wage: wu ∈ [0, 1] is the minimum amount of compensation for which a worker u is willing to complete a task. We choose a simple model where a worker specifies a single wage value independent of the task at hand.

Table 1 presents the respective skills of the 6 workers in the 3 different domains and their individual wages for the running example.

3.2.2 Group-based Human Factors: Affinities

Although related work in collaborative crowdsourcing acknowledges the importance of workers’ affinity in enabling effective collaboration [47, 30], there is no attempt to formalize the notion any further. A worker’s effectiveness in collaborating with her fellow workers is measured as affinity. We adopt an affinity model similar to group formation problems in social networks [34, 3], where the atomic unit of affinity is pairwise, i.e., a measure of cohesiveness between every pair of workers. After that, we propose different ways to capture intra-group and inter-group affinities.

Pairwise affinity: The affinity between two workers ui and uj, aff(ui, uj), can be calculated by capturing the similarity between the workers using simple socio-demographic attributes, such as region, age, and gender, as done in previous work [47], as well as more complex psychological characteristics [40]. For our purposes, we normalize pairwise affinity values to fit in [0, 1] and use a notion of worker-worker distance instead, where dist(ui, uj) = 1 - aff(ui, uj). Thus a smaller distance between workers ensures a better collaboration. Table 2 presents the pairwise distances of all 6 workers for the running example in Section 2. As will be clear later, the notion of distance rather than affinity enables the design of better algorithms for our purposes.

Intra-group affinity: For a group G, its intra-group affinity measures the collaboration effectiveness among the workers in G. Here again we use distance, and compute intra-group distance in one of two natural ways: computing the diameter of G as the largest distance between any two workers in G, or aggregating all pairwise worker distances in G:

DiaDist(G) = Max_{ui,uj ∈ G} dist(ui, uj)
SumDist(G) = Σ_{ui,uj ∈ G} dist(ui, uj)
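With Table 2 encoded as a Python dictionary (an encoding of ours, purely for illustration), the two intra-group measures are direct to compute:

```python
from itertools import combinations

# Pairwise worker distances from Table 2, keyed by sorted pair (our encoding).
DIST = {
    ("u1", "u2"): 1.0,  ("u1", "u3"): 0.66, ("u1", "u4"): 0.66,
    ("u1", "u5"): 0.85, ("u1", "u6"): 0.66, ("u2", "u3"): 0.66,
    ("u2", "u4"): 0.85, ("u2", "u5"): 0.66, ("u2", "u6"): 0.85,
    ("u3", "u4"): 0.4,  ("u3", "u5"): 0.66, ("u3", "u6"): 0.4,
    ("u4", "u5"): 0.4,  ("u4", "u6"): 0.0,  ("u5", "u6"): 0.4,
}

def dist(a, b):
    return 0.0 if a == b else DIST[tuple(sorted((a, b)))]

def dia_dist(G):
    """DiaDist(G): diameter, i.e. the largest pairwise distance inside G."""
    return max(dist(a, b) for a, b in combinations(sorted(G), 2))

def sum_dist(G):
    """SumDist(G): sum of all pairwise distances inside G."""
    return sum(dist(a, b) for a, b in combinations(sorted(G), 2))

print(dia_dist({"u1", "u2", "u3"}))            # 1.0 (the u1-u2 pair)
print(round(sum_dist({"u1", "u2", "u3"}), 2))  # 2.32
```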

For both definitions, a smaller value is better.

Inter-group affinity: When a group violates the upper critical mass constraint [27], it needs to be decomposed into multiple smaller ones. The resulting subgroups need to work together to achieve the task. Given two subgroups G1, G2 split from a large group G, their collaboration effectiveness is captured by computing their inter-group affinities. Here again, we use distance instead of affinity. More concretely, the inter-group distance is defined in one of two natural ways: either the largest distance between any two workers across the subgroups, or the aggregation of all pairwise worker distances across subgroups:

DiaInterDist(G1, G2) = Max_{ui ∈ G1, uj ∈ G2} dist(ui, uj)
SumInterDist(G1, G2) = Σ_{ui ∈ G1, uj ∈ G2} dist(ui, uj)

This can be generalized to more than two subgroups: if there are x subgroups, the overall inter-group affinity is the summation of the inter-group affinity over all C(x, 2) possible pairs.
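Under the same kind of dictionary encoding of Table 2 (again ours, for illustration), the inter-group measures and their generalization to x subgroups look like:

```python
from itertools import combinations

# Pairwise worker distances from Table 2, keyed by sorted pair (our encoding).
DIST = {
    ("u1", "u2"): 1.0,  ("u1", "u3"): 0.66, ("u1", "u4"): 0.66,
    ("u1", "u5"): 0.85, ("u1", "u6"): 0.66, ("u2", "u3"): 0.66,
    ("u2", "u4"): 0.85, ("u2", "u5"): 0.66, ("u2", "u6"): 0.85,
    ("u3", "u4"): 0.4,  ("u3", "u5"): 0.66, ("u3", "u6"): 0.4,
    ("u4", "u5"): 0.4,  ("u4", "u6"): 0.0,  ("u5", "u6"): 0.4,
}

def dist(a, b):
    return 0.0 if a == b else DIST[tuple(sorted((a, b)))]

def dia_inter_dist(G1, G2):
    """DiaInterDist: largest distance between any two workers across subgroups."""
    return max(dist(a, b) for a in G1 for b in G2)

def sum_inter_dist(G1, G2):
    """SumInterDist: sum of all cross-subgroup pairwise distances."""
    return sum(dist(a, b) for a in G1 for b in G2)

def overall_inter_dist(subgroups):
    """Generalization to x subgroups: sum over all C(x, 2) subgroup pairs."""
    return sum(sum_inter_dist(g1, g2)
               for g1, g2 in combinations(subgroups, 2))

G1, G2 = {"u1", "u2", "u4"}, {"u3", "u6"}
print(dia_inter_dist(G1, G2))            # 0.85 (the u2-u6 pair)
print(round(sum_inter_dist(G1, G2), 2))  # 3.23
```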

4. OPTIMIZATION

Problem Settings: For each collaborative task, we intend to form the most appropriate group of workers from the available worker pool. A collaborative crowdsourcing task has skill requirements in multiple domains and a cost budget, which is similar to the requirements of collaborative tasks in team formation problems [34]. Then, we adapt the flat-coordination model of worker interactions, which is considered important in prior work on team formation [3], as the coordination cost, and in collaborative crowdsourcing [47] itself, as the turker-turker affinity model. However, unlike previous work, we attempt to fully explore the potential of group synergy [45] and how it yields the maximum qualitative effect in group-based efforts, by maximizing affinity among the workers (or minimizing distance). Finally, we intend to investigate the effect of upper critical mass in the context of collaborative crowdsourcing as a constraint on group size, beyond which the group must be decomposed into multiple subgroups that are cohesive inside and across. Indeed, our objective function is designed to form a group (possibly further decomposed into a set of subgroups) to undertake a specific task that achieves the highest qualitative effect, while satisfying the cost constraint.

(1) Qualitative effect of a group: Intuitively, the overall qualitative effect of a formed group undertaking a specific task is a function of the skills of the workers and their collaboration effectiveness. Learning this function itself is challenging, as it requires access to adequate training data and domain knowledge. In our initial effort, we therefore make a reasonable simplification, where we seek to maximize group affinity and pose quality as a hard constraint¹. Existing literature (indicatively [45]) informs us that aggregation is a mechanism that turns private judgments (in our case, individual workers’ contributions) into a collective decision (in our case, the final translated sentences), and is one of the four pillars of the wisdom of the crowd. For complex tasks like sentence translation or document editing, there is no widely accepted mathematical function for aggregation. We choose sum to aggregate the skills of the workers, which must satisfy the lower bound on the quality of the task. This simplest and yet most intuitive function for transforming individual contributions into a collective result has been adopted in many previous works [3, 34, 13]. Moreover, this simpler function allows us to design efficient algorithms. Exploring other, more complex functions (e.g., a multiplicative function), or learning them, is deferred to future work.

(2) Upper critical mass: Sociological theories widely support the notion of critical mass [27, 39] by reasoning that large groups are less likely to support collective action. However, whether the effect of critical mass should be imposed as a hard constraint, or should have more of a gradual diminishing-return effect, is itself a research question. For simplicity, we consider upper critical mass as a hard constraint and evaluate its effectiveness empirically for different values. Exploring more sophisticated functions to capture critical mass is deferred to future work.

¹ Notice that posing affinity as a constraint does not fully exploit the effect of group synergy.

Problem 1. AffAware-Crowd: Given a collaborative task t, the objective is to form a worker group G, further partitioned into a set of x subgroups G1, G2, . . . , Gx (if needed), for the task t, that minimizes the aggregated intra-distance of the workers, as well as the aggregated inter-distance across the subgroups of G; G must satisfy the skill and cost thresholds of t, and each subgroup Gi must satisfy the upper critical mass constraint of t. Of course, if the group G itself satisfies the critical mass constraint, no further partitioning of G is needed, giving rise to a single worker group. As explained above, the quality of a task is defined as an aggregation (sum) of the skills of the workers [3, 34]. Similarly, the cost of the task is the additive wage of all the workers in G.
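On an instance as small as the running example, this problem can be solved by exhaustive search. The sketch below (our own, with our encoding of Tables 1-3, and using the DiaDist-plus-SumInterDist objective variant adopted later in the paper) enumerates every feasible group and every partition into subgroups of size at most K. It is purely illustrative: the algorithms proposed in this paper exist precisely because such enumeration explodes on realistic worker pools.

```python
from itertools import combinations

# Hypothetical encoding of Tables 1-3; names are ours, not the paper's.
SKILLS = {
    "u1": (0.66, 0.00, 0.00), "u2": (1.00, 0.00, 0.33),
    "u3": (0.53, 0.66, 0.53), "u4": (0.00, 0.73, 0.00),
    "u5": (0.13, 0.66, 0.80), "u6": (0.00, 0.13, 0.93),
}
WAGE = {"u1": 0.4, "u2": 0.3, "u3": 0.7, "u4": 0.8, "u5": 0.5, "u6": 0.8}
DIST = {
    ("u1", "u2"): 1.0,  ("u1", "u3"): 0.66, ("u1", "u4"): 0.66,
    ("u1", "u5"): 0.85, ("u1", "u6"): 0.66, ("u2", "u3"): 0.66,
    ("u2", "u4"): 0.85, ("u2", "u5"): 0.66, ("u2", "u6"): 0.85,
    ("u3", "u4"): 0.4,  ("u3", "u5"): 0.66, ("u3", "u6"): 0.4,
    ("u4", "u5"): 0.4,  ("u4", "u6"): 0.0,  ("u5", "u6"): 0.4,
}
Q, C, K, EPS = (1.8, 1.4, 1.66), 3.0, 3, 1e-9

def dist(a, b):
    return 0.0 if a == b else DIST[tuple(sorted((a, b)))]

def partitions(items, cap):
    """Yield every partition of `items` into blocks of size <= cap."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for size in range(min(cap - 1, len(rest)) + 1):
        for mates in combinations(rest, size):
            remaining = [u for u in rest if u not in mates]
            for tail in partitions(remaining, cap):
                yield [(head,) + mates] + tail

def objective(G, blocks):
    """DiaDist(G) plus summed SumInterDist over all subgroup pairs."""
    dia = max((dist(a, b) for a, b in combinations(G, 2)), default=0.0)
    inter = sum(dist(a, b) for b1, b2 in combinations(blocks, 2)
                for a in b1 for b in b2)
    return dia + inter

best = None
workers = sorted(SKILLS)
for r in range(1, len(workers) + 1):
    for G in combinations(workers, r):
        if any(sum(SKILLS[u][l] for u in G) < Q[l] - EPS for l in range(3)):
            continue                      # skill threshold violated
        if sum(WAGE[u] for u in G) > C + EPS:
            continue                      # cost budget violated
        for blocks in partitions(list(G), K):
            val = objective(G, blocks)
            if best is None or val < best[0]:
                best = (val, G, blocks)

print(round(best[0], 2), best[1])  # 4.23 ('u1', 'u2', 'u3', 'u4', 'u6')
```

On this instance the search confirms the optimum reported for the ILP in Section 4.2.1: objective value 4.23, achieved by the group {u1, u2, u3, u4, u6}.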

4.1 Optimization Models

Given the high-level definition above, we propose multiple optimization objective functions based on the different inter- and intra-distance measures defined in Section 3.

For a group G, we calculate intra-distance in one of two possible ways: DiaDist() or SumDist(). If G is further partitioned to satisfy the upper critical mass constraint, then we also want to enable strong collaboration across the subgroups by minimizing inter-distance. For the latter, inter-distance is calculated using one of DiaInterDist() or SumInterDist(). Even though there may be many complex formulations that combine these two factors, in our initial effort our overall objective function is a simple sum of these two factors, which we wish to minimize. This gives rise to 4 possible optimization objectives.

DiaDist(), DiaInterDist():
Minimize { DiaDist(G) + Max_{Gi,Gj ⊆ G} DiaInterDist(Gi, Gj) }

SumDist(), DiaInterDist():
Minimize { SumDist(G) + Max_{Gi,Gj ⊆ G} DiaInterDist(Gi, Gj) }

DiaDist(), SumInterDist():
Minimize { DiaDist(G) + Σ_{Gi,Gj ⊆ G} SumInterDist(Gi, Gj) }

SumDist(), SumInterDist():
Minimize { SumDist(G) + Σ_{Gi,Gj ⊆ G} SumInterDist(Gi, Gj) }

where each of these objective functions has to satisfy the following three constraints, on skill, cost, and critical mass respectively:

Σ_{u ∈ G} u^di ≥ Qi    ∀di ∈ D
Σ_{u ∈ G} wu ≤ C
|Gi| ≤ K    ∀i ∈ {1, 2, . . . , x}

For brevity, the rest of our discussion only considers DiaDist() for intra-distance and SumInterDist() for inter-distance. We refer to this variant of the problem as AffAware-Crowd. We note that our proposed optimal solution in Section 4 could be easily extended to the other combinations as well.

    Theorem 1. Problem AffAware-Crowd is NP-hard [14].

The detailed proof is provided in Section B of the appendix.

4.2 Algorithms for AffAware-Crowd

Our optimization problem attempts to appropriately capture the complex interplay among various important factors. The proof of Theorem 1 in Section B of the appendix shows that even the simplest variant of the optimization problem is NP-hard. Despite the computational hardness, we attempt to stay as principled as possible in our technical contributions and algorithm design. Towards this end, we propose two alternative directions: (a) We investigate an integer linear programming (ILP) [44] formulation to optimally solve our original overarching optimization problem. We note that even translating the problem to an ILP is non-trivial, because the subgroups inside the large group are also unknown and are determined by the solution. (b) Since the ILP is prohibitively expensive (as our experimental results show), we propose an alternative strategy that is natural to our original formulation, referred to as Grp&Splt. Grp&Splt decomposes the original problem into two phases: in the Grp phase, a single group is formed that satisfies the skill and cost thresholds but ignores the upper critical mass constraint. Then, in the Splt phase, we partition this large group into a set of subgroups, each satisfying the upper critical mass constraint, such that the sum of all pairwise inter-distances is minimized. Note that, for many tasks, the Grp stage itself may be adequate, and we may never need to execute Splt. We propose a series of efficient polynomial-time approximation algorithms for each phase, each of which has a provable approximation factor. Of course, this staged solution, combined together, may not have any theoretical guarantees for our original problem formulation. However, our experimental results demonstrate that this formulation is efficient, as well as adequately effective.
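To make the two-phase pipeline concrete, here is a deliberately naive greedy sketch of our own (with our encoding of Tables 1-3); it is not the paper’s OptGrp, ApprxGrp, or Min-Star-Partition and carries none of their guarantees. Grp repeatedly adds the worker with the best skill-deficit reduction per unit wage until the thresholds are met; Splt then packs workers into blocks of size at most K, keeping large distances inside blocks so that less distance remains across them.

```python
import math

# Hypothetical encoding of Tables 1-3; names are ours, not the paper's.
SKILLS = {
    "u1": (0.66, 0.00, 0.00), "u2": (1.00, 0.00, 0.33),
    "u3": (0.53, 0.66, 0.53), "u4": (0.00, 0.73, 0.00),
    "u5": (0.13, 0.66, 0.80), "u6": (0.00, 0.13, 0.93),
}
WAGE = {"u1": 0.4, "u2": 0.3, "u3": 0.7, "u4": 0.8, "u5": 0.5, "u6": 0.8}
DIST = {
    ("u1", "u2"): 1.0,  ("u1", "u3"): 0.66, ("u1", "u4"): 0.66,
    ("u1", "u5"): 0.85, ("u1", "u6"): 0.66, ("u2", "u3"): 0.66,
    ("u2", "u4"): 0.85, ("u2", "u5"): 0.66, ("u2", "u6"): 0.85,
    ("u3", "u4"): 0.4,  ("u3", "u5"): 0.66, ("u3", "u6"): 0.4,
    ("u4", "u5"): 0.4,  ("u4", "u6"): 0.0,  ("u5", "u6"): 0.4,
}
Q, C, K, EPS = (1.8, 1.4, 1.66), 3.0, 3, 1e-9

def dist(a, b):
    return 0.0 if a == b else DIST[tuple(sorted((a, b)))]

def grp(workers):
    """Grp phase (greedy): meet the skill thresholds within the cost budget."""
    group, cost, deficit = set(), 0.0, list(Q)
    while any(d > EPS for d in deficit):
        best_u, best_gain = None, 0.0
        for u in sorted(set(workers) - group):
            if cost + WAGE[u] > C + EPS:
                continue
            gain = sum(min(deficit[l], SKILLS[u][l]) for l in range(len(Q)))
            if gain / WAGE[u] > best_gain:
                best_u, best_gain = u, gain / WAGE[u]
        if best_u is None:
            return None                   # greedy failed within the budget
        group.add(best_u)
        cost += WAGE[best_u]
        deficit = [max(0.0, deficit[l] - SKILLS[best_u][l])
                   for l in range(len(Q))]
    return group

def splt(group):
    """Splt phase (greedy): pack workers into ceil(|G|/K) blocks of size <= K."""
    blocks = [set() for _ in range(math.ceil(len(group) / K))]
    for u in sorted(group):
        # keep big distances *inside* a block: less distance remains across
        target = max((b for b in blocks if len(b) < K),
                     key=lambda b: sum(dist(u, v) for v in b))
        target.add(u)
    return blocks

G = grp(SKILLS)
print(sorted(G), [sorted(b) for b in splt(G)])
```

On the running example this greedy finds a feasible (though not necessarily optimal) group; the paper’s stage-wise algorithms replace both heuristics with instance-optimal and approximation procedures.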

    4.2.1 ILP for AffAware-Crowd

minimize  D = max_{i,i′ : e_{i,i′}=1} dist(u_i, u_{i′}) + Σ_{G_i, G_j ⊆ G} Σ_{u_i ∈ G_i, u_j ∈ G_j} e_{i,j} · dist(u_i, u_j)

subject to
    Σ_{i=1}^{n} Σ_{j=1}^{x} u(i, G_j) · u_i^{d_l} ≥ Q_l,   ∀ l ∈ [1, m]        (skill)
    Σ_{i=1}^{n} Σ_{j=1}^{x} u(i, G_j) · w_{u_i} ≤ C                            (cost)
    Σ_{i=1}^{n} u(i, G_j) ≤ K,   ∀ j ∈ [1, x]                                  (upper critical mass)
    Σ_{j=1}^{x} u(i, G_j) ≤ 1,   ∀ i ∈ [1, n]                                  (disjointness)
    e_{i,i′} = 1 if ∃ j ∈ [1, x] with u(i, G_j) = 1 and u(i′, G_j) = 1, and 0 otherwise
    x ∈ {0, 1, . . . , n};   u(i, G_j) ∈ {0, 1},   ∀ i ∈ [1, n], j ∈ [1, x]    (1)

We discuss the ILP next, as shown in Equation 1. Let e_{i,i′} denote a Boolean decision variable indicating whether a worker pair u_i and u_{i′} would belong to the same subgroup of group G or not. Also, imagine that a total of x subgroups (G_1, G_2, . . . , G_x) would be formed for task t, where 1 ≤ x ≤ n (i.e., at the least the subgroup is G itself, or at most n singleton subgroups could be formed). Then, the subgroup to which each worker pair should be assigned must also be determined, where the number of subgroups is unknown in the first place. Note that translating the problem to an ILP is non-trivial and challenging, as the formulation deliberately makes the problem linear by treating each worker pair (as opposed to a single worker) as an atomic decision variable, and it also returns the subgroup to which each pair should belong. Once the ILP is formalized, we use a general-purpose solver to solve it. Although the Max operator in the objective function (which expresses DiaDist()) must be translated appropriately in the actual ILP implementation, in our formalism above, we preserve that abstraction for simplicity.

The objective function returns a group of subgroups that minimizes DiaDist(G) + Σ_{G_i,G_j} SumInterDist(G_i, G_j). The first three constraints ensure the skill, cost, and upper critical mass thresholds, whereas the last four constraints ensure the disjointness of the subgroups and the integrality constraints on the different Boolean decision variables.

When run on the example in Section 2, the ILP generates the optimal solution and creates group G = {u1, u2, u3, u4, u6} with two subgroups, G1 = {u1, u2, u4} and G2 = {u3, u6}. The distance value of the optimization objective is 4.23, which equals DiaDist(G) + InterDist(G1, G2).
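To make the optimization objective concrete, the following sketch (with a small hypothetical distance matrix, not the actual worker distances from Section 2) evaluates DiaDist plus the aggregated inter-subgroup distance for a candidate set of subgroups by brute force:

```python
from itertools import combinations

def dia_dist(subgroups, dist):
    # Diameter: largest pairwise distance within the union of all subgroups.
    members = [u for sub in subgroups for u in sub]
    return max((dist[a][b] for a, b in combinations(members, 2)), default=0.0)

def sum_inter_dist(gi, gj, dist):
    # Sum of all cross-pair distances between two subgroups.
    return sum(dist[a][b] for a in gi for b in gj)

def objective(subgroups, dist):
    # DiaDist over the whole group plus the aggregated inter-subgroup distance.
    inter = sum(sum_inter_dist(gi, gj, dist)
                for gi, gj in combinations(subgroups, 2))
    return dia_dist(subgroups, dist) + inter

# Hypothetical symmetric distance matrix over 4 workers.
D = {0: {1: 0.2, 2: 0.9, 3: 0.8},
     1: {0: 0.2, 2: 0.7, 3: 0.9},
     2: {0: 0.9, 1: 0.7, 3: 0.3},
     3: {0: 0.8, 1: 0.9, 2: 0.3}}

print(round(objective([[0, 1], [2, 3]], D), 2))  # diameter 0.9 + inter 3.3 = 4.2
```

An exact solver would search over all partitions (or, as in the ILP, over pair variables) for the candidate minimizing this value.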

4.2.2 Grp&Splt: A Staged Approach

Our proposed alternative strategy Grp&Splt works as follows: in the Grp stage, we attempt to form a single worker group that minimizes DiaDist(G), while satisfying the skill and cost constraints (and ignoring the upper critical mass constraint). Note that this may result in a large group, violating the upper critical mass constraint. Therefore, in the Splt phase, we partition this big group into multiple smaller subgroups, each satisfying the upper critical mass constraint, in such a way that the aggregated inter-distance between all pairs of subgroups, Σ_{G_i,G_j} SumInterDist(G_i, G_j), is minimized. As mentioned earlier, there are three primary reasons for taking this alternative route: (a) In many cases we may not even need to execute Splt, because the single group formed in the Grp phase abides by the upper critical mass constraint, leading to the solution of the original problem. (b) The original complex ILP is prohibitively expensive. Our experimental results demonstrate that the original ILP does not converge in hours for more than 20 workers. (c) Most importantly, Grp&Splt allows us to design efficient approximation algorithms with constant approximation factors, as well as instance-optimal exact algorithms that work well in practice, as long as the distance between the workers satisfies the metric property (the triangle inequality in particular) [12, 41]. We underscore that the triangle inequality assumption is not an overstretch; rather, many natural distance measures (Euclidean distance, Jaccard distance) are metric, and several other similarity measures, such as Cosine similarity and Pearson and Spearman correlations, can be transformed to metric distances [46]. Furthermore, this assumption has been extensively used in distance computation in the related literature [2, 3]. Without the metric property assumption, the problems remain largely inapproximable [41].

5. ENFORCING SKILL & COST: GRP

In this section, we first formalize our proposed approach in the Grp phase, discuss hardness results, and propose algorithms with theoretical guarantees. Recall that our objective is to form a single group G of workers that is cohesive (the diameter of the group is minimized), while satisfying the skill and the cost constraints.

Definition 1. Grp: Given a task t, form a single group G of workers that minimizes DiaDist(G), while satisfying the skill and cost constraints, i.e., Σ_{u∈G} u^{d_i} ≥ Q_{d_i} ∀ d_i, and Σ_{u∈G} w_u ≤ C.

    Theorem 2. Problem Grp is NP-hard.

The detailed proof is discussed in Section B of the appendix.

Proposed Algorithms for Grp: We discuss two algorithms at length: (a) OptGrp is an instance-optimal algorithm. (b) The ApprxGrp algorithm has a 2-approximation factor, as long as the distance satisfies the triangle inequality property. Of course, an additional optimal algorithm is the ILP formulation itself (referred to as ILPGrp in the experiments), which could be easily adapted from Section 4. Both OptGrp and ApprxGrp invoke a subroutine, referred to as GrpCandidateSet(). We describe a general framework for this subroutine next.

5.1 Subroutine GrpCandidateSet()

The input to this subroutine is a set of n workers and a task t (in particular, the skill and the cost constraints of t), and the output is a worker group that satisfies the skill and cost constraints. Notice that, if done naively, this computation takes 2^n time. However, Subroutine GrpCandidateSet() uses an effective pruning strategy to avoid unnecessary computations and is likely to terminate much faster. It computes a binary tree representing the possible search space, considering the nodes in an arbitrary order; each node in the tree is a worker u and has two outgoing edges (1/0, respectively standing for whether u is included in the group or not). A root-to-leaf path in that tree represents a worker group.

At a given node u, it makes two estimated bound computations: (a) it computes the lower bound on cost (LBC) of that path (from the root up to that node); (b) it computes the upper bound on skill of that path (UB_{d_i}) for each domain. It compares LBC with C and compares UB_{d_i} with Q_{d_i}. If LBC > C, or UB_{d_i} < Q_{d_i} for any of the domains, that branch is fully pruned out. Otherwise, it continues the computation. Figure 1 has further details.
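The pruning logic above can be sketched as follows. This is a simplified single-skill-domain version with hypothetical worker data (the paper's subroutine handles multiple domains and arbitrary node orders):

```python
def grp_candidate_set(workers, Q, C):
    """Enumerate include/exclude decisions over workers, pruning any
    branch whose cost lower bound (LBC) exceeds C, or whose skill upper
    bound cannot reach Q. Returns all groups meeting both thresholds.
    workers: list of (skill, wage) tuples."""
    results = []

    def search(i, chosen, cost, skill):
        # Upper bound on achievable skill: current skill plus every
        # remaining (not yet decided) worker's skill.
        ub_skill = skill + sum(s for s, _ in workers[i:])
        if cost > C or ub_skill < Q:      # prune this whole branch
            return
        if i == len(workers):
            if skill >= Q:                 # cost <= C already guaranteed
                results.append(list(chosen))
            return
        s, w = workers[i]
        chosen.append(i)
        search(i + 1, chosen, cost + w, skill + s)  # edge 1: include i
        chosen.pop()
        search(i + 1, chosen, cost, skill)          # edge 0: exclude i

    search(0, [], 0.0, 0.0)
    return results

# Hypothetical pool: (skill, wage) per worker.
pool = [(0.9, 0.5), (0.6, 0.3), (0.8, 0.6), (0.4, 0.2)]
groups = grp_candidate_set(pool, Q=1.4, C=1.05)
```

Here `groups` contains exactly the subsets meeting both thresholds, while whole subtrees (e.g., every extension of a path already costing more than C) are never visited.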

Figure 1: A partially constructed tree of GrpCandidateSet() using the example in Section 2. At node u1 = 1, LBC = w_{u6} + w_{u4} + w_{u3} + w_{u5} + w_{u1} = 3.2 and UB_{d1} = u6^{d1} + u4^{d1} + u3^{d1} + u5^{d1} + u1^{d1} + u2^{d1} = 2.32. The entire subtree is pruned, since LBC (3.2) > C.

ApprxGrp() uses this subroutine to find the first valid answer, whereas Algorithm OptGrp() uses it to return all valid answers.

5.2 Further Search Space Optimization

When the skills and costs of the workers are arbitrary, a keen reader may notice that Subroutine GrpCandidateSet() may still have to explore 2^n potential groups in the worst case. Instead, if we have only a constant number of costs and arbitrary skills, or a constant number of skill values and any arbitrary number of costs, interestingly, the search space becomes polynomial. Of course, the search space is also polynomial when both are constants.

We describe the constant-cost idea further. Instead of allowing arbitrary wages, we can discretize workers' wages a priori, create a constant number k of wage buckets (each worker belongs to one of these buckets), and build the search tree based on that. When there are m knowledge domains, this gives rise to a total of mk buckets. For our running example in Section 2, if for simplicity we consider only one skill (d1), this would mean that we discretize all 6 different wages into k (let us assume k = 2) buckets. Of course, depending on the granularity of the buckets, this introduces some approximation into the algorithm, as a worker's actual wage is now replaced by a number that may be smaller or greater than the actual one. However, such a discretization may be realistic, since many crowdsourcing platforms, such as AMT, allow only one cost per task.

For our running example, let us assume bucket 1 represents wages of 0.5 and below, and bucket 2 represents wages between 0.5 and 0.8. Then workers u3, u4, u6 become part of bucket 2 and the three remaining workers part of bucket 1. After this, one may notice that the tree is neither balanced nor exponential. Now, for a given bucket, the number of possible ways of selecting workers is polynomial (they will always be selected from the most skilled to the least skilled), making the overall search space polynomial for a constant number of buckets. In fact, as opposed to 2^6 possible branches, this modified tree can only have (3 + 1) × (3 + 1) possible branches. Figure 2 describes the idea further.

Once this tree is constructed, our previous pruning algorithm GrpCandidateSet() can be applied to enable further efficiency.
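Under the bucketing scheme above, enumerating candidate groups reduces to choosing how many of the top-skilled workers to take from each bucket. A sketch (the skill ordering inside each bucket is hypothetical):

```python
from itertools import product

def bucketed_choices(buckets):
    """With workers grouped into wage buckets and sorted by skill
    (descending) inside each bucket, any candidate group takes the
    top-c workers of each bucket for some count c. The search space is
    therefore the cross product of per-bucket counts."""
    per_bucket = [range(len(b) + 1) for b in buckets]
    for counts in product(*per_bucket):
        yield [b[:c] for b, c in zip(buckets, counts)]

# Running example: bucket 1 = {u2, u1, u5}, bucket 2 = {u3, u4, u6},
# each already sorted most-skilled first (hypothetical ordering).
buckets = [["u2", "u1", "u5"], ["u3", "u4", "u6"]]
branches = list(bucketed_choices(buckets))
print(len(branches))  # (3+1) * (3+1) = 16 branches instead of 2**6 = 64
```

Each yielded element is one leaf of the discretized tree; e.g., taking the top two workers of bucket 1 and the top one of bucket 2 gives [["u2", "u1"], ["u3"]].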

Figure 2: Possible search space using the example in Section 2, after the costs of the workers are discretized into k = 2 buckets, considering only one skill d1. The tree is constructed in descending order of worker skill per bucket. For bucket 1, if the most skilled worker u2 is not selected, the other two workers (u1, u5) will never be selected.

5.3 Approximation Algorithm ApprxGrp

A popular variant of the facility dispersion problem [12, 41] attempts to discover a set of nodes (that host the facilities) that are as far apart as possible, whereas compact location problems [11] attempt to minimize the diameter. For us, the workers are the nodes, and Grp attempts to find a worker group that minimizes the diameter while satisfying multiple skill constraints and a single cost constraint. We propose a 2-approximation algorithm for Grp, a variant that has not been studied before.

Algorithm ApprxGrp works as follows. The main algorithm considers a sorted (ascending) list L of distance values (this list represents all unique distances between the available worker pairs in the platform) and performs a binary search over that list. First, it calls a subroutine (GrpDia()) with a distance value δ; the subroutine can run at most n times. In its i-th iteration, the subroutine considers worker ui and retrieves a star graph² centered around ui that respects the distance δ. The nodes of the star are the workers and the edges are the distances between worker pairs, such that no edge in the retrieved graph is > δ. One such star graph is shown in Figure 3.

Next, given a star graph with a set of workers U′, GrpDia invokes GrpCandidateSet(U′, t) to select a subset of workers (if there is one) from U′ who together satisfy the skill and cost thresholds. GrpCandidateSet constructs the tree in a best-first-search manner and terminates when the first valid solution is found, or no further search is possible. If the cost values are further discretized, then the tree is constructed accordingly, as described in Section 5.2. This variant of ApprxGrp is referred to as Cons-k-Cost-ApprxGrp.

Upon returning a non-empty subset U″ of U′, GrpCandidateSet terminates. Then, ApprxGrp stores that δ and the associated U″ and continues its binary search over L for a different δ. Once the binary search ends, it returns the U″ that has the smallest associated δ as the solution with

²A star graph is a tree on v nodes with one node having degree v − 1 and the other v − 1 nodes having degree 1.

    Algorithm 1 Approximation Algorithm ApprxGrp()

Require: U, human factors for U, and task t
1: List L contains all unique distance values in increasing order
2: repeat
3:   Perform binary search over L
4:   For a given distance δ, U″ = GrpDia(δ, {Q_{d_i}}, C)
5:   if U″ ≠ ∅ then
6:     Store worker group U″ with diameter d ≤ 2δ
7:   end if
8: until the search is complete
9: return the U″ with the smallest d

the diameter upper-bounded by 2δ, as long as the distance between the workers satisfies the triangle inequality³. In case GrpDia() returns an empty worker set to the main function, the binary search continues until there are no more options in L. If no such U″ is ever returned by GrpDia(), then the attempt to find a worker group for task t remains unsuccessful.

The pseudo-code of algorithm ApprxGrp() is presented in Algorithm 1. For the given task t using the example in Section 2, L is ordered as follows: 0, 0.4, 0.66, 0.85, 1.0. The binary search process in the first iteration considers δ = 0.66 and calls GrpDia(δ, {Q_{d_i}}, C). In the first iteration, GrpDia() attempts to find a star graph (shown in Figure 3) with u1 as the center of the star. This returned graph is taken as the input, along with the skill thresholds of t, inside GrpCandidateSet() next. For our running example, subroutine GrpDia(0.66, 1.8, 1.66, 1.4, 2.5) returns u1, u3, u4, u6. Now notice that these 4 workers do not satisfy the skill thresholds of task t (which are respectively 1.8, 1.66, 1.4 for the 3 domains). Therefore, GrpCandidateSet(U′, t) returns false and GrpDia() continues by checking whether a star graph centered around u2 satisfies the distance threshold 0.66. Algorithm 2 presents the pseudocode of this subroutine. When run on the example in Section 2, ApprxGrp() returns workers u1, u2, u3, u5, u6 as the result, with objective function value upper-bounded by 2 × 0.66.
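The interplay of the binary search and the star-graph subroutine can be sketched as follows (hypothetical distances, skills, and wages; a brute-force subset check stands in for the pruned GrpCandidateSet()):

```python
from itertools import combinations

def make_feasible(skills, wages, Q, C):
    # Brute-force stand-in for GrpCandidateSet(): find any subset of the
    # star's workers meeting the skill and cost thresholds.
    def feasible(star):
        for r in range(1, len(star) + 1):
            for sub in combinations(star, r):
                if (sum(skills[u] for u in sub) >= Q and
                        sum(wages[u] for u in sub) <= C):
                    return list(sub)
        return None
    return feasible

def grp_dia(dist, delta, feasible):
    # Try each worker as a star center; every member is within delta of
    # the center, so by the triangle inequality the diameter <= 2*delta.
    for c in range(len(dist)):
        star = [u for u in range(len(dist)) if dist[c][u] <= delta]
        group = feasible(star)
        if group:
            return group
    return None

def apprx_grp(dist, feasible):
    # Binary search the sorted unique pairwise distances for the
    # smallest delta admitting a feasible star group.
    n = len(dist)
    L = sorted({dist[i][j] for i, j in combinations(range(n), 2)} | {0.0})
    best, lo, hi = None, 0, len(L) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        group = grp_dia(dist, L[mid], feasible)
        if group:
            best = (L[mid], group)   # objective <= 2 * L[mid]
            hi = mid - 1
        else:
            lo = mid + 1
    return best

d = [[0, .2, .9, .8], [.2, 0, .7, .9], [.9, .7, 0, .3], [.8, .9, .3, 0]]
skills, wages = [0.9, 0.6, 0.8, 0.4], [0.5, 0.3, 0.6, 0.2]
delta, group = apprx_grp(d, make_feasible(skills, wages, Q=1.4, C=1.0))
```

Feasibility is monotone in delta (any group feasible at a small delta stays available at a larger one), which is what makes the binary search sound.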

Figure 3: An instantiation of GrpDia(0.66) using the example in Section 2. A star graph centered at u1 is formed; each edge (to u3, u4, u6) has distance 0.66.

Theorem 3. Algorithm ApprxGrp has a 2-approximation factor, as long as the distance satisfies the triangle inequality.

Lemma 1. Cons-k-Cost-ApprxGrp is polynomial.

Both of these proofs are elaborated in Section B of the appendix.

5.4 Optimal Algorithm OptGrp

Subroutine GrpCandidateSet() leaves enough intuition behind to design an instance-optimal algorithm that works well in practice. It calls subroutine GrpCandidateSet() with

³Without the triangle inequality assumption, no theoretical guarantee can be ensured [41].

Algorithm 2 Subroutine GrpDia()

Require: Distance matrix of the worker set U, distance δ, task t
1: repeat
2:   for each worker u
3:     form a star graph centered at u, such that for each edge (u, uj), dist(u, uj) ≤ δ. Let U′ be the set of workers in the star graph.
4:     U″ = GrpCandidateSet(U′, t)
5:     if U″ ≠ ∅ then
6:       return U″
7:     end if
8: until all n workers have been fully exhausted
9: return U″ = ∅

the actual worker set U and the task t. For OptGrp, the tree is constructed in a depth-first fashion inside GrpCandidateSet(), and all valid solutions from the subroutine are returned to the main function. The output of OptGrp is the candidate set of workers returned by GrpCandidateSet() that has the smallest largest edge. When run on the example in Section 2, OptGrp returns G = {u1, u2, u3, u5, u6} with objective function value 1.0.

Furthermore, when workers' wages are discretized into k buckets, OptGrp can be modified as described in Section 5.2, and is referred to as Cons-k-Cost-OptGrp.
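For intuition, here is an exhaustive stand-in for OptGrp: among all groups satisfying the thresholds, pick the one with the smallest diameter. The real algorithm prunes this search via GrpCandidateSet(); the data below is hypothetical:

```python
from itertools import combinations

def opt_grp(dist, skills, wages, Q, C):
    """Exhaustive sketch of OptGrp: among all groups satisfying the
    skill and cost thresholds, return the one whose largest internal
    pairwise distance (diameter) is smallest."""
    n = len(dist)
    best_dia, best_group = float("inf"), None
    for r in range(1, n + 1):
        for group in combinations(range(n), r):
            if sum(skills[u] for u in group) < Q:
                continue                      # fails skill threshold
            if sum(wages[u] for u in group) > C:
                continue                      # fails cost threshold
            dia = max((dist[a][b] for a, b in combinations(group, 2)),
                      default=0.0)
            if dia < best_dia:
                best_dia, best_group = dia, group
    return best_dia, best_group

# Hypothetical data (same shape as the paper's running example).
d = [[0, .2, .9, .8], [.2, 0, .7, .9], [.9, .7, 0, .3], [.8, .9, .3, 0]]
skills, wages = [0.9, 0.6, 0.8, 0.4], [0.5, 0.3, 0.6, 0.2]
dia, group = opt_grp(d, skills, wages, Q=1.4, C=1.05)
```

The pruned depth-first construction in GrpCandidateSet() visits only a fraction of these subsets while returning the same answer.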

Theorem 4. Algorithm OptGrp returns the optimal answer.

    Lemma 2. Cons-k-Cost-OptGrp is polynomial.

Both of these proofs are described in Section B of the appendix.

    6. ENFORCING UPPER CRITICAL MASS: SPLT

When Grp results in a large, unwieldy group G that may struggle with collaboration, it needs to be partitioned further into a set of subgroups in the Splt phase to satisfy the upper critical mass (K) constraint. At the same time, if needed, the workers across the subgroups should still be able to collaborate effectively. Precisely these intuitions are formalized in the Splt phase.

Definition 2. Splt: Given a group G, decompose it into a disjoint set of subgroups (G1, G2, . . . , Gx) such that ∀i, |Gi| ≤ K, Σ_i |Gi| = |G|, and the aggregated all-pair inter-group distance Σ_{Gi,Gj ⊆ G} SumInterDist(Gi, Gj) is minimized.

    Theorem 5. Problem Splt is NP-hard.

The proof is described in Section B of the appendix.

Proposed Algorithm for Splt: Since the ILP for Splt can be very expensive, our primary effort is to design an alternative strategy that is more efficient while still allowing provable bounds on the result quality. We take the following overall direction: imagine that the output of Grp gives rise to a large group G with n workers, where n > K. First, we determine the number of subgroups x and the number of workers in each subgroup Gi. Then, we attempt to find the optimal partitioning of the n workers across these x subgroups that minimizes the objective function. We refer to this as SpltBOpt, the optimal balanced partitioning of G. For the running example in Section 2, this would mean creating 2 subgroups, G1 and G2, with 3 workers in one and the remaining 2 in the second subgroup, using the workers u1, u2, u3, u5, u6 returned by ApprxGrp.

For the remainder of the section, we investigate how to find SpltBOpt. There are intuitive as well as theoretical reasons for taking this direction. Intuitively, a smaller number of subgroups gives rise to a smaller overall objective function value (note that the objective function is in fact 0 when x = 1). More importantly, as Lemma 3 suggests, under certain conditions SpltBOpt gives rise to provable theoretical results for the Splt problem. Finding the approximation ratio of SpltBOpt for an arbitrary number of partitions is deferred to future work.

Lemma 3. SpltBOpt has a 2-approximation for the Splt problem, if the distance satisfies the triangle inequality, when x = ⌈n/K⌉ = 2.

The proof is described in Section B of the appendix.

Even though the number of subgroups (aka partitions) is ⌈n/K⌉, with K workers in all but the last subgroup, finding an optimal assignment of the n workers across those subgroups that minimizes the objective function is NP-hard. The proof uses an easy reduction from [17]. We start by showing how the solution to the SpltBOpt problem can be bounded by the solution of a slightly different problem variant, known as the Min-Star problem [17].

Definition 3. Min-Star Problem: Given a group G with n workers, out of which each of x workers (u1, u2, . . . , ux) represents the center of a star sub-graph (each sub-graph stands for a subgroup), the objective is to partition the remaining n − x workers into one of these x subgroups G1, G2, . . . , Gx such that Σ_{i=1}^{x} k_i · dist(u_i, ∪_{j≠i} G_j) + Σ_i … is minimized.

Algorithm 3 Algorithm Min-Star-Partition

Require: Group G with n workers and upper critical mass K
1: x = ⌈n/K⌉
2: for all subsets {u1, . . . , ux} ⊆ G do
3:   Find optimal subgroups {G1, . . . , Gx} for {u1, . . . , ux} by formulating it as a transportation problem
4:   Evaluate the objective function for {G1, . . . , Gx}
5: end for
6: return the subgroups {G1, . . . , Gx} with the least objective function value

Both of these proofs are described in Section B of the appendix.
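Algorithm 3 can be sketched on a toy instance as follows (hypothetical distances; exhaustive assignment stands in for the transportation-problem step, and a simple sum of worker-to-center distances stands in for the full Min-Star objective):

```python
from itertools import combinations, product
from math import ceil

def min_star_partition(dist, K):
    """Brute-force sketch of Min-Star-Partition: try every choice of
    x = ceil(n/K) centers; assign the remaining workers to centers
    (capacity K per subgroup, center included) minimizing the sum of
    worker-to-own-center distances."""
    n = len(dist)
    x = ceil(n / K)
    best_cost, best_groups = float("inf"), None
    for centers in combinations(range(n), x):
        rest = [u for u in range(n) if u not in centers]
        for assign in product(range(x), repeat=len(rest)):
            groups = [[c] for c in centers]
            for u, g in zip(rest, assign):
                groups[g].append(u)
            if any(len(g) > K for g in groups):
                continue                      # capacity violated
            # Each non-center worker pays its distance to its center.
            cost = sum(dist[u][g[0]] for g in groups for u in g[1:])
            if cost < best_cost:
                best_cost, best_groups = cost, groups
    return best_cost, best_groups

# Hypothetical distances: two natural clusters {0,1,2} and {3,4}.
d = [[0, .1, .2, .9, .8],
     [.1, 0, .1, .8, .9],
     [.2, .1, 0, .9, .8],
     [.9, .8, .9, 0, .1],
     [.8, .9, .8, .1, 0]]
cost, groups = min_star_partition(d, K=3)
```

On this instance the search recovers the two natural clusters; the paper replaces the inner exhaustive assignment with a polynomial transportation-problem formulation.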

7. EXPERIMENTS

We next describe our real and synthetic data experiments to evaluate our algorithms. The real-data experiments are conducted on AMT. The synthetic-data experiments are conducted using a parametrizable crowd simulator.

7.1 Real Data Experiments

Two different collaborative crowdsourcing applications are evaluated using AMT: (i) Collaborative Sentence Translation (CST), and (ii) Collaborative Document Writing (CDW).

Evaluation Criteria: The overall study is designed to evaluate: (1) the effectiveness of the proposed optimization model, (2) the effectiveness of the affinity calculation techniques, and (3) the effect of different upper critical mass values.

Workers: A pool of 120 workers participates in the sentence translation study, whereas a different pool of 135 workers participates in the second one. Hired workers are directed to our website, where the actual tasks are undertaken.

Algorithms: We compare our proposed solution with other baselines: (1) To evaluate the first criterion, the Optimal algorithm (in Section 4) is compared against an alternative Aff-Unaware algorithm [43]. The latter assigns workers to the tasks considering skill and cost but ignoring affinity. (2) Optimal-Affinity-Age and Optimal-Affinity-Region are two optimal algorithms that use two different affinity calculation methods (Affinity-Age and Affinity-Region, respectively) and are compared against each other to evaluate the second criterion. (3) CrtMass-Optimal-K assigns workers to tasks based on the optimization objective while varying the upper critical mass value K; the resulting assignments are compared against each other for different K.

Pair-wise Affinity Calculation: Designing a complex personality test [40] to compute affinity is beyond the scope of this work. We instead choose some simple factors to compute affinity that have been acknowledged to be indicative in prior work [47]. We calculate affinity in two ways: (1) Affinity-Age: the age-based calculation discretizes workers into different age buckets and assigns a value of 1 to a worker pair if they fall in the same bucket, 0 otherwise. (2) Affinity-Region: assigns a value of 1 when two workers are from the same country and 0 otherwise. We continue to explore more advanced affinity calculation methods in our ongoing work.
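Both affinity measures are straightforward to compute; a sketch follows (the 10-year bucket width is our assumption — the paper does not specify bucket boundaries):

```python
def affinity_age(age_a, age_b, bucket_width=10):
    # Same age bucket -> affinity 1, else 0 (bucket width is assumed).
    return 1 if age_a // bucket_width == age_b // bucket_width else 0

def affinity_region(region_a, region_b):
    # Same country -> affinity 1, else 0.
    return 1 if region_a == region_b else 0

print(affinity_age(23, 27), affinity_age(23, 37))                 # 1 0
print(affinity_region("IN", "IN"), affinity_region("IN", "FR"))   # 1 0
```

A pairwise distance for the optimization can then be derived, e.g., as 1 minus affinity.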

Overall user-study design: The overall study is conducted in 3 stages: (1) Worker Profiling: in stage 1, we hire workers and use pre-qualification tests with gold data to learn their skills. We also learn other human factors, as described next. (2) Worker-to-Task Assignment: in stage 2, a subset of the hired workers are re-invited to participate, and the actual collaborative tasks are undertaken by them. (3) Task Evaluation: in stage 3, completed tasks are crowdsourced again to evaluate their quality.

Summary of Results: There are several key takeaways from our user study results. First and foremost, effective collaboration is central to ensuring high-quality results for collaborative complex tasks, as demonstrated in Figure 4a and Table 5 in the appendix. Next, we evaluate 2 different affinity computation models in Figure 4b, and the results show that people from the same region collaborate more effectively, as the correctness of Optimal-Affinity-Region outperforms that of Optimal-Affinity-Age. However, nothing can be said with statistical significance for the completeness dimension. Both dimensions were suggested to be indicative in prior work [47]. Interestingly, the upper critical mass also has a significant effect on collaboration effectiveness and, consequently, on the quality of the completed tasks, as shown in Figure 4c. Quality increases from K = 5 to K = 7, but it decreases with statistical significance when K = 10 for CrtMass-Optimal-10. The final results of our collaborative document writing application, presented in Table 5 and Section C of the appendix, show similar observations.

7.1.1 Stage 1 - Worker Profiling

We hire two different sets of workers for sentence translation and document writing. The workers are informed that a subset of them will be invited (through email) to participate in the second stage of the study.

Skill learning for Sentence Translation: We hire 60 workers and present each worker with a 20-second English video clip, for which we have the ground-truth translation in 4 different languages: English, French, Tamil, Bengali. We then ask them to create a translation in one of the languages (from the last three) that they are most proficient in. We measure each worker's individual skill using Word Error Rate (WER) [31].

Skill learning for Document Writing: For the second study, CDW, we hire a different set of 75 workers. We design a gold-data set that has 8 multiple-choice questions per task, for which the answers are known (e.g., for the MOOCs topic, one question was, "Who founded Coursera?"). The skill of each worker is then calculated as the percentage of her correct answers. For simplicity, we consider only one skill domain for both applications.

Wage expectation of the workers: We explicitly ask each worker about their expected monetary incentive, giving them a high-level description of the tasks that are conducted in the second stage of the study. Those inputs are recorded and used in the experiments.

Affinity of the workers: Hired workers are directed to our website, where they are asked to provide 4 simple socio-demographic attributes: gender, age, region, and highest education. Workers' anonymity is fully preserved. From there, affinity between workers is calculated using Affinity-Age or Affinity-Region.
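WER is the word-level edit distance between a worker's translation and the ground truth, normalized by the reference length; a standard implementation sketch:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance (substitutions,
    insertions, deletions) divided by the reference length."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution/match
    return dp[-1][-1] / len(r)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions / 6 words
```

A lower WER indicates a more skilled translator; skill can be taken, e.g., as 1 − WER.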

Figure 18 and Figure 17 in the appendix contain detailed worker profile distribution information.

7.1.2 Stage 2 - Worker-to-Task Assignment

Once the hired workers are profiled, we conduct the second and most important stage of this study, where the actual tasks are conducted collaboratively.

Collaborative Sentence Translation (CST): We carefully choose

Task Name                 Skill   Cost   Critical Mass
CST1 - Destroyer          3.0     $5.0   5, 7, 10
CST2 - German Weapons     4.0     $5.0   5, 7, 10
CST3 - British Aircraft   3       $4.5   5, 7, 10
CDW1 - MOOCs              5       $3     5, 7, 10
CDW2 - Smartphone         5       $3     5, 7, 10
CDW3 - Top-10 places      5       $3     5, 7, 10

Table 4: Description of the different tasks; the default upper critical mass value is 5. The default affinity calculation is region-based.

three English documentaries of suitable complexity and length (about 1 minute) for creating subtitles in three different languages: French, Tamil, and Bengali. These videos are chosen from YouTube, with titles: (1) Destroyer, (2) German Small Weapons, (3) British Aircraft TSR2.

Collaborative Document Writing (CDW): Three different topics are chosen for this application: (1) MOOCs and their evolution, (2) Smartphones and their evolution, (3) Top-10 places to visit in the world.

For simplicity and ease of quantification, we consider that each task requires only one skill (the ability to translate from English into one of the three other languages for CST, and expertise on the topic for CDW). The skill and cost requirements of each task are described in Table 4. These values are set by involving domain experts and discussing the complexity of the tasks with them.

Collaborative Task Assignment for CST: We set up 2 different worker groups per task and compare the two algorithms Optimal-CST and Aff-Unaware-CST to evaluate the effectiveness of the proposed optimization model. We set up an additional 2 worker groups for each task to compare Optimal-Affinity-Region with Optimal-Affinity-Age. Finally, we set up 3 additional groups per task to evaluate the effect of critical mass, comparing CrtMass-Optimal-5, CrtMass-Optimal-7, and CrtMass-Optimal-10. This way, a total of 15 groups are created. We instruct the workers to work incrementally using the other group members' contributions and also to leave comments as they finish the work. These sets of tasks are kept active for 3 days.

Collaborative Task Assignment for CDW: A similar strategy is adopted to collaboratively edit a document of within 300 words, using the quality, cost, and critical mass values of the document editing tasks described in Table 4. Workers are encouraged to use the answers to the Stage-1 questionnaires as a reference.

7.1.3 Stage 3 - Task Evaluation

Collaborative tasks, such as knowledge synthesis, are often subjective. An appropriate technique to evaluate their quality is to leverage the wisdom of the crowd: a diverse and large enough group of individuals can accurately evaluate information, nullifying individual biases and the herding effect. Therefore, in this stage we crowdsource the task evaluation for both of our applications.

For the first study, Sentence Translation (CST), we take the 15 final outcomes of the translation tasks, along with the original video clips, and set them up as 3 different HITs on AMT. The first HIT is designed to evaluate the optimization model, the second to evaluate the two different affinity computation models, and the final one to evaluate the effectiveness of the upper critical mass. We assign 20 workers to each HIT, totaling 60 new workers. The workers are asked to evaluate the completed tasks along two quality dimensions, as identified by prior work [47]: (1) correctness of translation and (2) completeness of translation. The workers are asked to rate the quality on a scale of 1-5 (higher is better), without knowing the underlying task production algorithm. We then average these ratings, which is similar to obtaining the viewpoint of the average reader. The CST results for the different evaluation dimensions are presented in Figure 4.

A similar strategy is undertaken for the CDW application, but the quality is assessed using 5 different key quality aspects, as proposed in prior work [6]. For lack of space, we present a subset of these results in Table 5 in Section C of the appendix. Both sets of results indicate that, indeed, our proposed model successfully incorporates the different elements that are essential to ensure high quality in collaborative crowdsourcing tasks.

7.2 Synthetic Data Experiments

We conduct our synthetic data experiments on an Intel Core i5 with 6 GB RAM. We use IBM CPLEX 12.5.1 for the ILP. A crowd simulator is implemented in Java to generate the crowdsourcing environment. All numbers are presented as the average of three runs.

Simulator Parametrization: The simulator parameters presented below are chosen akin to their respective distributions observed in our real AMT populations.
1. Simulation Period: We simulate the system for a time period of 10 days, i.e., 14400 simulation units, with each simulation unit corresponding to 1 minute. With one task arriving every 10 minutes, our default setting runs for 1 day and has 144 tasks.
2. Number of Workers: the default is 100, but we vary |U| up to 5000 workers.
3. Workers' skill and wage: The variable u^{d_i} for skill d_i receives a random value from a normal distribution with mean 0.8 and variance 0.15. Workers' wages are also set using the same normal distribution.
4. Task profile: The task quality Q_i, as well as the cost C, is generated using a normal distribution with mean 15 and variance 1 as default. Unless otherwise stated, each task has one skill.
5. Distance: Unless otherwise stated, we consider the distance to be metric and generated using Euclidean distance.
6. Critical Mass: the default value is 7.
7. Worker Arrival, Task Arrival: By default, both workers and tasks arrive following a Poisson process, with arrival rates of 5/minute and 1/10 minutes, respectively.
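Poisson arrivals of workers and tasks can be generated by sampling exponential inter-arrival gaps; a plain-Python sketch of this part of the simulator (the Java simulator itself is not available; the helper and seed are ours):

```python
import random

def poisson_arrivals(rate_per_min, horizon_min, seed=0):
    """Generate arrival times over a horizon by sampling exponential
    inter-arrival gaps (equivalent to a Poisson process at the given rate)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_min)
        if t > horizon_min:
            return arrivals
        arrivals.append(t)

# Default setting: workers at 5/minute, tasks at 1 per 10 minutes,
# over a 1-day (1440-minute) horizon.
workers = poisson_arrivals(5.0, 24 * 60)
tasks = poisson_arrivals(1.0 / 10.0, 24 * 60)
```

With these rates, one expects roughly 7200 worker arrivals and 144 task arrivals per simulated day.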

Implemented Algorithms:
1. Overall-ILP: An ILP, as described in Section 4.
2. Grp&Splt: Uses ApprxGrp for Grp and Min-Star-Partition for Splt.
3. Grp&Greedy: An alternative implementation. In phase 1, we output a random group of workers that satisfies the skill and cost thresholds. In phase 2, we partition the workers greedily into most-similar subgroups satisfying the critical mass constraint.
4. Cons-k-Cost-ApprxGrp / Cons-k-Cost-OptGrp: with k = 15 as default, as discussed in Section 5.3 and Section 5.4, respectively.
5. GrpILP: An ILP for Grp.
6. No implementation of existing related work: Due to the critical mass constraint, we intend to form a group that is further partitioned into a set of subgroups, whereas no prior work has studied the problem of forming a group along with subgroups, making our problem and solution unique.

[Figure: average worker ratings on Correctness and Completeness, comparing Optimal-CST and Aff-Unaware-CST; panel (a): Optimization Model.]

Figure 9: Grp: Mean Diameter varying Mean Skill (x-axis: Task Mean Skill; y-axis: Mean Diameter; series: Grp', ApprxGrp, GrpILP).

Figure 10: Grp: Mean Diameter varying Simulation Days (x-axis: Simulation Days; y-axis: Mean Diameter; series: Grp', ApprxGrp, GrpILP).

Figure 11: Grp&Splt: Mean Completion Time varying Number of Workers (x-axis: Number of Workers; y-axis: Mean Completion Time, log scale, ms; series: Grp&Splt, Grp'&Greedy, Overall-ILP).

Figure 12: Grp: Mean Completion Time varying Mean Skill (x-axis: Task Mean Skill; y-axis: Mean Completion Time, log scale, ms; series: Grp', ApprxGrp, GrpILP).

Figure 13: Grp&Splt: Mean Completion Time varying Simulation Days (x-axis: Simulation Days; y-axis: Mean Completion Time, log scale, ms; series: Grp&Splt, Grp'&Greedy).

Figure 14: Grp: Mean Completion Time varying Simulation Days (x-axis: Simulation Days; y-axis: Mean Completion Time, log scale, ms; series: Grp', ApprxGrp, GrpILP).

fails to converge beyond 20 workers. Grp&Greedy is also scalable (owing to its simple algorithm), but clearly does not ensure high quality.
Varying Task Mean Skill: As in the previous result, Grp&Splt and Grp&Greedy are both scalable, while Grp&Splt achieves higher quality. We omit the chart for brevity.
Varying Critical Mass: As before, increasing the critical mass improves the efficiency of the algorithms. We omit the chart for brevity.
Varying Simulation Period: Figure 13 demonstrates that Grp&Splt is highly scalable in a realistic crowdsourcing environment, where more and more workers enter the system. The results show that Grp&Greedy is also scalable (but significantly worse in quality). As the number of workers increases, efficiency decreases for both, as expected.

7.2.2.2 Grp Phase Efficiency.
We evaluate the efficiency of ApprxGrp by reporting the mean completion time for 144 tasks.
Varying Task Mean Skill: As Figure 12 demonstrates, ApprxGrp significantly outperforms GrpILP. With a higher skill threshold, the difference becomes even more pronounced.
Varying Simulation Period: Figure 14 shows the average task completion time on each day for ApprxGrp, GrpILP, and Grp&Greedy. Clearly, GrpILP becomes impractical as more workers arrive in the system.

8. RELATED WORK
While no prior work has investigated the problem we study here, we discuss how our work differs from existing works that address the challenges of crowdsourcing complex tasks, as well as from traditional team formation problems.

Crowdsourcing Complex Tasks: This type of human-based computation [29, 28] handles tasks related to knowledge production, such as article writing, sentence translation, citizen science, and product design. These tasks are conducted in groups, are less decomposable than micro-tasks (such as image tagging) [16, 21], and their quality is measured on a continuous, rather than binary, scale.

A number of crowdsourcing tools are designed to solve application-specific complex tasks. Soylent uses crowdsourcing inside a word processor to improve the quality of a written article [5]. Legion, a real-time user interface, enables the integration of multiple crowd workers' input at the same time [35]. Turkit provides an interface for programmers to use human computation inside their programming model [37] and avoids redundancy by using a crash-and-rerun model that reuses earlier results from assigned tasks. Jabberwocky is another platform, which leverages social network information to assign tasks to workers and provides an easy-to-use interface for programmers [1]. CrowdForge divides a complex task into smaller sub-tasks in a map-reduce fashion [30]. Turkomatic introduces a framework in which workers aid requesters in breaking down the workflow of a complex task, thereby helping to solve it in systematic steps [32].

Unfortunately, these related works are tailored to specific applications, and none performs optimization-based task assignment as we do. A preliminary work discusses modular team structures for complex crowdsourcing tasks, detailing however the application cases rather than the computational challenges [9]. One prior work investigates how to assign workers to tasks for knowledge-intensive crowdsourcing [43], along with its computational challenges. However, that work investigates neither the necessity nor the benefit of collaboration. Consequently, its problem formulation and proposed solutions are substantially different from those studied here.

Automated Team Formation: Although only tangentially related to crowdsourcing, automated team formation is widely studied in computer-assisted cooperative systems. [34] forms a team of experts in a social network with the focus of minimizing the coordination cost among team members. Although their coordination cost is akin to our affinity, unlike us, they do not consider multiple skills. Team formation that balances workload with multiple skills is studied later in [2], and multi-objective optimization over coordination cost and workload balance is proposed in [3, 38], where coordination cost is posed as a constraint. Density-based coordination is introduced in [13], where, as in our setting, multiple workers with similar skills are required in a team. Forming a team with a leader (moderator) is studied in [22]. Minimizing both communication cost and budget while forming a team is first considered in [23, 24]. The concept of Pareto-optimal groups, related to skyline research, is studied in [23].

While several elements of our optimization model are adapted from these related works, many stark differences preclude any easy adaptation of the team formation research to our problem. Unlike us, none of these works considers upper critical mass as a group-size constraint that forms a group from multiple subgroups, which makes the former algorithms inapplicable in our setting. Additionally, none of these prior works studies our problem with the objective of maximizing affinity under multiple-skill and cost constraints. In [8], the authors demonstrate empirically that utility decreases for larger teams, which validates our approach of dividing a group into multiple subgroups obeying the upper critical mass. However, no optimization is proposed there to solve the problem.

In summary, principled optimization opportunities for complex collaborative tasks to maximize collaborative effectiveness under quality and budget constraints are studied for the first time in this work.

9. CONCLUSION
We initiate the study of optimizing collaboration, which naturally fits many complex, human-intensive tasks. We make several contributions: we appropriately adapt various individual and group-based human factors critical to the successful completion of complex collaborative tasks, and propose a set of optimization objectives that incorporate their complex interplay. We then present rigorous analyses to understand the complexity of the proposed problems, along with an array of efficient algorithms with provable guarantees. Finally, we conduct a detailed experimental study using two real-world applications and synthetic data to validate the effectiveness and efficiency of our proposed algorithms. Ours is one of the first formal investigations of optimizing collaborative crowdsourcing. Conducting even larger-scale user studies using a variety of objective functions is one of our ongoing research directions.

APPENDIX
A. REFERENCES
[1] S. Ahmad, A. Battle, Z. Malkani, and S. Kamvar. The jabberwocky programming environment for structured social computing. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pages 53–64. ACM, 2011.

[2] A. Anagnostopoulos, L. Becchetti, C. Castillo, A. Gionis, and S. Leonardi. Power in unity: Forming teams in large-scale community systems. CIKM '10, 2010.

[3] A. Anagnostopoulos, L. Becchetti, C. Castillo, A. Gionis, and S. Leonardi. Online team formation in social networks. WWW '12, 2012.

[4] H. P. Andres. Team cognition using collaborative technology: a behavioral analysis. Journal of Managerial Psychology, 2013.

[5] M. S. Bernstein, G. Little, R. C. Miller, B. Hartmann, M. S. Ackerman, D. R. Karger, D. Crowell, and K. Panovich. Soylent: a word processor with a crowd inside. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, pages 313–322. ACM, 2010.

[6] K. Chai, V. Potdar, and T. Dillon. Content quality assessment related frameworks for social media. In ICCSA, 2009.

[7] D. L. Chen and W. B. Dolan. Building a persistent workforce on mechanical turk for multilingual data collection. In HCOMP, 2011.

[8] M. Chhabra, S. Das, and B. Szymanski. Team formation in social networks. In Computer and Information Sciences III, pages 291–299. Springer, 2013.

[9] A. T. M. B. Daniela Retelny, Sébastien Robaszkiewicz. Expert crowdsourcing with flash teams. In CrowdConf 2013 poster.

[10] J. S. Downs, M. B. Holbrook, S. Sheng, and L. F. Cranor. Are your participants gaming the system?: screening mechanical turk workers. CHI '10.

[11] S. K. et al. Compact location problems. Th. Comp. Sci., 1996.

[12] S. S. R. et al. Facility dispersion problems: Heuristics and special cases. In WADS, 1991.

[13] A. Gajewar and A. D. Sarma. Multi-skill collaborative teams based on densest subgraphs. In SDM, pages 165–176. SIAM, 2012.

[14] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. 1979.

[15] M. Grötschel and L. Lovász. Combinatorial optimization. Handbook of Combinatorics, 2:1541–1597, 1995.

[16] S. Guo, A. G. Parameswaran, and H. Garcia-Molina. So who won?: dynamic max discovery with the crowd. In SIGMOD Conference, pages 385–396, 2012.

[17] N. Guttmann-Beck and R. Hassin. Approximation algorithms for minimum k-cut. Algorithmica, 2000.

[18] G. Hertel. Synergetic effects in working teams. Journal of Managerial Psychology, 2011.

[19] J. Hüffmeier and G. Hertel. When the whole is more than the sum of its parts: Group motivation gains in the wild. Journal of Experimental Social Psychology, 2011.

[20] A. Jøsang, R. Ismail, and C. Boyd. A survey of trust and reputation systems for online service provision. Decis. Support Syst., 43(2):618–644, Mar. 2007.

[21] H. Kaplan, I. Lotosh, T. Milo, and S. Novgorodov. Answering planning queries with the crowd. PVLDB, 6(9):697–708, 2013.

[22] M. Kargar and A. An. Discovering top-k teams of experts with/without a leader in social networks. CIKM '11, 2011.

[23] M. Kargar, A. An, and M. Zihayat. Efficient bi-objective team formation in social networks. In P. Flach, T. Bie, and N. Cristianini, editors, Machine Learning and Knowledge Discovery in Databases, volume 7524 of Lecture Notes in Computer Science, pages 483–498. Springer Berlin Heidelberg, 2012.

[24] M. Kargar, M. Zihayat, and A. An. Finding affordable and collaborative teams from a network of experts.

[25] M. Karpinski. Approximability of the minimum bisection problem: an algorithmic challenge. In Mathematical Foundations of Computer Science, 2002.

[26] D. Katz and R. L. Kahn. The social psychology of organizations. 1978.

[27] R. Kenna and B. Berche. Managing research quality: critical mass and optimal academic research group size. IMA Journal of Management Mathematics.

[28] A. Kittur and R. E. Kraut. Harnessing the wisdom of crowds in wikipedia: quality through coordination. In Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work, CSCW '08, pages 37–46, New York, NY, USA, 2008. ACM.

[29] A. Kittur, J. V. Nickerson, M. Bernstein, E. Gerber, A. Shaw, J. Zimmerman, M. Lease, and J. Horton. The future of crowd work. In CSCW '13, 2013.

[30] A. Kittur, B. Smus, S. Khamkar, and R. E. Kraut. Crowdforge: Crowdsourcing complex work. In UIST, 2011.

[31] D. Klakow and J. Peters. Testing the correlation of word error rate and perplexity. Speech Commun., 38(1):19–28, Sept. 2002.

[32] A. Kulkarni, M. Can, and B. Hartmann. Collaboratively crowdsourcing workflows with turkomatic. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pages 1003–1012. ACM, 2012.

[33] T. Lappas, K. Liu, and E. Terzi. Finding a team of experts in social networks. KDD '09.

[34] T. Lappas, K. Liu, and E. Terzi. Finding a team of experts in social networks. In SIGKDD, pages 467–476, 2009.

[35] W. S. Lasecki, K. I. Murray, S. White, R. C. Miller, and J. P. Bigham. Real-time crowd control of existing interfaces. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, pages 23–32, New York, NY, USA, 2011. ACM.

[36] E. L. Lawler and D. E. Wood. Branch-and-bound methods: A survey. Operations Research, 14(4):699–719, 1966.

[37] G. Little, L. B. Chilton, M. Goldman, and R. C. Miller. Turkit: human computation algorithms on mechanical turk. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, pages 57–66. ACM, 2010.

[38] A. Majumder, S. Datta, and K. Naidu. Capacitated team formation problem on social networks. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '12, pages 1005–1013, New York, NY, USA, 2012. ACM.

[39] G. Marwell, P. E. Oliver, and R. Prahl. Social networks and collective action: A theory of the critical mass. American Journal of Sociology, 1988.

[40] I. B. Myers and M. H. McCaulley. Myers-Briggs Type Indicator: MBTI. Consulting Psychologists Press, 1988.

[41] D. J. Rosenkrantz, G. K. Tayi, and S. S. Ravi. Facility dispersion problems under capacity and cost constraints. J. Comb. Optim., 2000.

[42] S. B. Roy, I. Lykourentzou, S. Thirumuruganathan, S. Amer-Yahia, and G. Das. Crowds, not drones: Modeling human factors in interactive crowdsourcing. In DBCrowd, 2013.

[43] S. B. Roy, I. Lykourentzou, S. Thirumuruganathan, S. Amer-Yahia, and G. Das. Optimization in knowledge-intensive crowdsourcing. CoRR, abs/1401.1302, 2014.

[44] A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, Inc., New York, NY, USA, 1986.

[45] J. Surowiecki. The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. 2004.

[46] S. van Dongen and A. J. Enright. Metric distances derived from cosine similarity and pearson and spearman correlations. CoRR, abs/1208.3145, 2012.

[47] R. Yan, M. Gao, E. Pavlick, and C. Callison-Burch. Are two heads better than one? crowdsourced translation via a two-step collaboration of non-professional translators and editors.

B. PROOFS OF THE THEOREMS AND LEMMAS

    Proof: Theorem 1 - AffAware-Crowd is NP-hard.

Proof. Sketch: Given a collaborative task t, a set of users U, and a real number X, the decision version of the problem asks whether there is a group G of users (G ⊆ U), further partitioned into multiple subgroups, such that the aggregated inter- and intra-distance value of G is ≤ X, and the skill, cost, and critical mass constraints of t are satisfied. Membership verification for the decision version of AffAware-Crowd is clearly polynomial.

To prove NP-hardness, we consider a variant of the compact location problem [11], which is known to be NP-Complete. Given a complete graph G with N nodes, an integer n ≤ N, and a real number X′, the decision version of the problem asks whether there is a complete sub-graph g of size n, such that the maximum distance between any pair of nodes in g is ≤ X′. This variant of the compact location problem is known as Min-DIA in [11].

Our NP-hardness proof takes an instance of Min-DIA and reduces it to an instance of the AffAware-Crowd problem in polynomial time. The reduction works as follows: each node in graph G represents a worker u, and the distance between any two nodes in G is the distance between the corresponding pair of workers in our problem. We assume that the number of skill domains is 1, i.e., m = 1. Additionally, we consider that each worker u has the same skill value of 1 in that domain, i.e., ud = 1, ∀u, and that their cost is 0, i.e., wu = 0, ∀u. Next, we describe the settings of the task t. The task has a quality requirement in only one domain, namely Q1. The skill, cost, and critical mass requirements of t are Q1 = n, C = 0, K = ∞. This creates an instance of our problem in polynomial time. The objective is now to form a group G for task t such that all the constraints are satisfied and the objective function value of AffAware-Crowd is ≤ X′; a solution to the Min-DIA problem exists if and only if a solution to our instance of AffAware-Crowd exists.
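The reduction above can be sketched programmatically as follows; the dictionary field names are illustrative only, not the paper's notation.

```python
def reduce_min_dia(dist, n):
    """Map a Min-DIA instance (complete graph given by distance
    matrix `dist`, target clique size n) to an AffAware-Crowd
    instance, following the proof sketch above."""
    workers = [
        # one skill domain (m = 1), unit skill, zero cost for every worker
        {"id": u, "skills": [1.0], "wage": 0.0}
        for u in range(len(dist))
    ]
    task = {
        "quality": [float(n)],           # Q1 = n forces exactly n unit-skill workers
        "cost": 0.0,                     # C = 0 is satisfiable since every wage is 0
        "critical_mass": float("inf"),   # K = infinity disables the partition constraint
    }
    return workers, task, dist  # pairwise distances carry over unchanged
```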

    Proof: Theorem 2 - Grp is NP-hard.

Proof. Sketch: Given a collaborative task t with a critical mass constraint, a set of users U, and a real number X, the decision version of the problem asks whether there is a group G of users (G ⊆ U) such that the diameter is ≤ X, and the skill and cost constraints of t are satisfied. Membership verification for this decision version of Grp is clearly polynomial.

To prove NP-hardness, we follow a strategy similar to the one above. We use an instance of Min-DIA [11] and reduce it to an instance of Grp, as follows: each node in graph G of Min-DIA represents a worker u, and the distance between any two nodes in G is the distance between the corresponding pair of workers in our problem. We assume that the number of skill domains is



Figure 16: Balanced partitioning in SpltBOpt when the distance satisfies the triangle inequality, for a graph with 6 nodes. The left-hand figure has two partitions ({a, b, c}, {d, e, f}) with 3 nodes in each (red nodes form one partition and blue nodes the other). Intra-partition edges are drawn solid, whereas inter-partition edges are drawn dashed. Assuming K = 4, in the right-hand figure, node d is moved in with a, b, c. This increases the overall inter-partition weight, but by a factor bounded by 2.

Proof. Sketch: For the purpose of illustration, imagine that a graph with n nodes is decomposed into two partitions. Without loss of generality, let partition-1 have n1 nodes and partition-2 have n2 nodes, where n1 + n2 = n, with total weight w. Let K be the upper critical mass, and assume that K > n1 and K > n2. In such a scenario, SpltBOpt will move one or more nodes from the lighter partition to the heavier one, until the latter has exactly K nodes (if both partitions have the same number of nodes, it chooses the move that yields the lower overall weight). Notice that the worst case occurs when some intra-edges with higher weights become inter-edges due to this balancing act. Of course, some inter-edges also get knocked off and become intra-edges. It is easy to see that the number of inter-edges knocked off is always larger than the number of inter-edges added (because the move is always from the lighter partition to the heavier one). The next argument relies heavily on the triangle inequality: in the worst case, every edge added due to balancing can be at most twice the weight of an edge that gets knocked off. Therefore, an optimal solution of SpltBOpt is a 2-approximation for the Splt problem.

An example of such a balancing is illustrated in Figure 16, where n1 = n2 = 3, K = 4. Notice that after this balancing, three inter-edges (ad, bd, cd) are deleted and two inter-edges, each of weight 2, are added. The approximation factor of 2 nevertheless holds, due to the triangle inequality property.
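The balancing move used in this argument can be sketched as follows, assuming the distances are given as a symmetric lookup keyed by node pairs; the function and its greedy tie-breaking are an illustrative helper, not the paper's code.

```python
def rebalance(part_a, part_b, dist, K):
    """Move nodes from the lighter partition into the heavier one
    until the heavier side has exactly K nodes (the upper critical
    mass), as in the proof sketch. `dist` maps frozenset node pairs
    to edge weights."""
    heavy, light = ((set(part_a), set(part_b))
                    if len(part_a) >= len(part_b)
                    else (set(part_b), set(part_a)))
    while len(heavy) < K and light:
        # pick the node whose new intra-edges to the heavy side are
        # cheapest, so the balancing act adds as little weight as possible
        best = min(light, key=lambda v: sum(dist[frozenset((v, u))] for u in heavy))
        light.discard(best)
        heavy.add(best)
    return heavy, light
```

On the Figure 16 example (two partitions of 3 nodes each, K = 4), a single node moves from the lighter side into the heavier one, leaving partitions of sizes 4 and 2.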

Proof: Theorem 6 - The Min-Star-Partition algorithm is a 3-approximation for the SpltBOpt problem.

Proof. Sketch: This result is a direct derivation from prior work [17], which shows that Min-Star-Partition obtains a 3-approximation factor for the Minimum k-cut problem. Recall that SpltBOpt is derived from Minimum k-cut by setting each partition size (possibly except the last one) equal to K nodes, giving rise to a total of ⌈n/K⌉ partitions. After that, the result from [17] directly holds.

Proof: Lemma 4 - Min-Star-Partition is polynomial.

Proof. It can be shown that Min-Star-Partition takes O(n^{x+1}) time, as there are O(n^x) distinct transportation problem instances (corresponding to each of the (n choose x) combinations), and each instance can be solved in O(n) time [17]. Since x is a constant, the overall running time is polynomial.

C. USER STUDY DETAILS
This section of the appendix provides additional results for the user studies in Section 7.1. We present partial results on the distribution of worker profiles for both applications. Additionally, the Stage-2 results of the collaborative document writing application are presented here.

Average Rating
Task            Algorithm            Completeness  Grammar  Neutrality  Clarity  Timeliness  Added-value
MOOCs           Optimal-CDW          4.6           4.5      4.3         4.3      4.3         3.7
                Aff-Unaware-CDW      4.1           4.2      4.2         3.9      3.9         3.0
                CrtMass-Optimal-10   4.0           4.1      4.2         3.9      3.9         3.5
Smartphone      Optimal              4.8           4.6      4.7         4.1      4.2         4.2
                Aff-Unaware          4.1           4.1      4.2         4.2      3.9         3.3
                CrtMass-Optimal-10   4.0           3.9      3.8         4.1      3.9         3.3
Top-10 places   Optimal              4.4           4.2      4.3         4.2      4.3         4.3
                Aff-Unaware          3.9           3.8      3.7         3.6      3.3         2.9
                CrtMass-Optimal-10   3.9           4.0      4.1         4.0      3.9         3.9

Table 5: Stage-3 results of the document writing application in Section 7.1: Quality assessment of the tasks completed in Stage-2 is performed by a new set of 60 AMT workers on a scale of 1–5. For all three tasks, the results clearly demonstrate that effective collaboration leads to better task quality. Even though all three groups (assigned to the same task) surpass the skill threshold and satisfy the wage limit, our proposed formalism Optimal enables better team collaboration, resulting in higher-quality articles.

[Figure: distribution of worker skill, shown as the percentage of workers per skill value.]